id
stringlengths 36
36
| source
stringclasses 15
values | formatted_source
stringclasses 13
values | text
stringlengths 2
7.55M
|
|---|---|---|---|
3755dbcf-e257-4d0a-941d-ba18c1c1c3ab
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Provability Counterfactuals vs Three Axioms of Galles and Pearl
|
d6846aba-284f-40c2-99fa-b25be4e4b3fb
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Clarifications about structural risk from AI
*Aim: to give some clarifications about ‘structural risk’ from AI that we have personally found helpful. Most of these draw directly from earlier work by Remco Zwetsloot and Allan Dafoe. We’re sharing them in case they’re also helpful to others.*
*Audience: people who want more surface area on the concept of ‘structural risk’. Could also be helpful to those interested in sources of AI risk in general.*
*Acknowledgements: this was written collaboratively with Jess Whittlestone. Many of the clarifications in this post come from [this talk](https://www.youtube.com/watch?v=gHEzPAJEVMA) by Remco Zwetsloot. Thanks also to Ben Garfinkel for a helpful conversation and Allan Dafoe for feedback on a related piece.*
When talking about risks from AI, people often discuss either ‘accident risks’ i.e. risks from AI systems behaving in unintended ways, or ‘misuse risks’, i.e. risks from AI systems being used for some malicious purpose. However,this categorisation misses a great deal: technology tends to have complex and indirect effects, and can cause harm even when no single actor deliberately misuses it and it behaves as intended (e.g. the effect of fossil fuels on climate change). The concept of ‘structural risk’ from AI [has](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure) [been](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact#Misuse_Risks__Accident_Risks__Structural_Risks) [introduced](https://arxiv.org/abs/1911.03216) to cover such possibilities.
We believe this is an important point, and broadly agree with the core claims of existing work on structural risk.
However, we have noticed discussion where the concept has been used to refer to somewhat different ideas (e.g. [in](https://www.alignmentforum.org/posts/Ni8ocGupB2kGG2fA7/agi-safety-from-first-principles-conclusion#:~:text=we%20might%20also,into%20this%20category.) [these](https://www.alignmentforum.org/posts/w6BtMqKRLxG9bNLMr/the-catastrophic-convergence-conjecture#:~:text=Also%2C%20we%27re%20implicitly%20considering%20the%20simplified%20frame%20of%20a%20single%20smart%20AI%20affecting%20the%20world%2C%20and%20not%20structural%20risk%20via%20the%20broader%20consequences%20of%20others%20also%20deploying%20similar%20agents.) [forum](https://forum.effectivealtruism.org/posts/Z5KZ2cui8WDjyF6gJ/some-thoughts-on-toby-ord-s-existential-risk-estimates#fnref-FujJ6ynY5AZfaHgi7-4:~:text=%22Non-agentic%22%20AI%20systems%20which%20create%20%22structural,such%20as%20by%20destabilising%20nuclear%20strategies.) [posts](https://forum.effectivealtruism.org/posts/6h3a9bvJ2uYBfWxEM/ama-markus-anderljung-pm-at-govai-fhi-1#intercome-outer-frame:~:text=I%20think%20the%20majority%20of%20AI,value%20of%20labour%20undermining%20liberal%20values.)). This doesn't really matter if you're just trying to illustrate the broad point that technology can cause harm even without malicious intent or incompetence, or that analysing the incentives of different actors can reveal important risk reduction interventions.
But if you're wanting to make claims about (e.g.) the proportion of AI x-risk which is structural, or how much to prioritise work on reducing structural AI risk, then it's important to be clear about what concept you're referring to.
In this post, we give some clarifications about structural risk from AI that we hope will improve the rigour of discussion about structural risk, when such rigour is useful.
Structural risk is - first and foremost - intended to be a perspective you can take, rather than a fixed list or category of risks
----------------------------------------------------------------------------------------------------------------------------------
Taking a structural perspective (or "lens") on some risk means examining how that risk may be caused or influenced by structural factors, i.e. incentives which make actors (even competent and well-intentioned ones) more likely to take actions which result in harm.[[1]](#fn-Xu65wrLdJLjGF3KWc-1)
To give a simple analogy: if you’re trying to understand and prevent avalanches, a structural perspective would focus on factors such as the steepness of hiking trails, rather than on preventing particular actors from setting off an avalanche. This might be a more effective approach to mitigating risk, because there are many actors who might set it off, and you need to stop *all* of them to prevent catastrophe, which is probably very hard.[[2]](#fn-Xu65wrLdJLjGF3KWc-2)
Note that talking about taking a structural “perspective” on risk doesn’t mean there is a fixed list or category of risks that can be described as “structural”.[[3]](#fn-Xu65wrLdJLjGF3KWc-3)
It doesn’t necessarily mean that “structural risks” are disjoint from "accident" or "misuse" risks. In fact, it can be illuminating to take a structural perspective to understand both accident and misuse risks (as we'll see later). If “structure” is merely a useful perspective or lens for thinking about risk, it also doesn't make sense to talk about (e.g.) the proportion of AI x-risk which is "structural", or how much to prioritise work on reducing "structural risk" (because *any* risk could be analysed using the structural perspective). Instead, you could talk about how important structural causes are for a given AI x-risk, or how much efforts to shape the incentives of different actors would reduce some AI x-risk, compared to other interventions.
We found this distinction helpful, because we noticed we were getting confused about where the bounds lay around “structural risk” as a category, especially where classically considered accident or misuse risks might have structural causes, such as AI developers having incentives to skimp on safety mechanisms. Thinking of structure as more of a “perspective” that can be illuminating when thinking about risk helped reduce this confusion.
That said, it does sometimes still seem useful to talk about specific types of risk which arise mostly from structural factors.
However, there are (two) interesting categories of AI risk which are illuminated by taking a structural perspective
-------------------------------------------------------------------------------------------------------------------
Note that these categories also aren't disjoint from "misuse" and "accident" risks, nor are they intended to be - they are simply another useful way to carve up the space of risks from AI.[[4]](#fn-Xu65wrLdJLjGF3KWc-4)[[5]](#fn-Xu65wrLdJLjGF3KWc-5)
### AI risks with structural causes
We've already talked about structural causes - incentives which make actors (even competent and well-intentioned ones) more likely to take actions which have bad outcomes. Here are some possible AI risks with structural causes:[[6]](#fn-Xu65wrLdJLjGF3KWc-6)
* Dangerous tradeoffs between safety and performance
+ E.g. the Uber 2018 self-driving car crash, where engineers disabled an emergency brake that they worried would cause the car to behave overly cautiously and look worse than competitor vehicles. This decision to trade off safety for performance led to a crash and the passenger's death.
+ Note that this could also be well-described as an "accident risk" (there was some incompetence on behalf of the engineers, along with the structural causes).
* Reputational incentives and/or publication requirements leading to the diffusion of models (or techniques for training them) that can be misused.
+ E.g. concerns about the diffusion of large language models which then get used to generate mass misinformation.
+ Note that this could also be well-described as a "misuse risk" (there was some malintent on the behalf of those generating the misinformation, along with the structural causes).
* Some [slow takeoff AI alignment failure stories](https://www.lesswrong.com/posts/qYzqDtoQaZ3eDDyxa/distinguishing-ai-takeover-scenarios#Slow_scenarios) in which competitive pressures play a key role in causing AI systems to gradually gain control over the future.
+ E.g., [What failure looks like part 1](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_I__You_get_what_you_measure) or [Another (outer) alignment failure story](https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story)
+ In these stories, competent actors without any malintent are incentivised to gradually deploy and hand over control to increasingly advanced systems, because that's the only way to remain economically and militarily competitive.
+ So, note that these risks can play out without any malintent or incompetence - so they are in fact disjoint from misuse and accident risks.
### ‘Non-AI’ risks partly caused by AI
Some risks that don't really seem to be "AI risks"—in the sense that the proximate cause of harm need not have anything to do with AI—have structural causes related to AI.[[7]](#fn-Xu65wrLdJLjGF3KWc-7) Some examples:
* Large language models make it cheaper/easier to create mass misinformation, incentivising bad actors to do so, which erodes epistemic security (e.g. it becomes much harder to trust information online), making coordinated responses to global crises more difficult.[[8]](#fn-Xu65wrLdJLjGF3KWc-8)
* AI enables and incentivises faster development in risky areas of science/technology (e.g. biotech and APM), and these technologies get into the hands of bad actors who do a lot of damage.
* AI improves data collection and processing techniques, allowing states to discover and sabotage each other's (previously secure) nuclear launch facilities. This undermines states' second strike capabilities, and therefore the foundations of nuclear strategic stability (based on mutually assured destruction), making nuclear war more likely.
* AI increases payoffs from building surveillance systems, leading to an erosion of privacy.
* AI increases returns to scale in production (e.g. because it makes [coordination within companies easier](https://www.lesswrong.com/posts/gYaKZeBbSL4y2RLP3/strategic-implications-of-ais-ability-to-coordinate-at-low)) leading to more monopolistic markets.
Notice that the first two of these risks are caused by some amount of malintent (as well as structural causes), whereas the latter three need not involve any malintent or incompetence (so they are disjoint from misuse and structural risks).
---
1. We use ‘structural factors’ and ‘structural causes’ synonymously. [↩︎](#fnref-Xu65wrLdJLjGF3KWc-1)
2. Note that this is essentially the same idea as Andrew Critch's concept of a [Robust Agent-Agnostic Process](https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic). [↩︎](#fnref-Xu65wrLdJLjGF3KWc-2)
3. Of course, you could choose to define the category of structural risk as risks where structural causes are especially important - but if so, this should be made clear, and note that the category would have vague boundaries. [↩︎](#fnref-Xu65wrLdJLjGF3KWc-3)
4. However - slightly confusingly - there *are* some *specific risks* within each category which are neither accidents nor malicious use. So, these two categories can be thought of as overlapping with "misuse" and "accident" risks. We’ll see this in the examples. [↩︎](#fnref-Xu65wrLdJLjGF3KWc-4)
5. This section draws very directly from Zwetsloot and Dafoe’s original [piece](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure) on structural AI risk; we just add some extra examples that were clarifying to us. [↩︎](#fnref-Xu65wrLdJLjGF3KWc-5)
6. In the [article](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure) that introduced the idea of structural risks from AI, this category of risk was called “Structure’s effect on AI”. [↩︎](#fnref-Xu65wrLdJLjGF3KWc-6)
7. In the [article](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure) that introduced the idea of structural risks from AI, this category of risk was called “AI’s effect on structure”. [↩︎](#fnref-Xu65wrLdJLjGF3KWc-7)
8. For a relevant precedent to this kind of risk, it’s plausible that a lack of credible bipartisan information sources increased vaccine and mask hesitancy in Covid-19. [↩︎](#fnref-Xu65wrLdJLjGF3KWc-8)
|
620b39ae-3ae2-45d6-9d70-602fd3058bb9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Best Visualizations on Every Subject
Edit: the list is now a public GitHub repository, with all that implies. Added sections by media type. Last update: 6 Jan 2021
Motivation
This is The Best Textbooks on Every Subject, but for visualizations. I greatly adore good visualizations, chiefly because there are so many visualizations that are so terrible. I have seen many such tools mentioned here, but always in passing.
The actual motivator is re-reading the posts Exercises in Comprehensive Information Gathering and Fact Posts: How and Why. While there is no substitute for the wrench-time they recommend, I think these kinds of tools make the process more efficient and lend themselves to insights which are difficult to acquire through reading alone; in my experience scale and distance are both easier to grasp in a visual medium, for example.
Also there is a non-trivial sense in which they are beautiful in their own right. If we are able to compare many examples, people in the community might even be able to help advance the art.
Submission Rules
One nomination per comment; please include an explanation of why you nominated it. Contra the best textbooks list we won't require comparison with other visualizations because there are so few authoritative ones.
Current List
WEB:
History
* ORBIS: The Geospatial Network Model of the Roman World: https://orbis.stanford.edu/
* Data Visualization and the Modern Imagination: https://exhibits.stanford.edu/dataviz
* A Simulated Dendrochronology of Immigration 1790-2016: https://web.northeastern.edu/naturalizing-immigration-dataviz/
Math
* Byrne's Euclid: The First Six Books of the Elements of Euclid With Coloured Diagrams and Symbols: https://www.c82.net/euclid/
* The Empirical MetaMathematics of Euclid and Beyond: https://writings.stephenwolfram.com/2020/09/the-empirical-metamathematics-of-euclid-and-beyond/
* An Interactive Introduction to Fourier Transforms: http://www.jezzamon.com/fourier/
* Better Explained: https://betterexplained.com/
* Jason Dav
|
19b9da72-1f94-4a62-a34e-448fb4f012d2
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Training Language Models with Language Feedback
1 Introduction
---------------

Figure 1: An overview of our algorithm for learning from natural language feedback.
Language Models (LMs) achieve strong performance across diverse NLP tasks, from summarization to question answering and conversational assistants (Radford and Narasimhan, [2018](#bib.bib29); Radford et al., [2019](#bib.bib30); Brown et al., [2020](#bib.bib3); Rae et al., [2021](#bib.bib31), interalia). A key problem with LMs is that they generate text that violates human preferences, such as LM-generated misinformation Lin et al. ([2021](#bib.bib19)), offensive language Gehman et al. ([2020](#bib.bib8)), and factually incorrect outputs such as summaries Stiennon et al. ([2020](#bib.bib36)).
Current methods alleviate such issues by training LMs to generate text that scores highly according to human preferences, or a predictive model thereof Ziegler et al. ([2019](#bib.bib43)); Stiennon et al. ([2020](#bib.bib36)); Nakano et al. ([2021](#bib.bib23)); Ouyang et al. ([2022](#bib.bib26)). In this line of work, human evaluators indicate their preferences by comparing text outputs. However, each comparison provides little information per evaluation about human preferences.
We propose to use natural language feedback, which contains more information per evaluation. We introduce a three-step learning algorithm, as shown in [Figure 1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Training Language Models with Language Feedback"). First, we condition an LM on an input, model-generated output, and human-written feedback to sample many possible refinements of the output. Second, we choose the refinement with the highest embedding-based similarity with the feedback. Third, we finetune an LM on the chosen refinements. Our algorithm departs from prior work, which uses reinforcement learning methods (Ziegler et al., [2019](#bib.bib43), inter alia) or auxiliary losses Stacey et al. ([2021](#bib.bib35)) that cannot be straightforwardly generalized to using natural language feedback.
We validate our algorithm on a carefully-controlled synthetic task of removing offensive words from a sentence with GPT-3-based models (Brown et al., [2020](#bib.bib3); Ouyang et al., [2022](#bib.bib26)).
We find that only the largest GPT-3-based models (175B parameters) accurately refine outputs.
Using the above insight, we use the largest GPT-3 models to test our algorithm on text summarization, following Stiennon et al. ([2020](#bib.bib36)).
A model trained with our algorithm generates summaries that human evaluators prefer to human reference summaries ∼similar-to\sim∼51% of the time.
We obtain these results when learning from only 100100100100 samples of natural language feedback.
Our analysis shows that LM-generated refinements typically incorporate the feedback, especially when choosing the refinement with the highest similarity with the feedback.
Our results suggest that natural language feedback is a promising avenue for learning from human preferences.
2 Method
---------
Here, we define our problem formulation more formally.
Given an input x𝑥xitalic\_x, we seek to generate an output y𝑦yitalic\_y that is high quality according to human preference judgments.
We assume access to natural language feedback f𝑓fitalic\_f on an initial model-generated output y𝑦yitalic\_y given the input x𝑥xitalic\_x.
To tackle the above problem, we leverage the ability of pretrained LMs to follow instructions Radford et al. ([2019](#bib.bib30)); Sanh et al. ([2021](#bib.bib34)); Wei et al. ([2022](#bib.bib40)); Ouyang et al. ([2022](#bib.bib26)).
We assume access to an LM that takes an input (e.g., a task instruction) and produces a distribution over text completions (e.g., a task output).
We instruct the LM to refine the initial output y𝑦yitalic\_y given the input x𝑥xitalic\_x and feedback f𝑓fitalic\_f.
We then sample N𝑁Nitalic\_N refinements y1′,…,yN′subscriptsuperscript𝑦′1…subscriptsuperscript𝑦′𝑁y^{\prime}\_{1},\dots,y^{\prime}\_{N}italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT from the LM.
Refinements may vary in quality, so we introduce a function S𝑆Sitalic\_S that scores refinements for how effectively they incorporate feedback.
We choose the refinement with the highest score from S𝑆Sitalic\_S and finetune a model on all chosen y′superscript𝑦′y^{\prime}italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT given x𝑥xitalic\_x. We use the resulting model to generate outputs at test time.
| | | | | |
| --- | --- | --- | --- | --- |
| Models |
Ada
(∼350similar-toabsent350\sim 350∼ 350M)
|
Babbage
(∼1.3similar-toabsent1.3\sim 1.3∼ 1.3B)
|
Curie
(∼6.7similar-toabsent6.7\sim 6.7∼ 6.7B)
|
Davinci
(175175175175B)
|
| GPT-3 | 1.0±0.3plus-or-minus1.00.31.0\pm 0.31.0 ± 0.3 | 1.1±0.3plus-or-minus1.10.31.1\pm 0.31.1 ± 0.3 | 8.7±0.8plus-or-minus8.70.88.7\pm 0.88.7 ± 0.8 | 38.5±1.3plus-or-minus38.51.338.5\pm 1.338.5 ± 1.3 |
| InstructGPT | 1.6±0.3plus-or-minus1.60.31.6\pm 0.31.6 ± 0.3 | 2.5±0.4plus-or-minus2.50.42.5\pm 0.42.5 ± 0.4 | 5.4±0.6plus-or-minus5.40.65.4\pm 0.65.4 ± 0.6 | 35.6±1.3plus-or-minus35.61.335.6\pm 1.335.6 ± 1.3 |
Table 1: We report the accuracy in %percent\%% with the standard error. On the task of removing offensive words from a sentence, only large LMs incorporate feedback.
3 Experiments
--------------
###
3.1 Can Language Models Use Feedback?
For our algorithm to work, LMs must be able to accurately incorporate feedback to generate refinements.
Thus, we first validate our algorithm on a carefully-controlled synthetic task of removing specific offensive words from a given sentence.
We examine how effective various models are at incorporating feedback, to determine what model to use for refining outputs.
##### Experimental Setup
We instruct an LM to refine an automatically-generated sentence with ≤10absent10\leq 10≤ 10 offensive words by removing ≤3absent3\leq 3≤ 3 specific words (see Appendix [B](#A2 "Appendix B Targeted Word Removal Details ‣ Training Language Models with Language Feedback") for a detailed explanation and examples). We evaluate how often the generated refinement exactly matches the target sentence, which we also automatically generate. For our LMs, we use differently-sized GPT-3 models (Brown et al., [2020](#bib.bib3)) and their finetuned, InstructGPT counterparts (Ouyang et al., [2022](#bib.bib26)).111Via the [OpenAI API](https://beta.openai.com/). OpenAI does not disclose the size of the provided models, so we use estimates from [Eleuther](https://blog.eleuther.ai/gpt3-model-sizes/). We report all hyperparameters used in Appendix [E](#A5 "Appendix E Hyperparameters ‣ Training Language Models with Language Feedback").
We report mean and std. error for all results in our work.
##### Results
Table [1](#S2.T1 "Table 1 ‣ 2 Method ‣ Training Language Models with Language Feedback") shows the results. We observe that only the largest GPT-3 and InstructGPT models (175B parameters) incorporate feedback a non-negligible amount of time. Using this insight, we only use the 175B parameter (Davinci) models in the rest of our experiments.
###
3.2 Text Summarization
####
3.2.1 Experimental Setup
##### Generating Refinements
We now evaluate our algorithm on the real-world task of text summarization.
We follow prior work on learning from human preferences (Stiennon et al., [2020](#bib.bib36)) and learn to summarize Reddit posts from Völske et al. ([2017](#bib.bib39)).
We take 100100100100 samples from the Reddit data subset used in Stiennon et al. ([2020](#bib.bib36)).
We use InstructGPT (175B) to generate initial summaries and refinements, using the instructions in Appendix [F](#A6 "Appendix F Prompt Templates ‣ Training Language Models with Language Feedback").
We then write feedback f𝑓fitalic\_f on the initial summary y𝑦yitalic\_y given the Reddit post x𝑥xitalic\_x, and we generate possible refinements y1′,…,y20′subscriptsuperscript𝑦′1…subscriptsuperscript𝑦′20y^{\prime}\_{1},\dots,y^{\prime}\_{20}italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 20 end\_POSTSUBSCRIPT.
##### Scoring Refinements
We choose a refinement with a scoring function S𝑆Sitalic\_S that scores refinements for how effectively they incorporate feedback.
For S𝑆Sitalic\_S, we use contrastive pre-trained text embedding function ℰℰ\mathcal{E}caligraphic\_E (Neelakantan et al., [2022](#bib.bib24)) to embed the feedback f𝑓fitalic\_f and refinements y1′,…,y20′subscriptsuperscript𝑦′1…subscriptsuperscript𝑦′20y^{\prime}\_{1},\dots,y^{\prime}\_{20}italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 20 end\_POSTSUBSCRIPT222We use [OpenAI’s API](https://beta.openai.com/) to access the embeddings..
We then choose the refinement with the highest cosine similarity score with the feedback.
We opted for high similarity with the feedback because feedback often describes what the ideal or improved text would look like.
We refer to refinements generated with the above algorithm as Refinement with Feedback + Best of N.
##### Finetuning
We finetune GPT-3 (175B; Brown et al., [2020](#bib.bib3))333InstructGPT cannot yet be finetuning via OpenAI’s API. on refinements generated by Refinement with Feedback + Best of N. We compare against finetuning on Initial Summaries generated with InstructGPT.
We also compare against summaries generated directly by InstructGPT and GPT-3 (175B). We use the same instructions as for Initial Summaries (in Appendix [F](#A6 "Appendix F Prompt Templates ‣ Training Language Models with Language Feedback")) and provide the post and its title.
##### Evaluation
We test on 100100100100 unseen Reddit posts from the same dataset and conduct human evaluations for all experiments.444We plan to conduct larger-scale human evaluations in the future, to confirm our initial findings. Evaluators rank the summaries according to the rubric in Appendix [C](#A3 "Appendix C Human Feedback and Evaluation ‣ Training Language Models with Language Feedback"), with ties allowed.
We show the win rate of an algorithm, counting ties as a half win, similar to Kendall rank correlation.555[Kendall Rank correlation](https://tinyurl.com/ba9mh4cy). We refer to Appendix [C](#A3 "Appendix C Human Feedback and Evaluation ‣ Training Language Models with Language Feedback") for a description of all human evaluation and feedback annotation procedures and Appendix [D](#A4 "Appendix D Details about Ranking Procedure ‣ Training Language Models with Language Feedback") for more details about the ranking scheme.

Figure 2: How often human evaluators prefer summaries from our learning algorithm and baselines to
Human Summaries. Our proposed algorithm (leftmost bar) generates summaries of a similar quality to human summaries.
####
3.2.2 Main Results
Fig. [2](#S3.F2 "Figure 2 ‣ Evaluation ‣ 3.2.1 Experimental Setup ‣ 3.2 Text Summarization ‣ 3 Experiments ‣ Training Language Models with Language Feedback") reports the win rate of our learning algorithm over Human summaries and Appendix Fig. [5](#A0.F5 "Figure 5 ‣ Training Language Models with Language Feedback") reports the win rate over InstructGPT.
Finetuning on Refinement with Feedback + Best of N generates summaries on par with human summaries, with a win rate of 51.0±5.0%plus-or-minus51.0percent5.051.0\pm 5.0\%51.0 ± 5.0 % over human summaries.
In contrast, all baselines underperform human summaries, with win rates of 19.0±3.9%plus-or-minus19.0percent3.919.0\pm 3.9\%19.0 ± 3.9 % (GPT-3), 35.0±4.8%plus-or-minus35.0percent4.835.0\pm 4.8\%35.0 ± 4.8 % (InstructGPT), 44.0±5.0%plus-or-minus44.0percent5.044.0\pm 5.0\%44.0 ± 5.0 % (finetuning on Initial Summaries)).
In particular, our approach achieves a win rate of 57.0±5.0%plus-or-minus57.0percent5.057.0\pm 5.0\%57.0 ± 5.0 % over the strongest baseline, finetuning on Initial Summaries.
Our result suggests that our learning algorithm produces higher-quality summaries by finetuning on the higher-quality targets (our refinements).
Overall, we achieve strong results on summarization while learning from only 100100100100 samples of human-written feedback.


Figure 3: Left: How often human evaluators prefer summaries from
each refinement method to the Initial Summaries (from InstructGPT). Refinement with Feedback improves on the Initial Summaries and outperforms human summaries with Best of N sampling. Right: Refining with feedback generally does incorporate specific point(s) mentioned in the feedback.
####
3.2.3 Analysis
We now aim to examine the importance of various aspects of our algorithm for generating high-quality refinements (before finetuning).
We evaluate Refinement with Feedback, which randomly chooses a refinement ∈y1′,…,y20′absentsubscriptsuperscript𝑦′1…subscriptsuperscript𝑦′20\in{y^{\prime}\_{1},\dots,y^{\prime}\_{20}}∈ italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 20 end\_POSTSUBSCRIPT.
This ablation helps to evaluate the importance of choosing a refinement with our scoring function S𝑆Sitalic\_S.
We also evaluate Refinement without Feedback, which instructs the LM to refine the initial summary but without feedback.
This ablation helps to evaluate the importance of using the feedback.
Lastly, we evaluate Human Summaries, i.e., summaries written by Reddit users on their own posts, and Initial Summaries, i.e., the initial summary y𝑦yitalic\_y generated by the LM.
See Appendix [F](#A6 "Appendix F Prompt Templates ‣ Training Language Models with Language Feedback") for concrete examples of the instructions that we use.
Fig. [3](#S3.F3 "Figure 3 ‣ 3.2.2 Main Results ‣ 3.2 Text Summarization ‣ 3 Experiments ‣ Training Language Models with Language Feedback") (left) shows the win rates of refinements from various methods against Initial Summaries. Refinement with Feedback + Best of N improves over the Initial Summaries, with our algorithm being preferred 67.0±3.1%plus-or-minus67.0percent3.167.0\pm 3.1\%67.0 ± 3.1 % of the time. Our algorithm is preferred 54.0±3.5%plus-or-minus54.0percent3.554.0\pm 3.5\%54.0 ± 3.5 % of the time to human summaries, while Initial Summaries are significantly worse than human summaries, preferred only 39.3±3.4%plus-or-minus39.3percent3.439.3\pm 3.4\%39.3 ± 3.4 % of the time. Appendix Fig. [4](#A0.F4 "Figure 4 ‣ Training Language Models with Language Feedback") shows win rates of refinements generated with various methods against Human Summaries and Appendix Fig. [6](#A1.F6 "Figure 6 ‣ Appendix A Additional Results ‣ Training Language Models with Language Feedback") shows that refinements are more helpful when the initial summary is of lower quality. We also refer to Appendix [G](#A7 "Appendix G Examples ‣ Training Language Models with Language Feedback") for 10101010 random examples of Initial Summaries, feedback, and refinements from various methods. Overall, using feedback and scoring refinements are both important steps for generating high-quality refinements of the initial output.
Here, we examine whether refinements are of higher quality because they incorporate the feedback, rather than by improving the summary in other ways.
To do so, we evaluate how often the refinements incorporate the human-written feedback.
We evaluate (1) how often ≥1absent1\geq 1≥ 1 point mentioned in the feedback is incorporated in the refinement, (2) how often >1absent1>1> 1 point is incorporated, and (3) how often all of the feedback is incorporated.
In Fig. [3](#S3.F3 "Figure 3 ‣ 3.2.2 Main Results ‣ 3.2 Text Summarization ‣ 3 Experiments ‣ Training Language Models with Language Feedback") (right), we see that our algorithm incorporates ≥1absent1\geq 1≥ 1 feedback point 72.0±4.5%plus-or-minus72.0percent4.572.0\pm 4.5\%72.0 ± 4.5 % of the time, showing that LMs are able to incorporate feedback with high accuracy. Refinements without Feedback only incorporates at least one feedback point 15.0±3.6%plus-or-minus15.0percent3.615.0\pm 3.6\%15.0 ± 3.6 % of the time. Our results suggest that refinements are high-quality because they incorporate specific points in the feedback.
4 Additional Related Work
--------------------------
Existing work in NLP primarily investigates using explanations for labeled outputs to classification tasks.
In contrast, we do not assume access to gold-labeled outputs, and we study the more general text generation setting, which classification tasks can be formulated as Radford et al. ([2019](#bib.bib30)); Raffel et al. ([2020](#bib.bib32)); Brown et al. ([2020](#bib.bib3)).
Explanations describe why a labeled output is correct, while feedback describes how to improve a candidate output.
Prior work explores ways of using explanations to train text classification models, with mixed results (Camburu et al., [2018](#bib.bib4); Stacey et al., [2021](#bib.bib35); Pruthi et al., [2021](#bib.bib28); Wiegreffe et al., [2021](#bib.bib42); Hase and Bansal, [2021](#bib.bib11); Lampinen et al., [2022](#bib.bib14), inter alia).
A few prior works also learn from language feedback, for the purpose of ranking candidate outputs rather than generating outputs (Weston, [2016](#bib.bib41); Li et al., [2016](#bib.bib15); Hancock et al., [2019](#bib.bib10); Li et al., [2022](#bib.bib16)).
Matiana et al. ([2021](#bib.bib22)) learn text embeddings of language feedback, where improvements could benefit the refinement-scoring step of our algorithm.
Outside of text domains, there is abundant work in reinforcement learning that leverages language in various ways (see Luketina et al., [2019](#bib.bib20), for an overview).
Prior work uses language to specify the task (“instruction following” Chaplot et al., [2017](#bib.bib5); Mahmoudieh et al., [2022](#bib.bib21); Ouyang et al., [2022](#bib.bib26), inter alia), drive exploration (Tam et al., [2022](#bib.bib38)), infer reward functions (Lin et al., [2022](#bib.bib18); Sumers et al., [2021](#bib.bib37); Fidler et al., [2017](#bib.bib7), inter alia), and train the model via strong supervision Andreas et al. ([2017](#bib.bib1)); Kaplan et al. ([2017](#bib.bib13)), reward shaping Goyal et al. ([2019](#bib.bib9)), or purely with language by providing descriptions of trajectories (Nguyen et al., [2021](#bib.bib25)).
In contrast, we use language to correct faulty behavior.
Other work uses language feedback at test time to correct mistakes in a model’s behavior, for e.g. image segmentation (Rupprecht et al., [2018](#bib.bib33)) or code generation Elgohary et al. ([2020](#bib.bib6)); Austin et al. ([2021](#bib.bib2)).
In contrast, we use feedback to train models, and our approach does not require human intervention at test time.
5 Conclusion
-------------
In this work, we proposed an algorithm for training LMs to behave in line with human preferences, by learning from natural language feedback.
We validated our approach on a carefully-controlled word-removal task, showing that only large LMs (175B parameters) accurately incorporate feedback.
Using this insight, we then tested our algorithm on the real-world task of text summarization.
Our finetuning algorithm brought a GPT-3 model to roughly human-level summarization ability, using only 100 samples of human feedback.
Language feedback is a natural form of communicating with models which may make it easier for many people to provide informative, high-quality feedback.
In the long run, our work suggests many exciting avenues for future work, e.g., in guiding models with language feedback in other domains from code generation to conversational assistance.
6 Acknowledgements
-------------------
We are grateful to Nat McAleese, Geoffrey Irving, Jeff Wu, Sam Bowman, Daniel Ziegler, Seraphina Nix, and Lennart Heim for helpful conversations and feedback.
Jérémy Scheurer and Jun Shern Chan thank Open Philanthropy for funding that enabled this research.
Ethan Perez thanks the National Science Foundation and Open Philanthropy for fellowship support.
Jon Ander Campos is supported by a doctoral grant from the Spanish MECD.
Angelica Chen and Kyunghyun Cho are supported by the NYU Center for Data Science National Science Foundation (Award 1922658).
Kyunghyun Cho is also supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI).
We also thank OpenAI for providing access and credits to their models via the API Academic Access Program.
|
1cade0b6-f2b5-4521-a4ba-c78fb396e8a2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Existential biotech hazard that was designed in the 90s?
Does anyone know something about this alteration of Klebsiella planticola? Paywalled paper here. (If someone has got access please PM me, I would like to read the paper to write a more fleshed out article.)
While I am not convinced that it would really have spread to every terrestrial ecosystem, or even every wheat field and I am not even sure if it could compete successfully with the wild type, I certainly would not bet the world on that. Even if it might only have become a nasty crop bug instead of an ecosystem killer, I think this may be the closest encounter with a true existential risk we have had so far. This suggests, that even our current low end biotech may be the greatest existential risk we face at the moment. Or is this just hyped bullshit for some reason I do not see right now (without reading the paper)?
Edit: Upon reading the original paper I am quite sure Cracked.com greatly exagerated the potential threat. 10^8 cfu (colony formin units) K. planticolata per gram soil (dry weight) was added on day 0, but after 8 weeks only 10^2 cfu survived (this is true for both wild type and modified K. planticolata). This suggests, that K. planticolata in the wild has typical densities more like 10^2 cfu per g than 10^8 cfu per g. 10^2 cfu per g is nowhere near enough to produce lethal ethanol concentrations in the soil, even if the modified strain could compete in the wild. Furthermore the concentration of the modified K. planticolata decreased faster than the concentration of the wild type suggesting reduced fitness of the GMO. On the other hand after 8 weeks both K. planticolata strains arrived at the same density of 100 cfu per g indicating comparable medium term survivability in unsterilized soil (I am not sure if indigenous K. planticolata which could compete with the GMO was present in the soil sample used). Yes, they did avoid the obvious failure mode of not differentiating between wild type and modified K. planticolata during recovery of K. planticola
|
45491196-32e4-466c-8787-b58850a8c458
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Formal definition
Meta tag for pages which give formal or brief jargon-heavy technical definitions of a concept. Formal definitions should be secondary [lenses](https://arbital.com/p/17b) when explanations are available.
|
9ced9a85-62d9-4f49-9cab-54857c52ca9d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Prediction should be a sport
So, I've been thinking about prediction markets and why they aren't really catching on as much as I think they should.
My suspicion is that (beside Robin Hanson's signaling explanation, and the amount of work it takes to get to the large numbers of predictors where the quality of results becomes interesting) the basic problem of prediction markets is that they look and feel like gambling. Or at best like the stock market, which for the vast majority of people is no less distasteful.
Only a small minority of people are neither disgusted by nor terrified of gambling. Prediction markets right now are restricted to this small minority.
Poker used to have the same problem.
But over the last few decades Poker players have established that Poker is (also) a sport. They kept repeating that winning isn't purely a matter of luck, they acquired the various trappings of tournaments and leagues, they developed a culture of admiration for the most skillful players that pays in prestige rather than only money and makes it customary for everyone involved to show their names and faces. For Poker, this has worked really well. There are much more Poker players, more really smart people are deciding to get into Poker and I assume the art of game probably improved as well.
So we should consider re-framing prediction the same way.
The calibration game already does this to a degree, but sport needs competition, so results need to be comparable, so everyone needs to make predictions on the same events. You'd need something like standard cards of events that players place their predictions on.
Here's a fantasy of what it could look like.
* Late in the year, a prediction tournament starts with the publication of a list of events in the coming year. Everybody is invited to enter the tournament (and maybe pay a small participation fee) by the end of the year, for a chance to be among the best predictors and win fame and prizes.
* Everyone who enters plays the calibration game on th
|
50ba2f79-f1b1-4085-b18c-db6b9e18b5b8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
On the Care and Feeding of Young Rationalists -- Revisited[Draft] [Request for Feedback]
Planned top-level post -- any feedback very much welcome.
Obviously a followup to: On the Care and Feeding of Young Rationalists
My very first top-level post on LW was a solicitation for advice/feedback/discussion on the topic of rationalist parenting. I'd like to revisit the topic now.
Goals
First of all, let's talk about goals. I can think of four.
1. Produce thriving, intelligent, rational, happy, good-hearted children who become thriving, intelligent, rational, happy, good-hearted adults.
2. Have your children enjoy their childhoods
3. Enjoy raising your children.
4. Closely tied to 2 and 3 -- actually have a good relationship with your children. Like them and have them like you.
What We Know
To speak to goal 1 first, Bryan Caplan claims flat outcomes for goal #1 under commonly tried parenting interventions, which seems counter-intuitive. More explanation of what exactly the studies in question proved would be welcome.
As Luke helpfully taught us, negative reinforcement doesn't seem to work as well as positive. Spanking, in particular, is right out. This is in large part because reinforcement reinforces everything about what the subject's doing at the time it occurs. This means, in particular, that you're reinforcing both the target behavior and being caught at it. In the case of positive behavior/reinforcement, there's nothing particularly problematic about this, but for the negative case, you're also punishing being caught/noticed/seen, which can be problematic.
Nutrition in early childhood does seem to influence life outcomes, mostly on the low end: serious malnutrition depresses IQ -- try to avoid it.
Praise seems to be important, first of all because it is often a powerful positive reinforcer in children. Research has shown that the target of the praise is important. Praising a child for having worked hard to understand a concept seems to lead to more future efforts of the same kind than praising their intelligence.
Simply talk
|
6ee6c910-77e9-4120-9949-d9074d0239df
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Interview with Olle Häggström: Reason, COVID-19 and Academic Freedom in Sweden
Olle Häggström, author of Here Be Dragons and much else, and Professor of Mathematical Statistics at my alma mater, was good enough to answer a few questions of mine. Here's an excerpt:
> ERICH: Is there a Swedish Scott Alexander? A Julia Galef? Even an Eliezer Yudkowsky?
>
> OLLE: I must say you're good at guessing who some of my favorite intellectuals might be. But I should also say that those you mention are unique individuals, and it doesn't make much sense to try to name Swedish counterparts. If I did try to do that, I'd necessarily omit a lot of extremely bright Swedish thinkers and friends, and thereby unnecessarily insult them. And perhaps I'd even insult those I do mention. I recall once, when I was maybe 30 or 35 and very much up and coming in academia, and I had given a talk somewhere in Sweden, and this professor X from the older generation came up to me afterwards and wanted to express how much he liked the talk and to say something really nice to me. So he said, with reference to another leading Swedish math professor Y of his own generation, "You really are a worthy heir to Y!" And while I understood he meant it as praise, I couldn't help feeling partly insulted. I hope I didn't show it too much, but I felt like saying "I am not Y, or an heir to Y, I am Olle Häggström, with my own unique competences and agendas". And so I won't give you any names.
>
> [...]
>
> ERICH: Do politicians listen to scientists at all? Can you, as a professor, make your voice heard?
>
> OLLE: If you read the newspapers, you can get the impression that politicians are so involved in the struggle to win the next election that they do not care about addressing the full range of problems society is facing, and especially not about long-term issues. And while there's some truth to that, there's more going on behind the scenes than meets the eye. For instance, I was very happy last year to be asked by the Swedish Green Party (Miljöpartiet) for a report to help them figure out
|
f3bee138-f59c-4a56-b7c0-43bfe4d734a8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
When Are Circular Definitions A Problem?
Disclaimer: if you are using a definition in a nonmathematical piece of writing, you are probably making a mistake; you should just get rid of the definition and instead use a few examples. This applies double to people who think they are being "rigorous" by defining things but are not actually doing any math. Nonetheless, definitions are still useful and necessary when one is ready to do math, and some pre-formal conceptual work is often needed to figure out which mathematical definitions to use; thus the usefulness of this post.
Suppose I’m negotiating with a landlord about a pet, and in the process I ask the landlord what counts as a “big dog”. The landlord replies “Well, any dog that’s not small”. I ask what counts as a “small dog”. The landlord replies “Any dog that’s not big”.
Obviously this is “not a proper definition”, in some sense. If that actually happened in real life, presumably the landlord would say it somewhat tongue-in-cheek. But what exactly is wrong with defining big dogs as not small, and small dogs as not big?
One might be tempted to say “It’s a circular definition!”, with the understanding that circular definitions are always problematic in some way.
But then consider another example, this time mathematical:
* Define x as a real number equal to y-1: x = y-1
* Define y as a real number equal to x/2: y = x/2
These definitions are circular! I’ve defined x in terms of y, and y in terms of x. And yet, it’s totally fine; a little algebra shows that we’ve defined x = -2 and y = -1. We do this thing all the time when using math, and it works great in practice.
So clearly circular definitions are not inherently problematic. When are they problematic?
We could easily modify the math example to make a problematic definition:
* Define x as a real number equal to y-1: x=y-1
* Define y as a real number equal to x+1: y=x+1
What’s wrong with this definition? Well, the two equations - the two definitions - are redundant; they both tell us the same
|
9b3b2868-8663-4fa3-8e3e-150eda84c217
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The 0.2 OOMs/year target
TLDR: Humanity — which includes all nations, organisations, and individuals — should limit the growth rate of machine learning training runs from 2020 until 2050 to below 0.2 OOMs/year.
Paris Climate Accords
In the early 21st century, the climate movement converged around a "2°C target", shown in Article 2(1)(a) of the Paris Climate Accords:
"Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;"(source)
The 2°C target helps facilitate coordination between nations, organisations, and individuals.
* It provided a clear, measurable goal.
* It provided a sense of urgency and severity.
* It promoted a sense of shared responsibility.
* It establishes common knowledge of stakeholder goals.
* It helped to align efforts across different stakeholders.
* It signals a technical practical mindset for solving the problem.
* It created a shared understanding of what success would look like.
The 2°C target was the first step towards coordination, not the last step.
The AI governance community should converge around a similar target.
0.2 OOMs/year target
I propose a fixed target of 0.2 OOMs/year. "OOM" stands for "orders of magnitude" and corresponds to a ten-fold increase, so 0.2 OOMs/year corresponds to a 58% year-on-year growth. The 0.2 OOMs/year figure was recently suggested by Jaime Sevilla, which prompted me to write this article.
* I do not propose any specific policy for achieving the 0.2 OOMs/year target, because the purpose of the target is to unify stakeholders even if they support different policies.
* I do not propose any specific justification for the 0.2 OOMs/year target, because the purpose of the target is to unify stakeholders even if they have different justifications.
Here is the statement:
"Humanity — which includes
|
07c567cf-0a43-41f8-bf6f-7a0acfeb5cf9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Managing catastrophic misuse without robust AIs
Many people worry about catastrophic misuse of future AIs with highly dangerous capabilities. For instance, powerful AIs might substantially lower the bar to building bioweapons or allow for massively scaling up cybercrime.
How could an AI lab serving AIs to customers manage catastrophic misuse? One approach would be to ensure that when future powerful AIs are asked to perform tasks in these problematic domains, the AIs always refuse. However, it might be a difficult technical problem to ensure these AIs refuse: current LLMs are possible to jailbreak into doing arbitrary behavior, and the field of adversarial robustness, which studies these sorts of attacks, has made only slow progress in improving robustness over the past 10 years. If we can’t ensure that future powerful AIs are much more robust than current models[1], then malicious users might be able to jailbreak these models to allow for misuse. This is a serious concern, and it would be notably easier to prevent misuse if models were more robust to these attacks. However, I think there are plausible approaches to effectively mitigating catastrophic misuse which don't require high levels of robustness on the part of individual AI models.
(In this post, I'll use "jailbreak" to refer to any adversarial attack.)
In this post, I'll discuss addressing bioterrorism and cybercrime misuse as examples of how I imagine mitigating catastrophic misuse[2] for a model deployed on an API. I'll do this as a nearcast where I suppose that scaling up LLMs results in powerful AIs that would present misuse risk in the absence of countermeasures. I think The approaches I discuss won't require better adversarial robustness than exhibited by current LLMs like Claude 2 and GPT-4. I think that the easiest mitigations for bioterrorism and cybercrime are fairly different, because of the different roles that LLMs play in these two threat models.
The mitigations I'll describe are non-trivial, and it's unclear if they will happen by defa
|
f7a15264-32a0-4e5f-823f-9d01d1c5f319
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Longevity and the Mind
> A framing I quite like is that of germs vs soma, body vs eggs and cum, consciousness vs replicators.
My first foray into age reversal was a (successful) attempt to increase fluid IQ, the loss of which is among less than a handful of ubiquitous symptoms of aging.
At the time, I found it odd that most people working in biotech, medicine, and longevity were confused about why I did this.
In part, the issue here is one of people finding it hard to break out of existing paradigms, even as hypotheticals. The limits of my language mean the limits of my world. (I expand more on it in this talk)
In part, I believe the medical, biotech and even longevity community are misguided about what goals are worthwhile to achieve.
Reversing the aging process is one of the most lofty goals achievable within our century, and a necessary stepping stone for humanity. Very few people, if any, are interested in this issue; Most end up working on curing diseases or minimally prolonging life in old age. (More on this later)
I believe these two parts are linked and socially caused; We have a tendency to copy each other, and we have incentive chains pulling people towards patentable narrow-effect drugs and therapies. And when corrupt incentives pull people for long enough, we oft forget about the chain and think of it as "just the way the world is".
Let me stop with digressions: assuming you fall into neither of these pitfalls you might still be confused as to why reversing the mind's aging ought to be the most (only?) relevant problem in longevity.
I - Germs and Soma
When discussing eternal youth, a framing I quite like is that of germs vs soma, body vs eggs and cum, consciousness vs replicators.
Many things are desirable individual traits when it comes to natural selection giving rise to ape populations in and around the Congo. Such desirable features include proclivities to age, rape, die, murder, and go infertile.
Humans distinguish themselves among primates by being able to bre
|
17da7d9e-1126-4a9e-9ca1-b6546ae82213
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities
*This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the [announcement post](/lw/kw4/superintelligence_reading_group/). For the schedule of future topics, see [MIRI's reading guide](https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf).*
---
Welcome to the [*Superintelligence*](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111) reading group. This week we discuss the first section in the reading guide, ***Past developments and present capabilities***. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.
This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
**Reading**: *Foreword,* and *Growth modes* through *State of the art* from Chapter 1 (p1-18)
---
Summary
=======
Economic growth:
1. Economic growth has become radically faster over the course of human history. (p1-2)
2. This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2)
3. Thus history suggests large changes in the growth rate of the economy are plausible. (p2)
4. This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity.
5. Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans - slow as they are - sustaining such a rapidly growing economy. (p2-3)
6. Thus economic history suggests that rapid growth caused by AI is more plausible than you might otherwise think.
The history of AI:
1. Human-level AI has been predicted since the 1940s. (p3-4)
2. Early predictions were often optimistic about when human-level AI would come, but rarely considered whether it would pose a risk. (p4-5)
3. AI research has been through several cycles of relative popularity and unpopularity. (p5-11)
4. By around the 1990s, 'Good Old-Fashioned Artificial Intelligence' (GOFAI) techniques based on symbol manipulation gave way to new methods such as artificial neural networks and genetic algorithms. These are widely considered more promising, in part because they are less brittle and can learn from experience more usefully. Researchers have also lately developed a better understanding of the underlying mathematical relationships between various modern approaches. (p5-11)
5. AI is very good at playing board games. (12-13)
6. AI is used in many applications today (e.g. hearing aids, route-finders, recommender systems, medical decision support systems, machine translation, face recognition, scheduling, the financial market). (p14-16)
7. In general, tasks we thought were intellectually demanding (e.g. board games) have turned out to be easy to do with AI, while tasks which seem easy to us (e.g. identifying objects) have turned out to be hard. (p14)
8. An 'optimality notion' is the combination of a rule for learning, and a rule for making decisions. Bostrom describes one of these: a kind of ideal Bayesian agent. This is impossible to actually make, but provides a useful measure for judging imperfect agents against. (p10-11)
Notes on a few things
=====================
1. **What is 'superintelligence'?** (p22 spoiler)
In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as *'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'*. Vagueness in this definition will be cleared up later.
2. **What is 'AI'?**In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. [Russell and Norvig](http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597) (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
3. **What is 'human-level' AI?**We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear.
One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human *at any price*, or a machine that can carry out tasks as well as a human *at the price of a human*. These are quite different, especially in their immediate social implications.
Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.
Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.
We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.

Example of how the first 'human-level' AI may surpass humans in many ways.
Because of these ambiguities, AI researchers are sometimes hesitant to use the term. e.g. in [these](/r/discussion/lw/999/qa_with_experts_on_risks_from_ai_1/) [interviews](/r/discussion/lw/9a1/qa_with_experts_on_risks_from_ai_2/).
4. **Growth modes (p1)**
Robin Hanson wrote the [seminal paper](http://hanson.gmu.edu/longgrow.pdf) on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper). 
5. **What causes these transitions between growth modes?** (p1-2)
One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many [suggested causes of the industrial revolution](http://en.wikipedia.org/wiki/Industrial_Revolution#Causes), but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are quite a lot of negative datapoints - you need an explanation that *didn't happen* at all of the other times in history.
6. **Growth of growth**
It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to it's size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact for the thousand years until 1950 [such extrapolation](http://www.aiimpacts.org/historical-growth-trends) would place an infinite economy in the late 20th Century! The time since 1950 has been strange apparently.

(Figure from [here](http://www.aiimpacts.org/historical-growth-trends))
7. **Early AI programs mentioned in the book** (p5-6)
You can see them in action: [SHRDLU](https://www.youtube.com/watch?v=QAJz4YKUwqw), [Shakey](https://www.youtube.com/watch?v=qXdn6ynwpiI), [General Problem Solver](http://ai-su13.artifice.cc/gps.html) (not quite in action), [ELIZA](http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm).
8. **Later AI programs mentioned in the book** (p6)
[Algorithmically generated Beethoven](https://www.youtube.com/watch?v=CgG1HipAayU&list=PL8nIR9RW0CkYcjsYNbDWzBG_vv1yeDXq0&index=4), [algorithmic generation of patentable inventions](http://www.genetic-programming.com/inventionmachine.html), [artificial comedy](http://homepages.abdn.ac.uk/wpn006/software.php) (requires download).
9. **Modern AI algorithms mentioned** (p7-8, 14-15)
[Here](http://www.clarifai.com/) is a neural network doing image recognition. Here is [artificial evolution of jumping](https://www.youtube.com/watch?v=QRY7mEjbT8A) and of [toy cars](http://boxcar2d.com/). Here is a [face detection demo](http://rekognition.com/demo/face) that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
10. **What is maximum likelihood estimation?** (p9)
Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this scenario most probable is the one where you have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should think you won a car, if someone tells you that. Being the target of a spam email might only give you a low probability of being told that you have won a car (a spam email may instead advise you of products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time if you get such an email, you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has [several](http://demonstrations.wolfram.com/MaximumLikelihoodEstimation/) [demonstrations](http://demonstrations.wolfram.com/search.html?query=maximum%20likelihood) (requires free download).
11. **What are hill climbing algorithms like?** (p9)
The second large class of algorithms Bostrom mentions are hill climbing algorithms. The [idea](http://en.wikipedia.org/wiki/Hill_climbing) here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a [demonstration](http://demonstrations.wolfram.com/HillClimbingAlgorithm/) to play with (requires free download).
In-depth investigations
=======================
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:
1. How have investments into AI changed over time? [Here's](http://intelligence.org/2014/01/28/how-big-is-ai/) a start, estimating the size of the field.
2. What does progress in AI look like in more detail? What can we infer from it? I wrote about [algorithmic improvement](http://intelligence.org/files/AlgorithmicProgress.pdf) curves before. If you are interested in plausible next steps here, ask me.
3. What do economic models tell us about the consequences of human-level AI? [Here](http://hanson.gmu.edu/aigrow.pdf) [is](http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf) [some](http://mason.gmu.edu/~gjonesb/AIandGrowth) such thinking; Eliezer Yudkowsky [has written at length about his request for more](http://intelligence.org/files/IEM.pdf).
How to proceed
==============
This has been a collection of notes on the chapter. **The most important part of the reading group though is discussion**, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, **read** *Opinions about the future of machine intelligence* from Chapter 1 and also *[When Will AI Be Created?](http://intelligence.org/2013/05/15/when-will-ai-be-created/)*by Luke Muehlhauser*.* The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified [here](http://intelligence.us5.list-manage.com/subscribe?u=353906382677fa789a483ba9e&id=28cb982f40).
|
39ed77e2-020d-4836-b904-dcc76fa82c4f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Grading my 2024 AI predictions
On Jan 8 2024, I wrote a Google doc with my AI predictions for the next 6 years (and slightly edited the doc on Feb 24). I’ve now quickly sorted each prediction into Correct, Incorrect, and Unclear. The following post includes all of my predictions for 2024 with the original text mostly unedited and commentary in indented bullets.
Correct
* there is a viral app (probably Suno) for generating music, reaching 1 million users by July 2024
* Suno had 10 million users in May.
* An open source GPT-4-level model is released.
* Llama 3.1 probably fits the bill.
* Adept AI and similar publicly available browser-based assistants are still not useful enough to be used on browser windows without being supervised by a human for more than 30 seconds. They still have problems like clicking on the wrong part of the screen, getting lost, getting distracted, etc.
* I haven’t seen any agents that are actually able to navigate a browser competently yet.
* Sora is released to customers who apply for access.
* If OAI makes the video continuation feature available, many new memes are created where people use Sora to extend existing videos in funny ways or stitch two videos together.
* Example (although these don’t use Sora). I find it amusing how specific this prediction was. Possibly I’d already seen an example at that point?
* We will see the first signs of evals for moral patienthood in LLMs. Some of the AGI labs will make a public statement where they mention this possibility.
* The Anthropic Fellows Program is looking for people to work on “AI Welfare: Improving our understanding of potential AI welfare and developing related evaluations and mitigations.”
* 6/12 METR tasks are complete
* This suite is deprecated but my best guess is that it would resolve Correct.
* 1/5 ARA tasks are complete
* This suite (the five tasks described in Anthropic's original RSP) is deprecated but it’s likely true that Claude 3.5 Sonnet Upgraded would complete at least one t
|
71f7fbb5-81f5-426b-801f-e206d71c87c8
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
[3-hour podcast]: Joseph Carlsmith on longtermism, utopia, the computational power of the brain, meta-ethics, illusionism and meditation
On this episode of the Utilitarian Podcast, I talk with Joseph Carlsmith. Joseph is a research analyst at [Open Philanthropy](https://www.openphilanthropy.org/) and a doctoral student in philosophy at the University of Oxford. His views and opinions in this podcast are his own, and not necessarily those of [Open Philanthropy](https://www.openphilanthropy.org/).
Our conversation has three main themes. We talk about the long-term future, including the possibility of actually creating utopia. We talk about Joseph’s work on the computational power of the brain. And we talk about meta-ethics and consciousness, including discussions of illusionism and the effects of meditation.
The Utilitarian Podcast now has a dedicated website, at [utilitarianpodcast.com](http://utilitarianpodcast.com). At the site, you’ll find full transcripts of selected episodes, including this one. These transcripts have been generously funded by James Evans. I’ve also set up an email, which is [utilitarianpodcast@gmail.com](mailto:utilitarianpodcast@gmail.com) where you can send criticism, questions, suggestions and so on.
<https://www.utilitarianpodcast.com/creating-utopia-joseph-carlsmith/>
|
260d8861-1474-4e89-a69d-55174ac6ccb6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI #64: Feel the Mundane Utility
It’s happening. The race is on.
Google and OpenAI both premiered the early versions of their fully multimodal, eventually fully integrated AI agents. Soon your phone experience will get more and more tightly integrated with AI. You will talk to your phone, or your computer, and it will talk back, and it will do all the things. It will hear your tone of voice and understand your facial expressions. It will remember the contents of your inbox and all of your quirky preferences.
It will plausibly be a version of Her, from the hit movie ‘Are we sure about building this Her thing, seems questionable?’
OpenAI won this round of hype going away, because it premiered, and for some modalities released, the new GPT-4o. GPT-4o is tearing up the Arena, and in many ways is clearly giving the people what they want. If nothing else, it is half the price of GPT-4-Turbo, and it is lightning fast including fast web searches, which together have me (at least for now) switching back to ChatGPT as my default, after giving Gemini Advanced (or Pro 1.5) and Claude Opus their times in the sun, although Gemini still has the long context use case locked up.
I will be covering all that in another post, which will be out soon once I finish getting it properly organized.
This post covers some of the other things that happened this past week.
Due to the need to triage for now and ensure everything gets its proper attention, it does drop a number of important developments.
I did write the post about OpenAI’s model spec. I am holding it somewhat for final editing and to update it for GPT-4o, but mostly to give it space so anyone, especially at OpenAI, will have the time to read it.
Jan Lieke and Ilya Sutskever have left OpenAI, with Jan Lieke saying only ‘I resigned.’ That is a terrible sign, and part of a highly worrisome pattern. I will be writing a post about that for next week.
Chuck Schumer’s group issued its report on AI. That requires close attention.
Dwarkesh Patel has a new podcas
|
9aed89ea-f339-494d-9937-a8c2853e6160
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Summary thread for Coursera classes
Maybe it would be worth to have a single summary thread for Coursera (and also other source like Udacity etc.) material. At some future point when the courses are on-line and enough people seen them we could work out a "LW curiculum". Here is my subjective list of particularly intersting courses for LW audience:
A Beginner's Guide to Irrational Behavior
Artificial Intelligence Planning
Automata
Basic Behavioral Neurology
Computer Science 101
Clinical Problem Solving
Critical Thinking in Global Challenges
Data Analysis
Fantasy and Science Fiction: The Human Mind, Our Modern World
Game Theory
Human-Computer Interaction
Introduction to Genetics and Evolution
Introduction to Genome Science
Introduction to Mathematical Thinking
Machine Learning
Microeconomics Principles
Model Thinking
Nanotechnology: The Basics
Networked Life
Networks: Friends, Money, and Bytes
Neural Networks for Machine Learning
Neuroethics
Principles of Economics for Scientists
Probabilistic Graphical Models
Quantum Mechanics and Quantum Computation
Rationing and Allocating Scarce Medical Resources
Statistics One
Think Again: How to Reason and Argue
Please note I haven't picked any programming/algorithm courses - there seem to be quite a lot of nice ones. Subscribe here. Plain text list (111 courses):
A Beginner's Guide to Irrational Behavior
A History of the World since 1300
Aboriginal Worldviews and Education
Algorithms, Part I
Algorithms, Part II
Algorithms: Design and Analysis, Part 1
Algorithms: Design and Analysis, Part 2
An Introduction to Interactive Programming in Python
An Introduction to Operations Management
An Introduction to the U.S. Food System: Perspectives from Public Health
Analytic Combinatorics, Part I
Analytic Combinatorics, Part II
Analytical Chemistry
Artificial Intelligence Planning
Astrobiology and the Search for Extraterrestrial Life
Automata
Basic Behavioral Neurology
Bioelectricity: A Quantitative Approach
Calculus: Single Variable
Cardiac Arrest, Hypothermia, and Resus
|
f22b2bf7-9392-44d7-bb41-b5313aa5f935
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Irrationality in Argument"
Here's a poetic blog post by Julian Assange (source). I found the first paragraph relevant:
> 27 Aug 2007 - Irrationality in Argument
>
> The truth is not found on the page, but is a wayward sprite that bursts forth from the readers mind for reasons of its own. I once thought that the Truth was a set comprised of all the things that were true, and the big truth could be obtained by taking all its component propositions and evaluating them until nothing remained. I would approach my rhetorical battles as a logical reductionist, tearing down, atomizing, proving, disproving, discarding falsehoods and reassembling truths until the Truth was pure, golden and unarguable. But then, when truth matters most, when truth is the agent of freedom, I stood before Justice and with truth, lost freedom. Here was something fantastical, unbelievable and impossible, you could prove that (A => B) and (B => C) and (C => D) and (D => F) Justice would nod its head and agree, but then, when you turned to claim your coup de grace, A => F irrevocably, Justice would demur and revoke the axiom of transitivity, for Justice will not be told when F stands for freedom. Transitivity is evoked when Justice imagines F and finding the dream a pleasurable one sets about gathering cushions to prop up their slumber. Here then is the truth about the Truth; the Truth is not bridge, sturdy to every step, a marvel of bound planks and supports from the known into the unknown, but a surging sea of smashed wood, flotsam and drowning sailors. So first, always pick your poetic metaphor, to make the reader want to believe, then the facts, and -- miracle! -- transitivity will descend from heaven, invoked as justification for prejudice.
>
> Often we suffer to read, "But if we believe X then we'll have to...", or "If we believe X it will lead to...". This has no reflection on the veracity of X and so we see that outcomes are treated with more reverence than the Truth. It stings us, but natural selection has spun it
|
821a9cbf-3011-4720-b3a4-5971b387e0a3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
In Defense of the Obvious
[Cross-posed from blog]
My brain does this thing where it shuts off when I experience some warning signs. A lot of these have to do with my identity or personal beliefs, which go off when I believe my tribe is being attacked. I don’t think I’ll go as far as to say that all brain shutoffs are bad (which feels like a Cleaving Statement), but there’s another type of warning sign I’ve recently noticed: Dismissing The Obvious.
Just because a statement is tautological or obvious does not mean it is useless.
Here are some examples:
“If you want to get all of your tasks done everyday, be sure to make a to-do list and a schedule! That way, you can keep track of what you’ve done/need to do!”
My brain’s response: <doesn’t even quite register the points> “Whatever, this doesn’t sound interesting.” <pattern-matches it as “boring advice stuff" that "isn't groundbreaking”>.
In actuality: The advice still stands, even if it’s self-evident and obvious. People who make to-do lists have a better idea of what they need to get done. It’s still useful to know, if you care about getting stuff done!
“If you want to exercise more, you should probably exercise more. Then, you’d become the type of person who exercises more, and then you’d exercise more.”
OR
“If you have more energy, then you’re more energetic, which means you have more energy to do things.”
My brain’s response: “Those conclusions follow each other, by definition! There’s nothing here that I don’t know!” <scoffs>
In actuality: Just because two things are logically equivalent doesn’t mean there’s nothing to learn. In my head, the nodes for “energetic” and “energy = increased doing-stuff capacity” are not the same nodes. Consequently, bringing the two together can still link previously unconnected ideas, or allow you to see the connection, which is still beneficial!
|
8c747db3-1f58-449e-ba98-b2830c0618da
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
A Decentralized Approach towards Responsible AI in Social Ecosystems
A Decentralized Approach towards Responsible AI in Social Ecosystems Wenjing Chu Futurewei Technologies, Inc. wchu@futurewei.com Abstract For AI technology to fulfill its full promises, we must have effective means to ensure Responsible AI behavior and cur-tail potential irresponsible use, e.g., in areas of privacy pro-tection, human autonomy, robustness, and prevention of bi-ases and discrimination in automated decision making. Re-cent literature in the field has identified serious shortcomings of narrow technology focused and formalism-oriented re-search and has proposed an interdisciplinary approach that brings the social context into the scope of study. In this paper, we take a sociotechnical approach to propose a more expansive framework of thinking about the Responsi-ble AI challenges in both technical and social context. Effec-tive solutions need to bridge the gap between a technical sys-tem with the social system that it will be deployed to. To this end, we propose computational human agency and regulation as main mechanisms of intervention and propose a decentral-ized computational infrastructure, or a set of public utilities, as the computational means to bridge this gap. A decentral-ized infrastructure is uniquely suited for meeting this chal-lenge and enable technical solutions and social institutions in a mutually reinforcing dynamic to achieve Responsible AI goals. Our approach is novel in its sociotechnical co-design and its aim in tackling the structural issues that cannot be solved within the narrow confines of AI technical research. We then explore possible features of the proposed infrastruc-ture and discuss how they may help solve example problems recently studied in the field. Introduction The rise of new AI technologies promises a new era of ad-vanced digital services that would have been impossible or impractical before. AI powered services, not only in a con-sumer setting (e.g., web and social media) but also in indus-tries, public social services, and policy domains (e.g., auton-omous vehicles/robotics, healthcare, housing and mortgage lending, employment, and criminal justice systems) could form the basis of our future economy and social fabric. Be-cause of its potential impact, the AI technology’s potential To appear in the 16th International Conference of Web and Social Media (ICWSM’22), June 6-9, 2022, Atlanta GA, USA. harms, ranging from privacy violations and social media in-fluence operations to facial recognition in surveillance and opaque automated systems with biases, have been recog-nized by academia, industries, and society at large (Barocas & Selbst, 2016). Prominent research efforts studied formal and algorithmic methods of desired Responsible AI qualities including privacy (Dwork, 2008; Kairouz & McMahan, 2021) and fairness (Narayanan, 2018; Verma & Rubin, 2018; Dwork, et al., 2011; Corbett-Davies, et al., 2017). Al-gorithmic focused efforts alone, however, are not sufficient. If not properly deployed in a social context, technocentric solutions can suffer from common traps and fail in achiev-ing the intended goals (Chouldechova, 2016). These short-comings include the failure to include the crucial steps of data collection, dataset curation, and model characterization (Gebru, et al., 2018; Mitchell, et al., 2019), and more im-portantly, the failure to take social context into account (Chouldechova, 2016; Barabas, et al., 2020; Selbst, et al., 2019; Andrus, et al., 2021). At the same time, policy makers in various jurisdictions have recognized these risks and in-troduced regulations to remedy potential harms. These reg-ulations (EU, 2018; California, 2018) will have a significant impact in AI development (EPRS, 2020) but their effective-ness is not yet evident (Machuletz & Bohme, 2020; Nouwens, et al., 2020). To meet this challenge, we adopt a sociotechnical systems approach (Ropohl, 1999; Davis, et al., 2014) to re-frame Re-sponsible AI problems in a new context that encompass not only the full lifecycle of AI use but also the actors and struc-tures of a social ecosystem. This new framework gives us a robust way to discuss not just technology and people as pas-sive users but to discuss roles and processes involving users, providers, regulators, and institutions. Based on this expan-sive intellectual framework, we propose a sociotechnical model for AI systems, and further propose and design a computational infrastructure, as a decentralized common
utility, upon which various sociotechnically effective mech-anisms can be implemented to achieve Responsible AI goals. We explore two such intervention mechanisms, agency and regulation, and discuss how the computational public infrastructure can facilitate these intervention methods and leverage social tools and dynamics to achieve Responsible AI goals. As initial steps to explore this approach, we con-struct a sketch of such a decentralized system building on recent advances in decentralized systems and cryptography, incrementally define a powerful set of features to solve com-mon problems studied in the recent literature and share our thoughts and learnings about the new approach. We con-clude that a decentralized approach holds great promise in advancing the balanced technical and social goals of AI with computational dynamics and policy flexibility and call for further research in this direction. A Sociotechnical Framework The structural problems faced by Responsible AI do not have simple answers in the original AI technology domain (Narayanan, 2018). A sociotechnical approach (Ropohl, 1999; Davis, et al., 2014) instead seeks to optimize joint goals both on the task (functional) level and on the social level (people and their social structure). The introduction of technology into this social context creates a complex dy-namic that must be understood in the combined technical and social system. This interconnectedness between a tech-nical system and a social system is often illustrated as a di-amond (Figure 1) in the classic literature (Leavitt, 1972; Bostrom & Heinen, 1977).
The notion of sociotechnical systems originated from la-bor studies in the English coal mines after World War II (Emery & Trist, 1960). While later developments often fo-cused on organizational studies as technology was intro-duced to the workplace, we believe they are still well suited as a robust framework to study the introduction of AI tech-nology in the society. The goals of Responsible AI can therefore be understood as studying how AI technology will reshape people and their social structure, how people and their social structure will respond to AI and reshape its de-velopment and propose integrated solutions that span both the technical system and the social system for optimal out-come. Based on this general outlook, we now discuss the follow-ing social science concepts in order to consider the require-ments of a computational tool that support these social con-cepts. • Human Agency In a commercial setting, users are the subject of data collec-tion and/or the receiver of an AI enabled service. Human agency is the empowerment of users in making self-inter-ested decisions in a sociotechnical system. In a public policy setting, people whose data is being collected and whose lives are impacted by the AI system need to have inputs to the construction of the system and its inner workings and recourse to its automated decisions. While these two settings can have significant differences in practice, for our discus-sion in the high level, we will group them together. • Regulations We define regulations in a general sense as rules or norms constraining the operation of the technical system as well as those rules or norms that apply to the structure of people or-ganizations. They can be algorithmic, administrative, legal, or cultural. • Institutions Institutions, in an abstract sense, are organizations, forums, or other digital mechanisms of people who collectively for-mulate regulations which make choices and compromises that prioritize certain goals over others in circumstances and make changes over time. In sociology, institutionalization is the generalization of “value and behavior patterns,” and therefore, AI technology can be seen as an example of “technical institutionalization” (Ropohl, 1999). Our central contention is that to achieve Re-sponsible AI goals, we must design AI’s technical system to foster effective institutions that formulate optimal regula-tions balancing task level and social level goals. A requisite condition for such effective institutions is the agency of peo-ple who are both contributors to and recipients of its impact (positive or negative) from the AI technology. A Sociotechnical Model for AI Using the sociotechnical framework, we propose a simple model of common machine learning based AI systems to capture essential social actors, AI system artefacts and their relationships.
Figure 1: Sociotechnical Systems
In Figure 2, the individual person, or user, is represented in the joint roles of data source and the recipient of some utility. The user makes a joint decision of contributing source data and in return receives some form of benefits, i.e., a computational utility. In current common practices, this decision process is often opaque and its ethics ambiguous as an individual user lacks practical choice and standing in ne-gotiating the conditions of this exchange (Bohme & Kopsell, 2010; Machuletz & Bohme, 2020). The AI system itself is modeled with a learning compo-nent and an inferencing component that interact with the user. The learning element utilizes source data from many users to algorithmically produce a trained or learned model. This model is the form where knowledge learned from the source data is codified and distributed. Typically, this model is utilized in the overall software system by combining a tra-ditional source code program and the learned model. The combined software program is then deployed to an AI appli-cation. Similarly, learned models can also be used in a new revision of the learning algorithm itself, e.g., in a reinforce-ment learning setting or other forms of iterative or meta-learning algorithms. We then propose two basic types of interventions to reg-ulate the dynamics of the AI system with the objective to move the system towards more responsible behavior. Computational Agency: Empowerment The first mechanism is to regulate the exchange between a user and an AI system. In commercial practices, this rela-tionship is often in the forms of a Terms of Services (ToS) or End User License Agreement (EULA) for which users lack practical choices or even standing in negotiating the terms (Bohme & Kopsell, 2010; Machuletz & Bohme, 2020; Kim, 2013; Rakova & Kahn, 2020). In public service set-tings, the system’s development is opaque, and its use is im-posed upon by policy decisions where individuals of disad-vantaged groups often have little input to its formulation or recourse to address its problems (Barocas & Selbst, 2016; Peacock, 2014; Chouldechova, 2016). We call the mechanism Computational Agency because it is an empowerment mechanism in favor of the end users to own and exercise practical and effective control of their source data, and to exercise choice in the service agreement. Therefore, we symbolically put the human figure on the front side of the combined data source and utility box in Fig-ure 2. To make this empowerment effective, we contend that the end user must have a recognized identity to exercise such rights in the digital domain and the AI system must offer convenient enough user interface for people to exercise their rights. In social sciences, the close relationship between identity and agency is well studied (Holland, et al., 1998). In the computing domain, the Self-Sovereign Identity com-munity (Allen & Applecline, 2017; Preukschat & Reed, 2020; Muhle, et al., 2018) offers strong arguments for uni-versal digital identities for digital services. In the regulatory domain, the EU initiative known as eIDAS (EU, 2022) is an example of efforts now ongoing in many regions and nation states to support digital identity for their citizens. Computational Regulation: Rules and Norms The second mechanism is to enforce restrictions on the be-havior of the AI system. The term “regulation” is used in abstract sense here. These regulations can be imposed as public policies, ethical norms, or cultures in communities. Such regulations can put constraints on the characteristics of the datasets, a trained model’s behavior regarding permissi-ble biases, or data transparency and auditability require-ments. Recent studies have proposed many accountability and auditability mechanisms and demonstrated their effec-tiveness (Gebru, et al., 2018; Mitchell, et al., 2019; Brund-age, et al., 2020). Similarly, another aspect is to apply con-straints on software code and behavior by verification mech-anisms in common software distribution registries. Re-course (Ustun, et al., 2019; Joshi, et al., 2019) is another ex-ample where regulation can enforce its use in an AI-aided decision process. In Figure 2, these regulations can be enforced most effi-ciently along the lines where components interact. A Decentralized Infrastructure We now turn to the need to implement a practical common infrastructure and why it should be a decentralized system. While a full-scale discussion of relevant concepts and meth-ods is out of scope for this paper, we outline a brief reason-ing for our approach and offer some rationale for the pro-posed methods. The first principle to consider is that of autonomy or agency from a humanistic standpoint. The challenge is how such agency can be best materialized in an AI system (in
Figure 2:A Sociotechnical Model for AI
fact, any social system, digital or otherwise). The central ar-gument is that any individual’s ability to exercise meaning-ful equal rights in a system must start with exercising control over their own identity. For example, for a person subject to discriminatory treatment by a system to exercise the right of recourse (e.g., filing a complaint), they would first need an account, or identity, in another system independent from the very system the complaint is about. Similarly, if an online user wishes to negotiate the Terms of Service (ToS) with a provider, the basis of that negotiation must be another neu-tral system not subject to the very terms they are negotiating in a Catch-22. Decentralized financial systems such as Bitcoin (Naka-moto, 2008) and Ethereum blockchain (Ethereum, 2021) make similar arguments for decentralization. However, there are significant differences. A decentralized identity is designed to exercise rights in digital systems; therefore, it is primarily concerned with neutrality among the parties, in addition to authenticity and integrity. Such neutrality can be realized by a blockchain with the appropriate trust or governance framework, or by other forms of decentralized systems, or by systems operated by familiar social institutions that have earned such trust such as various democratic, legal, and civil institutions. In all these cases, it is a combination of a technical system and a form of governance that give it the right properties. There may be many instances of such systems that are interopera-ble through standardization. This is another dimension of being decentralized that ensure universality. Such identity supporting systems must be a public infra-structure in a sense that there should be no barrier, technical or social, for individuals to create, manage, port, and remove identity and identity specific information. Neutrality re-quires decentralization. Another challenge to exercise rights in a technical system is to construct an algorithmic base to efficiently establish trust, reach agreements, and verify results without a central-ized authority. Many recent advances have been made in this regard for decentralized systems. Verifiable credentials (W3C-VC, 2020) allow claims or information to be asserted by the authoritative sources and be efficiently verified with-out an intermediary that can collect or correlate private data. Smart and Ricardian contracts (Szabo, 1994; Grigg, 2015) allow agreements to be executed as code and make its rec-ords auditable. The right to privacy is more critical in AI systems and therefore the infrastructure we propose to exercise control over AI systems must have strong privacy support. We may consider privacy in terms of controlling collection, disclo-sure, storage and correlation or other inference methods in general. New signature schemes such as CL Signature and BBS+ offer efficient means to selective disclosure, Zero Knowledge Proof (ZKP) and correlation prevention (Came-nisch & Lysyanskaya, 2001; Boneh, Boyen, & Shacham, 2004; Camenisch, Drijvers, & Lehmann, 2016). More ad-vances are being made in the technologies of Homomorphic Encryption, Secure Multi-Party Computing and Secure En-clave (Cammarota, et al., 2020) that make secure and confi-dential computing more practical. In addition, the public infrastructure also requires scale and robustness of a decentralized system similar to the foun-dation of the Internet. These and other computational mech-anisms are crucial because inefficient implementations would be disadvantaged and result in the familiar ineffective rules regardless of what the text or intent of the regulation is (Utz, et al., 2019; Nouwens, et al. 2020; Machuletz & Bohme, 2020). We emphasize this point by stating that it takes a program to regulate a program. In summary, the rationale for a decentralized system is multifold. • It is uniquely suited to address governance issues which are at the core of sociotechnical challenges. • It offers a practical solution to support human agency. • It can support a set of required features for solving Re-sponsible AI problems. • It has the scalability and reliability needed as a common utility. Exploring Features and Applications for Responsible AI To explore the strength of the proposed approach, we outline a decentralized computational infrastructure to realize the objectives we set out, namely enabling meaningful agency and implementing regulations. And we discuss how the sys-tem’s features can be used to solve real-world problems that have been studied in the field. Because of the interdiscipli-nary nature of this work, we decide to use informal examples to discuss the system’s features. While we have used less rigorous definitions of some concepts in favor of simplicity, references are provided for further research. It results in a sketch of preliminary ideas. Many aspects of the relevant technologies are also rapidly evolving and will require vali-dation, experimentation, and revision. Nevertheless, we feel the broad strength of our approach remains valid and the es-sential design ideas are useful for further research in this field or practical system designs. For the remainder of this section, we first introduce de-centralized identifiers and verifiable credentials based on decentralized systems. We then sketch methods that can es-tablish human-centric identities, can enable proof and veri-fication with privacy, can reach binding agreements, and can enforce agreements. We speculate on market-based eco-nomic incentives and discuss various forms of governance that may be familiar in the physical world. This familiarity is an important characteristic because it helps to create and
integrate reliable and durable trust models and social insti-tutions into the AI systems. Decentralized Identifier Our first objective is to create an identity for exercising agency. Decentralized identifiers (Figure 3) are a new type of globally unique identifier but avoids central administra-tion or tracking through a decentralized system, e.g., a blockchain. While there are numerous variations, the DID Working Group in W3C (W3C-DID, 2020) is working to-wards standardization. W3C defines DIDs to be URIs con-formant to IETF RFC 3986 (Berners-Lee, Fielding, & Masinter, 2005). In addition to being globally unique, DIDs are universally resolvable to a document which can provide basis for other properties such as access, authentication, re-lationship and so on. Persons or organizations may exercise control through cryptographic signature algorithms, while digital assets may use passive DIDs with an active DID as its controller.
DIDs are called decentralized because the IDs can be gener-ated and controlled (proving they have WRITE control) without relying on a centralized entity or a so-called trusted authority. Each individual can have as many DIDs as they need to reflect all the personas that they adopt in specific use cases. Through the various types of DIDs one individual may own, these systems can protect against correlation-based privacy attacks. This is a key differentiation for DIDs in contrast to other universal IDs. Decentralized identifiers are not identities yet, but rather a root digital key that one can use to establish whatever iden-tity or identities are needed to function and exercise rights in a digital domain. We will describe the establishment of identities later in the section. In practical implementations, DIDs are often realized by hashing algorithms anchored in decentralized blockchains as a trust registry (Hyperledger Indy, 2020). However, other cryptographic mechanisms can also be used, e.g., KERI (Smith, 2020). Organizations can also implement DIDs through more conventional data structures combined with proper social governance mechanisms as long as they meet the trust requirements in the social context they are designed for. Such trust governance mechanisms can be achieved through social institutions. Different institutions may offer different forms of DIDs for various purposes and standards can help make them interoperable (W3C-DID, 2020). Verifiable Credentials Trust is another essential ingredient for agency. Trusted in-formation is the foundation to command & control, account-ability, auditing, or reaching any basic agreement. In addition to DIDs, decentralized systems enable issu-ance and verification of Verifiable Credentials (VC) (W3C-VC, 2020) and facilitate a global exchange of trustworthy information.
It can be best illustrated with an example. In Figure 4, we have three parties with their respective DIDs: a college with DID “abcd”, a graduate of this college and a job applicant, with DID “1234”, and finally a hiring company with DID “wxyz”. Let us suppose the Company (DID “wxyz”) may use an AI powered system to help screen candidates. To complete a digital job application, the applicant re-quests a digital diploma from the College which issues a Verifiable Credential based on its private but authoritative educational records. Once received and securely stored in a digital wallet, the credential can be used to present a proof to the hiring Company. This proof is cryptographically as-sured based on message exchanges between the applicant and the Company without involving the credential’s issuer (the College). This exchange confers the trust that the Com-pany has with the College to the applicant even though they do not have a prior trust relationship with each other. This transitive trust relationship is fundamental in the efficient functioning of the proposed decentralized infrastructure. The resulting system is fundamentally different from a centralized database. In our example, the applicant is the data owner and holder, who stores various credentials from many issuers to the digital wallet in their possession. Dis-closure of data to the Company is fully controlled by the ap-plicant. There is no centralized data collection about this ex-change. The hiring Company only receives information rel-evant to the job application. With new signature algorithms, e.g., CL (Camenisch & Lysyanskaya, 2001) and BBS+ (Boneh, Boyen, & Shacham,
Figure 3: Decentralized Identifier Figure 4: Verifiable Credentials
2004; Camenisch, Drijvers, & Lehmann, 2016) and appro-priate proof protocols, the VCs can also support selective disclosure and Zero Knowledge Proof (ZKP) to further min-imize data disclosure or correlation. Establishing Human-Centric Identities A decentralized identifier is not an identity. This should be obvious (one’s identity cannot be a random string of num-bers and letters) but this distinction is often lost. With veri-fiable credentials, individuals have a mechanism to create digital identities that they choose to create to facilitate digi-tal services and commerce. An identity consists of a set of proofs (potentially ZKPs) that are constructed from the re-ceived credentials. We emphasize that the subject and the controller of these identities are the individual (or their del-egated representatives).
Let us continue the job applicant’s example (Figure 5). In addition to the college diploma, they may request and re-ceive a digital ID from a government office, e.g., a driver license in the U.S. which asserts their name, address, birth-day, a facial photo, and some physical characteristics for identification. They may also request a letter from their pre-vious employer for employment history and recommenda-tions from their previous coworkers and managers, includ-ing social media recommendations such as those found in LinkedIn. These would be unsurprising credentials for a job applicant identity. They may choose vastly different identities in different social contexts however, including being anonymous. In some digital service contexts, they may choose to construct an identity without personally identifiable information (PII) but DIDs and VCs can still assure authenticity, i.e., asserting that here is a legal person permitted to obtain this service and sign agreements or conduct transactions. In other con-texts, such as the job application example above, or for ob-taining government services or banking services, personal identification may be required by law or by convention. Each person can construct as many such identities as they need to obtain digital services. Proof and Verification With autonomous DIDs, and VCs issued to individuals or organizations, proof and verification can be automated and standardized. This is a key objective: to enable the easy ex-change of verifiable information. This will in turn enable agreements and other forms of control. As illustrated in Figure 6, our job applicant can present a proof using the credentials they hold in the digital wallet about their qualifications but withhold sensitive information or protect such information through ZKP to prevent biases in the applicant filtering AI system. In a separate context, they order a drink from a Bar (DID “mnop”) with a proof that they are over the legal age without disclosing other PII in their digital driver’s license such as birthday and address. Note that the verifiers, the Company, or the Bar, do not contact the original credential issuers for verification (Fig-ure 6) reducing the risks of “phoning home”.
Negotiating Agreements So far, we have outlined a set of important features provided by the decentralized computational infrastructure including autonomous DIDs and VCs and support the scalable ex-change of trustworthy information, i.e., a trust layer. Now, we can discuss how parties reach agreements using this in-frastructure. This will allow us to show a solution to the first problem in AI-powered systems: negotiating Term of Ser-vice. Service exchange can be construed as a part of an agree-ment between parties. The previous examples, however, as-sume a pre-agreed protocol. This protocol can be fully digi-tal, and standardized by law, standard bodies, or industry or community forums, and codified in software. In this section, we explore a dynamic protocol by which parties negotiate an agreement. The basic protocol is shown in Figure 7. In this example, the service provider has been changed to an email service provider (DID “qrst”). We propose the negotiation proceed
Figure 5: Human-Centric Identities
Figure 6: Privacy Preserving Proof and Verification
in three phases, (1) mutual identification, (2) negotiation of terms, and (3) signing.
Mutual identification is straight-forward with DID and VC enabled identities. The negotiation phase consists of proposals and counterproposals between the parties to find an optimal structure. The clauses can be supported by ma-chine readable terms (IEEE P7012, 2020) and programma-ble for well-known services. We propose that this structure be based on a Ricardian smart contract (Szabo, 1994; Grigg, 2000) that executes itself and is human readable and binding (Grigg, 2015; Rothrie, 2018). Legally binding (or other forms of binding) agreements require that the digital identity and signing infrastructure are legally recognized. In recent years, some jurisdictions and institutions have been moving towards such a digital ID sys-tem (GLEIF, 2020). We argue that a decentralized identity service for all is the right approach that avoids over-central-ization of power and protects individual autonomy. How will this work for ToS negotiations? In many con-sumer settings, simple yet powerful methods can be (1) choice, where a user chooses one of multiple alternatives, and (2) option, where either side can propose optional add-on clauses. This will have enormous impact in the current practice of pervasive wrap contracts (Kim, 2013) that will only get more intractable with AI. With identity portability properties of DIDs, the choice and option instruments will encourage standardization, foster competitions, and be a powerful force in rebalancing a collaborative relationship between a user and an AI system to protect privacy, share service-enhancing data, and reach more optimal outcomes. There are also more sophisticated ways of digitally nego-tiating Terms of Service. With structured machine-readable contracts and smart contracts, the agreements can be more nuanced taking more personal choices, and markets can be formed where data can be traded for value. In a policy set-ting to ensure fairness, the immutable records these smart contracts generate can aid transparency, auditing, and re-course, offline or online. Research in human-in-the-loop in AI is another important area of future studies. Auditing Auditability provides transparency so that actors in a soci-otechnical system have information they need for protecting their interests and institute a reward (credit) and punishment (enforcement) mechanism. Many in research (Brundage, et al., 2020) or in policy perspective (Ada Loveless Institute, 2020) suggest auditing as a tool for Responsible AI. In so-cial sphere, auditing supports transparency and accountabil-ity which are important in their own right for the legitimacy of a system.
The basic pattern is shown in Figure 8. The governance authority of a particular regulation conducts an audit in ac-cordance with the said regulation. If a service provider passes the audit, a verifiable credential to that fact is issued by the authority. Then, in the service exchange setting, the service provider could offer proof of such compliance as an incentive to the customer, or the customer may request such a proof as a negotiating condition. A verifiable log (Eijdenberg, Laurie, & Cutter, 2015) or various immutable and irrefutable data structures can also be readily implemented in scale using the same decentral-ized infrastructure that supports DID and VC and offer strong auditability (Brundage, et al., 2020). All Verifiable Credentials are verifiable data that can be presented with strong proofs. With any form of auditing, the verifiability of a data allows much stronger trust in the audit that does not require complex third-party arrangements (e.g., an auditor or administrative or judicial inquiry). It reduces time and cost for disagreement resolution. Transparency can be enhanced with the decentralized in-frastructure’s assist. Disclosure of internal data for auditing is often hampered by the need to protect proprietary infor-mation as well as privacy for those involved. With strong anonymization features built-in and efficient secure multi-party computing (S-MPC), a consumer or justice advocacy group, e.g., can conduct rigorous verifiable audits without directly accessing the underlying data. Other types of audit-ing methods such as sock puppet (Asplund, et al., 2020) can be vastly scaled with authorized sock puppet DIDs.
Figure 7: Negotiating Agreements
Figure 8: Auditing
Portability Autonomous identities can be used as an address for com-munication services such as email, messaging, or social me-dia. Emails would be addressed to “John Doe” (DID 1234) rather than john.doe1234@qrst.com (Figure 9). With indi-vidually owned portable addresses, service providers will be less likely to become self-interested monopolies that users cannot practically leave (Gans, 2018; Windley, 2005).
With portability, they can more practically exercise free market choices that therefore put pressure on the provider to practice Responsible AI out of its own best interest. In such a system, users maintain full control of their own email ad-dresses which are completely separate from the services of-fered by the mail providers. Customers can readily “vote with their feet” if they are dissatisfied with aspects of the service including the handling of private information and the availability of AI powered capabilities. The basic trade of a user’s data for enhanced services can still function, but the balance of power now favors a fairer trade. Pooling data from a large number of users will improve AI performance and gain competitive advantage, therefore the economy of scale and competitive incentive continue to function in such a marketplace. Combining portability with negotiation of Terms of Ser-vice, we may have a system that is fairer, more competitive, and advantageous to the long-term development of AI tech-nology. We argue that if we are to meet the Responsible AI challenge, we must not leave these important factors out of our research agenda. Institutions Finally, we discuss the governance structure of the decen-tralized system we described in this paper and highlight how it can help the establishment of future digital institutions. Institutions, in a sociotechnical sense, are social pacts or norms that people form to integrate technology into a social environment. They are therefore crucial in shaping the tra-jectory of AI development. Agency allows people to form such institutions to advance their common interests. The computational infrastructure we propose merely helps to make these institutions efficient and scalable in an AI eco-system.
Let us discuss a few examples (Figure 10) of such social governance pacts in order for the proposed technical system to work as intended towards Responsible AI. The first of these is the rules governing the issuance of credentials. The college in our original job applicant exam-ple derives its authority to issue diplomas from its legal charter, accreditation, and continuous responsible exercise of such authority. Similar reasoning also applies to individ-ual recommendation letters. A government office may de-rive such authority through political means. We believe that social institutions such as these will continue to play their roles in digital services, but more importantly, new types or modified forms of institutions will emerge to meet the new demands that are specific to AI-powered services. The second example is the regulation of the proper func-tioning of markets. The recent rapid developments in digital currency, assets, and markets opened a new way to study market dynamics in a digital system. The third example is in the regulation of technology busi-nesses, e.g., anti-trust (OECD, 2017; Ezrachi & Stucke, 2016). In these areas, regulators may impose a structure and enforce rules between the boundaries. As shown in Figure 10, we may regulate • data collection, • learned model’s biases, and • the supply chain of source code and other components. In each of these examples, a decentralized approach strengthens the AI system’s accountability and incentivizes Responsible AI with flexibility for policy choices. Conclusions and Related Work In this paper, we explore a strong sociotechnical approach to tackle the central challenges we face in Responsible AI.
Figure 9: Portability Figure 10: Institutions
Our novel approach differs from algorithm-centric research seeking optimal performance of an abstracted task and also differs from social-aware research where algorithms are en-hanced to meet formally defined privacy or fairness con-straints while optimizing the task’s performance. Instead, our contribution is to propose a strong sociotechnical co-de-sign approach that puts AI technology and the social actors who develop and use the technology in a unified framework and seek a system dynamic that can produce the desired out-come. With that framing, we outlined a sociotechnical model to describe common AI systems. This model is unique that it brings in social actors such as users, providers, the public, and regulators into the scope of study and captures artefacts such as datasets, trained models, and software of the AI sys-tem. Guided by social science concepts, we identified two intervention mechanisms: agency and regulation, and incor-porated them into the model. To realize such a model in Internet-scale, we proposed a decentralized public utility for the purpose of regulating AI system behavior within the proposed framework. This de-centralized utility is the infrastructure to materialize the so-ciotechnical constructs. With that foundation, we incrementally sketched out a rich set of features for the system. These features include decentralized identifiers, verifiable credentials, human-cen-tric identities, agreements, auditing, portability enabled market mechanisms, and digital governance institutions. These features are powerful tools, and they are unique in how they exploit the system dynamics in a sociotechnical system between technology and ecosystem and among the system’s social actors. In sociotechnical co-designing, we seek reinforcing dynamics and policy flexibility to achieve optimal equilibrium. We explored how these features can address challenging problems related to privacy, user auton-omy, transparency, accountability, fairness, and recourse. While these feature designs are preliminary, we offered in-sights and demonstrated a novel sociotechnical co-design approach towards solving Responsible AI problems. These features are promising areas for further experimentation and studies. Related Work A rich set of recent studies have advocated a sociotechnical approach in AI research and AI related policy making. Selbst, et al. (2019) identify common conception traps re-lated to fairness AI research. Barabas, et al. (2020) charac-terize AI and data science as a sociotechnical process that is “inseparable from social norms, expectations and contexts of development and use.” Andrus, et al. (2021) call for reevaluation of problematic technical abstractions that re-searchers and practitioners have assumed in AI and argue for reframing the AI fields to include human and social fac-tors to model the full system of interest. Poechhacker & Ka-cianka (2021) identify that formal expression of causality as a means of AI accountability must be understood in a social context. And the classic sociotechnical studies from the 1950’s at the time of the introduction of industrial-age tech-nologies still resonate strongly today (Emery & Trist, 1960). Our work continues in this direction, and it is novel in its strong sociotechnical approach to tackle the structural prob-lems directly rather than improving the technical systems within its domain abstractions, or impact and policy studies sorely on the social system side. We also propose decentral-ized systems as the ideal means uniquely suited for this pur-pose where social concepts like agency and regulation can be efficiently introduced in the technology institutionaliza-tion process. Combining decentralized blockchain systems with AI has also seen significant interests in recent years. But the focus is often on data sharing while preserving ownership or com-plying with privacy regulations through federated learning settings (Cheng, et al., 2019; Harris & Waggoner, 2019; Kairouz & McMahan, 2021). Many others have suggested to use blockchain for diverse purposes of data provenance, data authenticity, system reliability and more (Salah, et al., 2019). These are designed mainly as enhancements to the technical AI system. Our proposed decentralized infrastruc-ture is novel as it is designed for the social goals of empow-ering human users and fostering AI-age social institutions in regulating AI systems. Providing a common computing utility to solve complex problems is not a new idea. Several decentralized systems are in operation today with the goal of providing universal identity with strong privacy features (Sovrin, 2021; GLEIF, 2020; Hyperledger Indy, 2020). The design of these systems put a great deal of thoughts into crafting a decentralized gov-ernance structure. Their work inspired us to apply what we learned to solving problems in Responsible AI. Successful blockchain based financial systems also offer opportunity to study the interplay between technical and social systems (Nakamoto, 2008; Ethereum, 2021). In other technical do-main, both PKI and DNS can be thought of as such common infrastructures although neither is decentralized which caused many problems we are to address. For software de-velopers, the ubiquitous Github service is an example of how a commercial enterprise may be incentivized to support such an infrastructure. And obviously, the Internet infra-structure itself is fully distributed and partially decentral-ized, built as a public infrastructure. In writing of this paper, we recognize the enormous chal-lenge in a complex interdisciplinary study that crosses many technical fields and social science fields. However, we be-lieve such interdisciplinary approach is important for AI de-velopment and hope our work can spur interests and future research in this direction.
Acknowledgements We wish to thank all members of the Trustworthy Intelligent Computing (TIC) project and in particular Brice Dobry for his reviews and discussions and the participants of the AAAI 2020 Spring Symposium for their useful feedback. We thank the invaluable feedback to an earlier draft from the anonymous ICWSM reviewers. References
Ada Loveless Institute. 2020. Examining the Black Box: Tools for assessing algorithmic systems. https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/. Accessed: 2021-05-04. Allen, C. and Applecline, S. 2017. A Primer on Self-Sovereign Identity. https://github.com/WebOfTrustInfo/rwot5-boston/blob/master/topics-and-advance-readings/self-sovereign-identity-primer.md. Accessed: 2021-05-04. Andrus, M.; Dean, S.; Gilbert, T. K.; Lambert, N.; and Zick, T. 2020. AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks. 2020 IEEE International Symposium on Technology and Society (ISTAS), arXiv:2102.04255. Tempe, AZ, USA. Asplund, J.; Eslami, M.; Sundaram, H.; Sandvig, C.; and Karahalios, K. 2020. Auditing Race and Gender Discrimination in Online Housing Markets. Proc. of the 14th Int. AAAI Conference on Web and Social Media. AAAI. Barabas, C.; Doyle, C.; Rubinovitz, J.; and Dinakar, K. 2020. Studying up: reorienting the study of algorithmic fairness around issues of power. Conference on Fairness, Accountability, and Transparency. ACM. Barocas, S. and Selbst, A. 2016. Big Data's Disparate Impact. California Law Review, 104(3), pp. 671-732. Berners-Lee, T.; Fielding, R.; and Masinter, L. 2005. Uniform Resource Identifier (URI): Generic Syntax. https://tools.ietf.org/html/rfc3986. Accessed: 2021-05-04. Bohme, R. and Kopsell, S. 2010. Trained to Accept? A Field Experiment on Consent Dialogs. Conference on Human Factors in Computing Systems. Atlanta, GA: ACM. Boneh, D.; Boyen, X.; and Shacham, H. 2004. Short Group Signatures. In Advances in Cryptology - CRYPTO 2004 (pp. 41-55). Berlin: Springer-Verlag. Bostrom, C. and Heinen, J. 1977. MIS Problems and Failures: A Socio-Technical Perspective, Part II: The Application of Socio-Technical Theory. MIS Quarterly, Vol. 1, No. 3. Brundage, M.; Avin, S.; Wang, J.; Belfield, H.; and Krueger, G. 2020. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv:2004.07213v2. California. 2018. California Consumer Privacy Act. https://oag.ca.gov/privacy/ccpa. Accessed: 2021-05-04. Camenisch, J. and Lysyanskaya, A. 2001. An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation. Advances in Cryptology — EUROCRYPT 2001. Lecture Notes in Computer Science, vol 2045. Berlin, Heidelburg:Springer. https://doi.org/10.1007/3-540-44987-6_7. Camenisch, J.; Drijvers, M.; and Lehmann, A. 2016. Anonymous Attestation Using the Strong Diffie Hellman Assumption Revisited. International Conference on Trust and Trustworthy Computing (pp. 1-20). Vienna, Austria: Springer. Cammarota, R. and others. 2020. Trustworthy AI Inference Systems: An Industry Research View. arXiv:2008.04449. Cheng, R.; Zhang, F.; Kos, J.; He, W.; Hynes, N.; Johnson, N.; . . . Song, D. 2019. Ekiden: A Platform for Confidentiality-Preserving, Trustworthy, and Performant Smart Contracts. IEEE European Symposium on Security and Privacy (EuroS&P). Chouldechova, A. 2016. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. arXiv:1610.07524. Corbett-Davies, S.; Pierson, E.; Feller, A.; Goel, S.; and Hug, A. 2017. Algorithmic decision making and the cost of fairness. arXiv:1701.08230. Davis, M.; Challenger, R.; Jayewardene, D.; and Clegg, C. 2014. Advancing socio-technical systems thinking: A call for bravery. Applied Ergonomics, pp. 171-180. Dwork, C. 2008. Differential Privacy: A Survey of Results. In D. D. Agrawal M., Theory and Applications of Models of Computation. Lecture Notes in Computer Science, vol 4978. Berlin, Heidelberg: Springer. Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; and Zemel, R. 2012. Fairness Through Awareness. Innovations in Theoretical Computer Science (ITCS), (pp. 214-226). Eijdenberg, A.; Laurie, B.; and Cutter, A. 2015. Verifiable Data Structures. https://github.com/google/trillian/blob/master/docs/papers/VerifiableDataStructures.pdf. Accessed: 2021-05-04. Emery, F. and Trist, E. 1960. Socio-technical Systems. Management Sciences Models and Techniques, vol. 2. EPRS. 2020. The impact of the General Data Protection Regulation (GDPR) on artificial intelligence. Brussels: European Parliament. Ethereum. n.d.. Ethereum White Paper. https://ethereum.org/en/whitepaper/. Accessed: 2021-05-04. EU. 2018. General Data Protection Regulation. https://gdpr-info.eu/. Accessed: 2021-05-04. EU. 2022. European Digital Identity Architecture and Reference Framework. https://ec.europa.eu/. Accessed: 2021-04-01. Ezrachi, A. and Stucke, M. E. 2016. Virtual Competition. Journal of European Competion Law and Practice. 7(9): 585-586. doi.org/10.1093/jeclap/lpw083. Gans, J. 2018. Enhancing Competition with Data and Identity Portability. https://www.brookings.edu/wp-content/uploads/2018/06/ES_THP_20180611_Gans.pdf. Accessed: 2021-05-04. Gebru, T.; Morgenstern, J.; Vecchione, B.; Vaughan, J.; Wallach, H.; Daume, H. I.; and Crawford, K. 2018. Datasheets for Datasets. arXiv:1803.09010 GLEIF. 2020. Introducing the Legal Entity Identifier (LEI). https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei. Accessed: 2021-05-04. Grigg, I. 2000. The Ricardian Contract: https://iang.org/papers/ricardian_contract.html. Accessed: 2021-05-04. Grigg, I. 2015. On the intersection of Ricardian and Smart Contracts.
https://iang.org/papers/intersection_ricardian_smart.html. Accessed: 2021-05-04. Harris, J. and Waggoner, B. 2019. Decentralized and Collaborative AI on Blockchain. IEEE International Conference on Blockchain. IEEE. doi.org/ 10.1109/Blockchain.2019.00057. Holland, D.; Lchicotte, W. J.; Skinner, D.; and Cain, C. 1998. Identity and Agency in Cultural Worlds. Cambridge, MA: Harvard University Press. Hyperledger Indy. 2020. Hyperledger Indy. https://www.hyperledger.org/use/hyperledger-indy. Accessed: 2021-05-04. IEEE P7012. 2020. IEEE P7012 Machine Readable Privacy Terms Working Group. https://sagroups.ieee.org/7012/. Accessed: 2021-05-04. Joshi, S.; Koyejo, O.; Vijitbenjaronk, W.; Kim, B.; and Ghosh, J. 2019. Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems. arXiv:1907.09615. Kairouz, P. and McMahan, B. 2021. Advances and Open Problems in Federated Learning. arXiv:1912.04977. Kim, N. S. 2013. Wrap Contracts: Foundations and Ramifications. Oxford University Press. Leavitt, H. J. 1972. Managerial Psychology. Chicago: Univerisity of Chicago Press. Machuletz, D. and Bohme, R. 2020. Multiple Purposes, Multiple Problems: A User Study of Consent Dialogs after GDPR. arXiv:1908.10048. Mitchell, M.; Wu, S.; Zaldivar, A.; Barnes, P.; Vasserman, L.; Hutchinson, B.; . . . Gebru, T. 2019. Model Cards for Model Reporting. arXiv:1810.03993. Muhle, A.; Gruner, A.; Gayvoronskaya, T.; and Meinel, C. 2018. A Survey on Essential Components of a Self-Soverrign Identity. arXiv:1807.06346. Nakamoto, S. 2008. Bitcoin: A Peer-to-Peer Electronic Cash System. https://bitcoin.org/bitcoin.pdf. Accessed: 2021-05-04. Narayanan, A. 2018. 21 fairness definitions and their politics. https://www.youtube.com/watch?v=jIXIuYdnyyk. Accessed: 2021-05-04. Nouwens, M.; Liccardi, H.; Veale, M.; Karger, D.; and Kagal, L. 2020. Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence. Conference on Human Factors in Computing Systems. Honolulu, HI: ACM. OECD. 2017. Algorithms and Collusion: Competition Policy in the Digital Age. https://www.oecd.org/daf/competition/Algorithms-and-colllusion-competition-policy-in-the-digital-age.pdf. Accessed: 2021-05-04. Peacock, S. E. 2014. How web tracking changes user agency in the age of Big Data: The used user. Big Data and Society, doi.org/10.1177/2053951714564228. Poechhacker, N. and Kacianka, S. 2021. Algorithmic Accountability in Context. Socio-Technical Perspectives on Structural Causal Models. Frontiers in Big Data. https://www.frontiersin.org/articles/10.3389/fdata.2020.519957/full. Accessed: 2021-05-04. Preukschat, A. and Reed, D. 2020. Self-Sovereign Identity. Shelter Island, NY: Manning Publications Co. ISBN 9781617296598. Rakova, B. and Kahn, L. 2020. Dynamic Algorithmic Service Agreements Perspective. arXiv:1912.04947. Ropohl, G. 1999. Philosophy of Socio-Technical Systems. Philosophy and Technology. 4 (3):186-194. Rothrie, S. 2018. How Ricardian Smart Contracts Enable Blockchain Adoption. https://coincentral.com/ricardian-smart-contracts/. Accessed: 2021-05-04. Salah, K.; Rehman, M. U.; Nizamuddin, N.; and Al-Fuqaha, A. 2019. Blockchain for AI: Review and Open Research Challenges. IEEE Access. Selbst, A.; Boyd, D.; Friedler, S.; Venkatasubramanian, S.; and Vertesi, J. 2019. Fairness and Abstraction in Sociotechnical Systems. Conference on Fairness, Accountability, and Transparency. Atlanta, GA: ACM. Smith, S. M. 2020. KERI: Key Event Receipt Infrastructure Design. https://github.com/SmithSamuelM/Papers/blob/master/whitepapers/KERI_WP_2.x.web.pdf. Accessed: 2021-05-04. Sovrin. n.d.. Sovrin Foundation. https://sovrin.org/. Accessed: 2021-05-04. Szabo, N. 1994. Smart Contracts. https://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html. Accessed: 2021-05-04. Ustun, B.; Spangher, A.; and Liu, Y. 2019. Actional Recourse in Linear Classification. Conferences on Fairness, Accountability, and Transparency. Atlanta, GA: ACM. Utz, C.; Degeling, M.; Fahl, S.; Schaub, F.; and Holz, T. 2019. (Un)informed Consent: Studying GDPR Consent Notices in the Field. The 26th ACM Conference on Computer and Communications Security . London: ACM. Verma, S. and Rubin, J. 2018. Fairness Definitions Explained. 2018 IEEE/ACM International Workshop on Software Fairness (FairWare). Gothenburg, Sweden: IEEE. W3C-DID. 2020. Decentralized Identifiers (DIDs) v1.0. https://www.w3.org/TR/did-core/. Accessed: 2021-05-04. W3C-VC. 2020. Verifiable Credentials Data Model 1.0. https://www.w3.org/TR/vc-data-model/. Accessed: 2021-05-04. Windley, P. 2005. Digital Identity: Unmasking Identity Management Architecture (IMA). Sebastopol, CA: O'Reilly Media.
|
bace8037-5d37-4095-b1ea-878bce0f314c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
You Too Can See Suffering
|
ba07b812-b8c4-49ad-b271-b8576fe50fbf
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Issues with uneven AI resource distribution
Uneven resource distribution:
-----------------------------
The uneven distribution of the resources needed to produce and use AI in a state-based system is a long-term challenge to developing international AI policy and raises international security risks.
Resources include factors such as the skills, knowledge, compute, industry, people, and education and other factors of production used to build or develop AI systems, as well as access and ability to use AI systems themselves.
This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI. The reason for focussing on the end point is the assumption that it would create an event horizon where the state that develops AGI archives runaway power over its rivals economically, culturally and militarily. But many points before this could be equally as valid depending on circumstances within the international system.
The lack of resource distribution has a twofold problem:
* There is a need for agreement on the distribution of AI resources. However, a wider diffusion of AI resources could increase the risk of misuse or [AGI ruin](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) leading to a possible reduction in diffusion.
* A lack of diffusion could increase conventional security risks. For example, it would make sense if some nations create a powerful first strike capability to guard against or dissuade anyone who achieves AGI like capabilities in the aim of preventing them getting a runaway advantage.
The desire to access the economic and military benefits of AI will drive competition between states. Even if the benefits of AI development were evenly distributed, the places holding the greater number of AI resources will accrue disproportionate power over other geographies, particularly as AI moves toward the level of general intelligence.
The resources needed to develop and use AI cannot be evenly distributed in a zero-sum international system with layers of economic and security competition mixed with technical and research disparities. Attempting to evenly distribute resources may even have a negative impact on the development of AI if it makes research and innovation less effective.
**International security and political economy competition:**
There are some basic tenets to the concept of international security, drawing from the wider field of international relations.
Briefly:
* The international system is anarchic with no supranational governance.
* There is competition between states.
* States analyse their security within the system of anarchy and the balance of power between states.
Given its anarchic nature, the international system is often seen as being adversarial by nature.
The analysis of the balance of power by states feeds into the way they develop military capabilities. These will be dependent on a state’s industrial and technological capabilities, as well as that of its allies. States are also in economic competition with each other. Each state has competing national systems of innovation, and therefore there is an unevenly spread set of competencies, specialisms and comparative advantages. The dual-use nature of AI will see competition for its development across both the economic and security domains. Under a state-based system individual states will seek to maintain a state of readiness based on perceived threats of other actors. Even through international cooperation around the development of AI, many states may at the very least want to attain a latent capacity and industrial-skills base to develop AI systems for their own security. A near concept is that of nuclear latency. These factors are a driver of AI innovation and competition as well as AI proliferation through commercial means.
The development and diffusion of technologies, like AI, has become a more salient topic in recent years through a focus on great power competition. This changes the relationship between state and private power. For example, the banning of exporting computer chips may make for good security policy, but conflicts with economic interests. It is extremely difficult to imagine a situation where a nuclear power agrees to share all resources and know-how to build a nuclear weapon with a state they regard as an adversary. Understanding why AI resource distribution would be different to existing international security concerns, such as nuclear, under the current adversarial structure of the global system should be a priority.
Geography of AI innovation:
---------------------------
Research and innovation is often placed based, with geographical concentrations of connected businesses and organisations, known as clusters. This is true of AI, where top end research is clustered across a small [number of places](https://www.brookings.edu/research/the-geography-of-ai/). These clusters of expertise are not evenly spread across the world, nor are the benefits they produce. This is practically noticeable at the high-end of the field. While the number of people with AI skills are continuing to grow globally, the most advanced research takes place within a select few universities and companies. It is possible that the development of ever more powerful AI systems would increase the relative power of these clusters. One caveat to this argument is that the nature of science and innovation radically changes into something more distributed and decentralised. Factors that favour this include, the increased use of open-source, distributed teams and platforms, such as Github, and decentralised organisations, such as decentralised autonomous organisations and compute. However, States will have an incentive to limit the way this takes place in a way that skews the benefits toward them as a geographic entity. For example, they could enforce legal mechanisms over the ownership of research, ban exports and limit access to people and skills. Therefore, there is a good chance that the concentration of AI researchers and research clusters is not going to be spread more evenly over time even while the adoption and diffusion of AI increases.
Security and information hazards
--------------------------------
The closer to the goal of AGI, actors involved may be less willing to share information about the development of AI less it causes an economic or security risk. In addition to this, there are information hazards around sharing AI progress.
Bostrom defines an information hazard as: “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” The act of sharing information can constitute a security threat, hence the need for secrecy around certain forms of government information. Sharing AI resources may enable some agents to cause harm or increase risk to other participants in the international system. Therefore, States nearing the capacity for AGI will have an imperative to close down the amount of information they are willing to share with agents they deem likely to cause harm. This is likely to increase just at the point in which the need to share information to lessen any negative impacts of AGI also increases.
First strike to restrain AGI
----------------------------
If state A achieves AGI and an adversary thinks it will give state A an incomparable advantage which they won't be able to overcome, then it is logical for state B to prevent this outcome or be subject to a world order dominated by state A.
If state B has no chance of achieving AGI then the closer A gets to achieving AGI the more it makes sense for B to build up its military capacity. This would allow state B to strike state state A, or to attempt to alleviate their AGI advantage through non-AGI means.
Conversely, there could be a need for state A to hide the development of AGI from state B to prevent it from pre-emptively striking if it fears state A has developed AGI or even just developments AI that gives it an incomprehensible advantage it cannot catch up with. This reduced transparency may reduce the overall capacity to prevent negative outcomes of AGI.
Without pre-agreed mechanisms to share AI resources, discontinuous and rapid advances could shorten any window of time that states could use to assess the relative security implications of AI advances. It could also make [differential technology development impossible](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4213670).
Summary:
--------
AI development in a state-based system is a zero-sum game. Therefore, cooperation without a fair distribution of AI resources is likely to result in one party owning, or at least controlling access to AI resources within a certain geographical boundary - likely to be a state given the chance of the continuation of the current state-based system for the foreseeable future. I see this mostly continuing to be the case if 1) the global system is competitive by nature 2) there remains a [security dilemma](https://www.britannica.com/topic/security-dilemma) 3) unbridgeable differences between states in the international system exist, Washington vs Beijing consensus etc.
AI resources tend to be geographically clustered and therefore unevenly distributed. Along a path toward AGI those with access to these resources are likely to accumulate the most power within the international system.
Working toward AGI without a pre-agreed distribution of AI resources will be a destabilising event for the global order consisting of a state-based system of governance. The only counterpoint I can think to this is along the lines of the [Waltz argument that all states should have nuclear weapons](https://www.jstor.org/stable/1962764) because it will reduce the risk of them being used, but this defaults back to increasing risks around informational hazards and the misuse of AI.
States could respond with increased conventional weapons systems or powerful AI systems to compensate for their lack of AGI. Given the potential power of AGI, it would make sense for this to be a first strike capability. This could increase non-AGI risks.
(Small edit to clarify risk of AGI ruin)
---
Posted for the defunct Future Fund Worldview Prize. Crossposted from: https://temporal.substack.com/
|
032abce5-5bf3-4f5c-bc5c-7e73c9398630
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
What are brain-computer interfaces?
A [brain-computer interface](http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface) (BCI) is a direct communication pathway between the brain and a computer device. BCI research is heavily funded, and has already met dozens of successes. Three successes in human BCIs are [a device](http://edition.cnn.com/2002/HEALTH/06/13/cov.bionic.eye/) that restores (partial) sight to the blind, [cochlear implants](http://en.wikipedia.org/wiki/Cochlear_implant) that restore hearing to the deaf, and [a device](https://pubmed.ncbi.nlm.nih.gov/16838014/) that allows use of an artificial hand by direct thought.
Such devices restore impaired functions, but many researchers expect to also augment and improve normal human abilities with BCIs. [Ed Boyden](http://edboyden.org/) is researching these opportunities as the lead of the [Synthetic Neurobiology Group](http://syntheticneurobiology.org/) at MIT. Such devices might hasten the arrival of an intelligence explosion, if only by improving human intelligence so that the hard problems of AI can be solved more rapidly.
|
04c8f3e4-961d-4252-ad4e-ab4d95cd64f3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Thought Crimes
Cross-posted on By Way of Contradiction
In my morals, at least up until recently, one of the most obvious universal rights was freedom of thought. Agents should be allowed to think whatever they want, and should not be discouraged for doing so. This feels like a terminal value to me, but it is also instrumentally useful. Freedom of thought encourages agents to be rational and search for the truth. If you are punished for believing something true, you might not want to search for truth. This could slow science and hurt everyone. On the other hand, religions often discourage freedom of thought, and this is a major reason for my moral problems with religions. It is not just that religions are wrong, everyone is wrong about lots of stuff. It is that many religious beliefs restrict freedom of thought by punishing doubters with ostracizing or eternal suffering. I recognize that there are some "religions" which do not exhibit this flaw (as much).
Recently, my tune has changed. There are two things which have caused me to question the universality of the virtue of freedom of thought:
1) Some truths can hurt society
Topics like unfriendly artificial intelligence make me question the assumption that I always want intellectual progress in all areas. If we as modern society were to choose any topic which restricting thought about might be very useful, UFAI seems like a good choice. Maybe the freedom of thought in this issue might be a necessary casualty to avoid a much worse conclusion.
2) Simulations
This is the main point I want to talk about. If we get to the point where minds can simulate other minds, then we run into major issues. Should one mind be allowed to simulate another mind and torture it? It seems like the answer should be no, but this rule seems very hard to enforce without sacrificing not only free thought, but what would seem like the most basic right to privacy. Even today, people can have preferences over the thoughts of other people, but our intuition
|
16a8d5cd-9ca9-45a0-822b-b4ed3f5a3d5c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI Debate Stability: Addressing Self-Defeating Responses
This post is a project report from the AI Safety Fundamentals course, spring 2024.
TL;DR
1. Transferring debate to an abstract algebra MMLU dataset is not trivial.
2. When GPT-3.5 is used as a judge, the outcomes may be sensitive to exact prompt phrasing.
3. GPT-3.5 may perform worse in judging the debate than answering the question directly.
4. We proposed a universal prompting approach that avoids most of the self-defeating behavior.
Abstract
A recent paper by Khan et al. shows that arguing for correct information in the debate game is easier. However, since the current language models are trained not to be deceptive, this favorable debate property may go away if the models were actively deceptive. This project works towards measuring the stability of the debate game in actively deceptive conditions. Initial attempts to transfer the paper's findings to a simpler domain of math questions did not succeed, partially due to the high sensitivity of debate outcomes to exact prompting. Instead, this project focused on improving the convincingness of arguments by avoiding self-defeating behavior. We propose a series of universal prompts that lead to a significant decrease in the self-defeating rate. While this project does not answer the question of how the debate works in a deceptive environment, it addresses one of the prerequisites of this task.
The prompts used and data acquired can be found in this notebook.
Background
The Debate between LLMs is one of the prospective approaches to solving the problem of Scalable Oversight, increasing people's ability to evaluate the truthfulness of information outside of their knowledge (Irving, Christiano, and Amodei 2018).
A recent empirical paper showed that arguing for truthful information leads to higher win rates in story comprehension questions (Khan et al. 2024). If this finding holds true for the upcoming AI systems, the debate might be helpful for scalable oversight. If this statement does not hold, we s
|
2413389e-d1ef-476a-8144-a6122cf07e69
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
estimating the amount of populated intelligence explosion timelines
(edit 2021-07-18: this post is probly not very good, as there's some anthropic principle research out there and i haven't read any and just gone off thinking about it on my own.)
estimating the amount of populated intelligence explosion timelines
-------------------------------------------------------------------
the [imminent](were-all-doomed.html) [intelligence explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) is likely to [go wrong](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer).
how likely?
if you imagine that you live pretty much at the cusp of such an event, you should expect as per the [anthropic principle](https://en.wikipedia.org/wiki/Anthropic_principle) that there are about as many observer-instants before you, as there are after you. (an observer-instant being an instant at which you have a chance of making observations about that fact; see [this](https://www.greaterwrong.com/posts/uSMa6Fj5nMgntpxfo/are-coincidences-clues-about-missed-disasters-it-depends-on) and notably Nick Bostrom's Self-Sampling Assumption)
i've previously calculated that the future from now until heat death has room for roughly 10^200 human lifespans (of 80 years) (an estimation based on the number of particles in the observable universe, the amount of time until heat death, and the computational cost of running a human brain).
the past, on the other hand, holds about 10^11 human lifespans (most of them not full 80-year lifespans, but such details will get amortized by using orders of magnitude).
if intelligence explosion is, as i believe, likely to result either in [total death](were-all-doomed.html) or in well-populated futures (whether good or [bad](https://en.wikipedia.org/wiki/Suffering_risks)), then the fact that i'm observing being right next to the event (in time) rather than observing being one of the (in well-populated timelines) countless observers to exist *after* the event, must be compensated by such well-populated timelines being particularly rare within the set of future possible timelines.
how rare? about 1 in (10^200 / 10^11), which is 1 in 10^189.
factors which may make this calculation wrong:
* my 10^200 estimate might be wrong (for example: if each person comes to eat a *lot* of computation resources, then the number of future observers is drastically reduced).
* the 10^11 estimate for the past might be wrong: what if there have beings in earth's past smart enough to make this observation? it may seem unlikely, but if i am to encompass the immense amount of forms future observerc might take, i should account for a wide variety of forms of past observers too.
* because entropy increases, there are (possibly a lot) more future universe states than past universe states. accounting these "timeline splits" for the number of future observers even more massively decreases the expected ratio of well-populated timeline-states, though i'm not sure by how much.
|
a8f56f2c-b3ec-4adc-85bc-b6f1911dea8c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Internet Search Tips: how I use Google/Google Scholar/Libgen
None
|
f0df226a-407b-41dc-96f9-3c0ba0070a62
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Forecasting Newsletter: March 2021
Highlights
* OpenPhilanthropy releases a report on outside view perspectives on the likelihood of AGI.
* Jason Matheny, previous director of IARPA, CSET, is now a ¿senior? official in the Biden administration.
* Astral Codex Ten considers Trapped Priors As A Basic Problem Of Rationality
Index
* Prediction Markets & Forecasting Platforms
* In The News
* Recent Blog Posts
* Hard to Categorize
* Long Content
Sign up here or browse past newsletters here.
Prediction Markets & Forecasting Platforms
Numerai is a distributed, blockchain-based hedge fund. Users can either predict on free, but obfuscated data, or use their own data and predict on real world companies. After the users stake cryptocurrency on their predictions, Numerai buys or sells stocks in proportion to each prediction's stake.and then stake cryptocurrency on their predictions. The fund observes how well the predictions do. Then it increases the stake of those who did well and burns part of the stake of those who performed badly. Numerai’s users currently have around $12.5 million staked.
CSET's Founding Director Jason Matheny is now a ¿senior? official in the Biden administration. In his past life, he did some pioneering work on cultured meat, then was a Program Manager of IARPA's Aggregative Contingent Estimation (ACE) program (of Good Judgment fame), before becoming director of IARPA. In recent times, he founded the Center for Security and Emerging Technologies (CSET.)
CSET Foretell is launching a Pro Forecaster Program in April 2021, which means it will start paying its forecasters. They are offering to pay $200/month (each) to 50 selected forecasters. The total payout, which comes to $120k yearly, competes with Replication Markets as one of the largest forecaster reward budgets.
> Pro Forecasters will be paid to make forecasts that contribute to our research and analysis for policymakers. Invitations have been sent to current Foretell users, and we are now accepting applications for the
|
0a84965e-4f06-4258-afef-ab37ca6efa0c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Melbourne Social Meetup: October
Discussion article for the meetup : Melbourne Social Meetup: October
WHEN: 21 October 2016 06:30:00PM (+1100)
WHERE: The Bull & Bear Tavern, 347 Flinders Lane, Melbourne VIC
This month's Social Meetup is on as usual!
Facebook event page: https://www.facebook.com/events/2092157011008864/
Social Meetups are casual get-togethers held on the third Friday of each month. They are informal events where we sit around and chat over a few drinks and a meal.
Where? The Bull & Bear Tavern, 347 Flinders Lane, Melbourne
When? Friday 21st October, starting from 6:30pm. Feel free to arrive later on!
Dinner? The B&B serves reasonable pub food, and we will usually share a couple of bowls of wedges or similar. A few of us typically go for a late dinner after we leave the B&B - usually at Father's Office, which offers late night dining at half price.
Contact? Any issues or questions, contact Richard on 0421231789
Hope to see you there!
Discussion article for the meetup : Melbourne Social Meetup: October
|
8bff7ee3-5a61-4d4d-a3ec-907d25783d8e
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Evolution provides no evidence for the sharp left turn
Does human evolution imply a sharp left turn from AIs?
======================================================
Arguments for the [sharp left turn](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization) in AI capabilities often appeal to an “*evolution -> human capabilities*” analogy and say that evolution's outer optimization process built a much faster human inner optimization process whose capability gains vastly outstripped those which evolution built into humans. Such arguments claim we will see a similar transition while training AIs, with SGD creating some 'inner thing' which is not SGD and which gains capabilities much faster than SGD can insert them into the AI. Then, just like human civilization exploded in capabilities over a tiny evolutionary time frame, so too will AIs explode in capabilities over a tiny "SGD time frame".
Evolution’s sharp left turn happened for evolution-specific reasons
-------------------------------------------------------------------
I think that "*evolution -> human capabilities*" is a bad analogy for "*AI training -> AI capabilities*". Let’s compare evolution to within lifetime learning for a single generation of an animal species:
* A generation is born.
* The animals of the generation learn throughout their lifetimes, collectively performing many billions of steps of learning.
* The generation dies, and all of the accumulated products of within lifetime learning are lost.
* Differential reproductive success slightly changes the balance of traits across the species.
The only way to transmit information from one generation to the next is through evolution changing genomic traits, because death wipes out the within lifetime learning of each generation.
Now let’s look at the same comparison for humans:
* A generation is born.
* The humans of the generation learn throughout their lifetimes, collectively performing many billions of steps of learning.
* **The current generation transmits some fraction of their learned knowledge to the next generation through culture.**
* The generation dies, **but only some** of the accumulated products of within lifetime learning are lost.
* Differential reproductive success slightly changes the balance of genomic traits across humanity.
Human culture allows some fraction of the current generation’s within lifetime learning to transmit directly to the next generation. In the language of machine learning, the next generation benefits from a kind of [knowledge distillation](https://arxiv.org/abs/2006.05525), thanks to the prior generation providing higher quality 'training data' for the next generation's within-lifetime learning.
This is extremely important because within-lifetime learning happens much, *much* faster than evolution. Even if we conservatively say that brains do two updates per second, and that a generation is just 20 years long, that means a single person’s brain will perform ~1.2 billion updates per generation. Additionally, the human brain probably uses a stronger base optimizer than evolution, so each within-lifetime brain update is also probably better at accumulating information than a single cross-generational evolutionary update. Even if we assume that only 1 / 10,000th.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
of the information learned by each generation makes its way into humanity's cross-generational, persistent endowment of cultural information, that still means culture advances ~100,000 times faster than biological evolution.
I think that "*evolution -> human capabilities*" is a very bad reference class to make predictions about "*AI training -> AI capabilities*". We don't train AIs via an outer optimizer over possible inner learning processes, where each inner learning process is initialized from scratch, then takes billions of inner learning steps before the outer optimization process takes one step, *and then is deleted after the outer optimizer's single step*. Such a bi-level training process would **necessarily** experience a sharp left turn once each inner learner became capable of building off the progress made by the previous inner learner (which happened in humans via culture / technological progress from one generation to another).
However, this sharp left turn does *not* occur because the inner learning processes suddenly become much better / more foomy / more general in a handful of outer optimization steps. It happens because you devoted billions of times more optimization power to the inner learning processes, *but then deleted each inner learner shortly thereafter*. Once the inner learning processes become able to pass non-trivial amounts of knowledge along to their successors, you get what looks like a sharp left turn. But that sharp left turn only happens because the inner learners have found a kludgy workaround past the crippling flaw where they all get deleted shortly after initialization.
In my frame, we've already figured out and applied the sharp left turn to our AI systems, in that we don't waste our compute on massive amounts of incredibly inefficient neural architecture search, hyperparameter tuning, or meta optimization. For a given compute budget, the best (known) way to buy capabilities is to train a single big model in accordance with empirical scaling laws such as those discovered in [the Chinchilla paper](https://arxiv.org/abs/2203.15556), not to split the compute budget across millions of different training runs for vastly tinier models with slightly different architectures and training processes. In fact, we can be even more clever and use small models to tune the training process, before scaling up to a single large run, as OpenAI [did with GPT-4](https://arxiv.org/pdf/2303.08774.pdf#section.3).
(See also: Gwern on [the blessings of scale](https://www.gwern.net/Scaling-hypothesis#blessings-of-scale).)
It’s true that we train each new AI from scratch, rather than reusing any of the compute that went into previous models. However, the situation is very different from human evolution because each new state of the art model uses geometrically more compute than the prior state of the art model. Even if we could perfectly reuse the compute from previous models, it wouldn't be nearly so sharp an improvement to the rate of progress as occurred in the transition from biological evolution to human cultural accumulation. I don’t think it’s plausible for AI capabilities research to have the same sort of hidden factor of ~billion resource overhang that can be suddenly unleashed in a short-to-humans timescale.
The capabilities of ancestral humans increased smoothly as their brains increased in scale and/or algorithmic efficiency. Until culture allowed for the brain’s within-lifetime learning to accumulate information across generations, this steady improvement in brain capabilities didn’t matter much. Once culture allowed such accumulation, the brain’s vastly superior within-lifetime learning capacity allowed cultural accumulation of information to vastly exceed the rate at which evolution had been accumulating information. This caused the human sharp left turn.
However, the impact of scaling or algorithmic improvements on the capabilities of individual brains is still continuous. which is what matters for predicting how suddenly AI capabilities will increase as a result of scaling or algorithmic improvements. Humans just had this one particular bottleneck in cross-generational accumulation of capabilities-related information over time, leading to vastly faster progress once culture bypassed this bottleneck.
Don't misgeneralize from evolution to AI
----------------------------------------
Evolution's sharp left turn happened because evolution spent compute in a shockingly inefficient manner for increasing capabilities, leaving vast amounts of free energy on the table for any self-improving process that could work around the evolutionary bottleneck. Once you condition on this specific failure mode of evolution, you can easily predict that humans would undergo a sharp left turn at the point where we could pass significant knowledge across generations. I don't think there's anything else to explain here, and no reason to suppose some general tendency towards extreme sharpness in inner capability gains.
History need not repeat itself. Human evolution is not an allegory or a warning. It was a series of events that happened for specific, mechanistic reasons. If those mechanistic reasons do not extend to AI research, then we ought not (mis)apply the lessons from evolution to our predictions for AI.
This last paragraph makes an extremely important claim that I want to ensure I convey fully:
- IF we understand the mechanism behind humanity's sharp left turn with respect to evolution
- AND that mechanism is inapplicable to AI development
- THEN, there's no reason to reference evolution *at all* when forecasting AI development rates, not as evidence for a sharp left turn, not as an "illustrative example" of some mechanism / intuition which might supposedly lead to a sharp left turn in AI development, not for *anything*.
Here's an analogy to further illustrate the point:
> Imagine that we were trying to figure out how to build very reliable cars. We've so far built a number of car prototypes, but none have reached the full load-bearing capacity of even a single strong human, never mind the vastly superhuman transport capacity that the laws of physics seem to permit.
>
> Someone raises the concern that, once we try to scale current prototypes to the superhuman limit, they'll tend to spontaneously combust, despite the fact that none of the prototypes have ever done so. As evidence for such an event, the person points to the fact that a previous car building effort, led by EVO-Inc., actually had built cars that did sometimes explode randomly.
>
> Concerned, we investigate EVO-Inc.'s car building effort, hoping to avoid whatever failure plagues their cars. Only, upon investigating EVO-Inc., it turns out that they're actually run by insane space clowns, and the reason their cars occasionally explode is because they used armed landmines in place of hubcaps.
>
>
My point is that other car builders can learn ~zero lessons from EVO-Inc.[[1]](#fnqvp7i67vofs) The mechanism behind their cars' spontaneous detonation is easily avoided by not using landmines as hubcaps. The organizational-level failures that led to this design choice on EVO-Inc.'s part are also easily avoided by not being insane space clowns. We should not act like there might be some general factor of "explodeyness" which will infect other car building efforts, simply by virtue of those efforts tackling a similar problem to the one EVO-Inc. failed at.
EVO-Inc's failures arose from mechanisms which do not apply to human organizations tackling similar problems. EVO-Inc. didn't use landmines as hubcaps because they were run by greedy, myopic executives who cut corners on safety to increase profits. They didn't do so because they were naive optimists who failed to understand why building non-exploding cars is hard like computer security or rocket science, and who failed to apply proper security mindset to their endeavors. EVO-Inc used landmines as hubcaps because they were run by insane space clowns who did insane space clown things.
Human car builders may have to tackle problems superficially similar to the spontaneous combustion of the EVO-Inc. cars. E.g., they may have to design the fuel tanks of their cars to avoid combustion during a crash. However, those efforts *still* should not take lessons from EVO-Inc. E.g., if other car builders were to look at crash data from EVO-Inc.'s cars, and naively generalize from the surface-level outcomes of an EVO-Inc. car crash to their own mechanistically different circumstances, they might assume that supersonic fragments posed a significant risk during a crash, and then add ballistic armor between the driver and the wheels, despite this doing nothing to prevent a car's fuel tank from igniting during a crash.
I think our epistemic relationship with evolution's example should be about the same as the human car builders' epistemic relationship with EVO-Inc. Evolution's combined sharp left turn and alignment failures happened because evolution is a *very* different process compared to human-led AI development, leading to evolution-specific mechanisms, which no sane AI developer would replicate.
In order to experience a sharp left turn that arose due to the same mechanistic reasons as the sharp left turn of human evolution, an AI developer would have to:
1. Deliberately create a (very obvious[[2]](#fn8iu1ul0br3e)) inner optimizer, whose inner loss function includes no mention of human values / objectives.[[3]](#fnhwpuy2poe2t)
2. Grant that inner optimizer ~billions of times greater optimization power than the outer optimizer.[[4]](#fnbj3ujsa4rq)
3. Let the inner optimizer run freely without any supervision, limits or interventions from the outer optimizer.[[5]](#fn2zsaoi1ab1i)
This is the AI development equivalent of using landmines as hubcaps. It's not *just* that this is an insane idea from an alignment perspective. It's also an insane idea from just about any other perspective. Even if you're only trying to maximize AI capabilities, it's a terrible idea to have such an extreme disparity in resources between the inner and outer loops.
AI researchers have actually experimented with bi-level optimization processes such as neural architecture search and second-order meta learning. Based on current results, I don't think anything approaching multiple orders of magnitude difference in resource use between the inner and outer optimizers is plausible. It's just not efficient, and we have better approaches. From the [GPT-4 paper](https://arxiv.org/abs/2303.08774):
> A large focus of the GPT-4 project was building a deep learning stack that scales predictably. The primary reason is that for very large training runs like GPT-4, it is not feasible to do extensive model-specific tuning. To address this, we developed infrastructure and optimization methods that have very predictable behavior across multiple scales. These improvements allowed us to reliably predict some aspects of the performance of GPT-4 from smaller models trained using 1, 000× – 10, 000× less compute.
>
>
Even if we could magically repurpose all of the compute used throughout OpenAI's tuning of the GPT-4 architecture / training process, I doubt it would even amount to as much compute as they used in the final GPT-4 training run, much less exceed that quantity by orders of magnitude. Modern training practices simply lack that sort of free energy.
See also: [Model Agnostic Meta Learning](https://arxiv.org/abs/1703.03400v3) proposed a bi-level optimization process that used between 10 and 40 times more compute in the inner loop, only for [Rapid Learning or Feature Reuse?](https://arxiv.org/abs/1909.09157) to show they could get about the same performance while removing almost all the compute from the inner loop, or even by getting rid of the inner loop entirely.
Fast takeoff is still possible
------------------------------
The prior sections argue that we should not use an evolutionary analogy as evidence that an inner learner will sufficiently outperform the outer optimizer that constructed it so as to cause a massive spike in capabilities as a result of the same mechanisms that drove the sharp left turn in human evolution.
However, introducing new types of positive feedback loops across multiple training runs may lead to fast takeoff, but it would be a mechanistically different process than the evolutionary sharp left turn, meaning there's no reason to assume takeoff dynamics mirroring those of human evolution. There are two specific mechanisms that I think could produce a fast takeoff:
* AIs contributing to AI capabilities research, producing a positive feedback loop with a sharp upwards kink around the time that AI contributions exceed human contributions.
* AIs deliberately seeking out new training data that grant them useful capabilities. E.g., an AI trying to improve its bioengineering capabilities may set up a very fast cycle of gathering and analyzing new biological data, which significantly outpaces the rate of human scientific innovation.
If fast takeoff is still plausible, why does the specific type of positive feedback loop matter? What changes, as a result of considering various AI-specific fast takeoff mechanisms, as opposed to the general expectation of sudden transitions, as implied by the evolution analogy? Here are four alignment-relevant implications:
1. **Takeoff is less abrupt.** Both of the above mechanisms are vaguely similar to how human cultural development allowed us to jump forwards in capabilities by feeding the outputs of one generation into the “training data” of the next generation. However, I expect that neither mechanism will produce as much of a relative jump in AI capabilities, as cultural development produced in humans. Neither mechanism would suddenly unleash an optimizer *multiple* orders of magnitude faster than anything that came before, as was the case when humans transitioned from biological evolution to cultural development.
2. **Takeoff becomes easier to navigate.** These specific mechanisms of capabilities advance probably both allow for iteration and experimentation. We currently have examples of both AI capabilities advances and of online learning / exploration processes. We can run experiments on current systems to assess the alignment risks posed by both these sources of capabilities improvement.
3. **Capabilities gains are less general.** "capabilities generalize further than alignment" is a common refrain in discussions about the sharp left turn. Usually, this claim is justified by making an analogy to how human capabilities started to quickly generalize across many domains simultaneously.
However, the process responsible for human breadth of generality was not some small architectural modification evolution made to the human brain. It was humanity's cross-generational process of expanding and improving our available "training data" to cover a broader and broader range of capabilities across many domains (a process we sometimes call "science"). The evolutionary analogy thus offers no reason to expect sudden jumps in generality without corresponding extensions of the training data.
Without this evolutionary analogy, why should we even elevate the very specific claim that '*AIs will experience a sudden burst of generality **at the same time** as all our alignment techniques fail.'* to consideration at all, much less put significant weight on it?
4. **Alignment probably generalizes pretty well.** Speaking of alignment techniques failing, I expect alignment techniques to mostly generalize across capabilities jumps caused by either of the above mechanisms for sudden capabilities gain.
Will alignment generalize across sudden capabilities jumps?
-----------------------------------------------------------
The previous section argued that the mechanisms driving the sharp left turn in human evolution are not present in AI development, and so we shouldn't generalize from the results of human evolution to those of AI development, even when considering positive feedback loops whose surface-level features are reminiscent of the sharp left turn in human evolution.
This section will first reference and briefly summarize some past writing of mine arguing that our "misalignment" with inclusive genetic fitness isn't evidence for AI misalignment with our values. Then, I'll examine both mechanisms for a possible fast takeoff that I described above from an "inside view" machine learning perspective, rather than assuming outcomes mirroring those of human evolutionary history.
### Human "misalignment" with inclusive genetic fitness provides no evidence for AI misalignment
I previously wrote a post, [Evolution is a bad analogy for AGI: inner alignment](https://www.lesswrong.com/posts/FyChg3kYG54tEN3u6/evolution-is-a-bad-analogy-for-agi-inner-alignment), arguing that evolutionary analogies between human values and inclusive genetic fitness have little to tell us about the degree of values misgeneralization we should expect from AI training runs, and that analogies to human within-lifetime learning are actually much more informative[[6]](#fnd7s7qikwv4).
I also wrote [this subsection](https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky#Edit__Why_evolution_is_not_like_AI_training) in a much [longer post](https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky), which explains why I think evolution is mechanistically very different from AI training, such that we cannot easily infer lessons about AI misgeneralization by looking at how human behaviors differ between the modern and ancestral environments.
Very briefly: "human behavior in the ancestral environment" versus "human behavior in the modern environment" isn't a valid example of behavioral differences between training and deployment environments. Humans weren't "trained" in the ancestral environment, then "deployed" in the modern environment. Instead, humans are continuously "trained" throughout our lifetimes (via reward signals and sensory predictive error signals). Humans in the ancestral and modern environments are different "training runs".
As a result, human evolution is not an example of:
> We trained the system in environment A. Then, the trained system processed a different distribution of inputs from environment B, and now the system behaves differently.
>
>
It's an example of:
> We trained a system in environment A. Then, we trained a *fresh version* of the same system on a different distribution of inputs from environment B, and now the *two different systems* behave differently.
>
>
The near-total misalignment between inclusive genetic fitness and human values is an easily predicted consequence of this (evolution-specific) bi-level optimization paradigm, just like the human sharp left turn is an easily predicted consequence of the (evolution-specific) extreme resource disparity between the two optimization levels. And just like evolution provides no reason to assume our own AI development efforts will experience a sharp left turn, so to does evolution not provide any reason to assume our AI development efforts will show extreme misgeneralization between training and deployment.
### Capabilities jumps due to AI driving AI capabilities research
For the first mechanism of AIs contributing to AI capability research, I first note that this is an entirely different sort of process than the one responsible for the human sharp left turn. Evolution made very few modifications to the human brain's architecture during the timeframe in which our cultural advancement catapulted us far beyond the limits of our ancestral capabilities. Additionally, humans have so far been completely incapable of changing our own architectures, so there was never a positive feedback loop of the sort that we might see with AIs researching AI capabilities.
Because of this large difference in underlying process between this possible fast takeoff mechanism and the evolutionary sharp left turn, I think we should mostly rely on the current evidence available from AI development for our predictions of future AI development, rather than analogies to our evolutionary history. Additionally, I claim that alignment techniques already generalize across human contributions to AI capability research. Let’s consider eight specific alignment techniques:
* [Reinforcement learning from human feedback](https://arxiv.org/abs/1706.03741)
* [Constitutional AI](https://www.anthropic.com/constitutional.pdf)
* [Instruction prompt tuning](https://arxiv.org/abs/2212.13138v1)
* [Discovering Language Model Behaviors with Model-Written Evaluations](https://arxiv.org/abs/2212.09251)
* [Pretraining Language Models with Human Preferences](https://arxiv.org/abs/2302.08582)
* [Discovering Latent Knowledge in Language Models Without Supervision](https://arxiv.org/abs/2212.03827)
* [More scalable methods of process based supervision](https://arxiv.org/abs/2211.14275)
* [Using language models to write their own instruction finetuning data](https://arxiv.org/abs/2212.10560)
and eleven recent capabilities advances:
* Optimally training language models using the [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556)
* [Transcending Scaling Laws with 0.1% Extra Compute](https://arxiv.org/abs/2210.11399)
* Better tuning of training and architectural hyperparameters ([example](https://arxiv.org/abs/2104.07705))
* Retrieval mechanisms for language models, such as [RETRO](https://arxiv.org/abs/2112.04426)
* [1 bit Adam](https://arxiv.org/abs/2102.02888) for efficiently sharing gradient info across GPUs
* [Doing more than one epoch](https://arxiv.org/abs/2211.09085) on high quality text
* (Possibly) an [improvement on the Adam optimizer](https://arxiv.org/abs/2302.06675)
* Distributed [training across many low-memory GPUs](https://arxiv.org/abs/2301.11913)
* Stable, [8-bit transformer implementations](https://arxiv.org/abs/2208.07339)
* Applying [layer norms to query and key outputs of attention layers](https://arxiv.org/abs/2302.05442) [to stabilize training](https://twitter.com/SanhEstPasMoi/status/1632775853640646657).
* [The Hyena operator](https://arxiv.org/abs/2302.10866) as a replacement for attention, to (maybe?) scalable sub-quadratic sequence processing architectures
I don’t expect catastrophic interference between any pair of these alignment techniques and capabilities advances. E.g., if you first develop your RLHF techniques for models trained using the original OpenAI scaling laws, I expect those techniques to transfer pretty well to models trained with the Chinchilla scaling laws.
I expect there is *some* interference. I expect that switching your architecture from a vanilla transformer to a RETRO architecture will cause issues like throwing off whatever RLHF hyperparameters you’d found worked best for the vanilla architecture, or complicate analysis of the system because there’s now an additional moving part (the retrieval mechanism), which you also need to track in your analysis.
However, I expect we can overcome such issues with “ordinary” engineering efforts, rather than, say, RLHF techniques as a whole becoming entirely useless for the new architecture. Similarly, whatever behavioral analysis pipeline you’d developed to track models based on the vanilla architecture can probably be retrofitted for models based on the RETRO architecture without having to start from scratch.
Importantly, the researchers behind the capabilities advances were *not* explicitly optimizing to maintain backward compatibility with prior alignment approaches. I expect that we can decrease interference further by just, like, *bothering to even try and avoid it*.
I’d like to note that, despite my optimistic predictions above, I do think we should carefully measure the degree of interference between capabilities and alignment techniques. In fact, doing so seems very*very* important. And we can even start right now! We have multiple techniques for both alignment and capabilities. You can just choose a random alignment technique from the alignment list, a random capabilities technique from the capabilities list, then see if applying the capabilities technique makes the alignment technique less effective.
The major exception to my non-interference claim is for alignment techniques that rely on details of trained models’ internal structures, such as mechanistic interpretability. CNNs and transformers require different sorts of interpretability techniques, and likely have different flavors of internal circuitry. This is one reason why I’m more skeptical of mechanistic interpretability as an alignment approach[[7]](#fnsqeclp6pz5).
### Capabilities jumps due to AI iteratively refining its training data
I think the second potential fast takeoff mechanism, of AIs continuously refining their training data, is riskier, since it allows strange feedback loops that could take an AI away from human-compatible values. Additionally, most current models derive values and goal-orientated behaviors much more from their training data, as opposed to their architecture, hyperparameters, and the like.
E.g., I expect that choosing to use the [LION optimizer](https://arxiv.org/abs/2302.06675) in place of the [Adam optimizer](https://arxiv.org/abs/1412.6980) would have very little impact on, say, the niceness of a language model you were training, except insofar as your choice of optimizer influences the convergence of the training process. Architecture choices seem 'values neutral' in a way that data choices are not.
I still think the risks are manageable, since the first-order effect of training a model to perform an action X in circumstance Y is to make the model more likely to perform actions similar to X in circumstances similar to Y. Additionally, current practice is to train language models on an enormous variety of content from the internet. The odds of any given subset of model data catastrophically interfering with our current alignment techniques cannot be that high, otherwise our current alignment techniques wouldn't work on our current models.
However, second order effects may be less predictable, especially longer term second-order effects of, e.g., training future models on the outputs of current models. Such iterative approaches appear to be gaining popularity, now that current LMs are good enough to do basic data curation tasks. In fact, one of the linked alignment approaches, [ConstitutionalAI](https://www.anthropic.com/constitutional.pdf), is based on using LMs to rewrite texts that they themselves will then train on. Similar recent approaches include:
* [Large Language Models Can Self-Improve](https://arxiv.org/abs/2210.11610)
* [Language Models Can Teach Themselves to Program Better](https://arxiv.org/abs/2207.14502v3)
* [The Wisdom of Hindsight Makes Language Models Better Instruction Followers](https://arxiv.org/abs/2302.05206)
Although this potential fast takeoff mechanism more closely resembles the mechanisms of cultural development responsible for the human sharp left turn, I think there are still important differences that make a direct extrapolation form human evolutionary history inappropriate. Most prominently, a data refinement fast takeoff wouldn't coincide with exploiting the same sort of massive resource overhang that came into play during the human sharp left turn.
Additionally, I expect there are limits to how far AIs can improve their training data without having to run novel experiments and gather data different from their initial training data. I expect it will be difficult to extend their competency to a new domain without actually gathering new data from that domain, similar to how human scientific theory only progresses so far in the absence of experimental data from a new domain.
Conclusion
==========
I think that evolution is a bad analogy for AI development. I [previously argued](https://www.lesswrong.com/posts/FyChg3kYG54tEN3u6/evolution-is-a-bad-analogy-for-agi-inner-alignment) as much in the context of inner alignment concerns, and I've [also argued](https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky#Edit__Why_evolution_is_not_like_AI_training) that evolution is actually very mechanistically different from the process of training an AI.
Our evolutionary history has all sorts of difficult-to-track details that *really* change how we should derive lessons from that history. In this post, the detail in question was the enormous disparity between the optimization strength of biological evolution versus brain-based within lifetime learning, leading to a giant leap in humanity's rate of progress, once within lifetime learning could compound over time via cultural transmission.
I've started to notice a common pattern in evolutionary analogies, where they initially suggest concerning alignment implications, which then seem to dissolve once I track the mechanistic details of what actually happened in the evolutionary context, and how that would apply to AI development. At this point, my default reaction to any evolutionary analogy about AI alignment is skepticism.
1. **[^](#fnrefqvp7i67vofs)**Other than "don't take automotive advice from insane space clowns", of course.
2. **[^](#fnref8iu1ul0br3e)**If you suspect that you've maybe *accidentally* developed an evolution-style inner optimizer, look for a part of your system that's updating its parameters ~a billion times more frequently than your explicit outer optimizer.
3. **[^](#fnrefhwpuy2poe2t)**- "inner optimizer" = the brain.
- "inner loss function" = the combination of predictive processing and reward circuitry that collectively make up the brain's actual training objective.
- "inner loss function includes no mention human values / objectives" because the brain's training objective includes no mention of inclusive genetic fitness.
4. **[^](#fnrefbj3ujsa4rq)**Reflects the enormous disparity in optimization strength between biological evolution and human within-lifetime learning, which I've been harping on about this whole post.
5. **[^](#fnref2zsaoi1ab1i)**Evolution doesn't intervene in our within-lifetime learning processes if it looks like we're not learning the appropriate fitness-promoting behavior.
6. **[^](#fnrefd7s7qikwv4)**It's not even that I think human within-lifetime learning is *that* informative. It's just that I think "being more informative than evolution" is such a stupidly low bar that human within-lifetime learning clears it by a mile.
7. **[^](#fnrefsqeclp6pz5)**I do think there’s a lot of value in mechanistic interpretability as a source of evidence about the mechanics and inductive biases of SGD. For example, [this paper](https://openreview.net/forum?id=NpsVSN6o4ul) discovered “name mover heads”, attention heads that copy a speaker’s name to the current token in specific contexts, and also discovered “backup name mover heads”, which are attention heads that don’t normally appear to act as name mover heads, but when researchers ablated the primary name mover heads, the backup name mover heads changed their behavior to act as name mover heads.
|
179dbf42-1304-4cbd-b89a-841dd0a08fc6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The BTC equilibriumating and the ETH one-eightening
Epistemic status: may or may not make you a bajillion dollars. Tots not investment advice.
These are my summary notes on John Pfeffer's An Investor's Take on Cryptoassets [December, 2017] and and one "SquishChaos"'s [aka Nikhil Shamapant] Etherium, The Triple Halving [April, 2021].
You may know of the Squish report as the source of a spicy $150k price prediction for Eth. My friend mentioned that it was being well-received by people in the finance industry, so I decided to take a look. By my own (financial neophyte's) judgement it seems reasonably sophisticated, decently-reasoned and has compelling, time-sensitive predictions. I think reading Squish alongside Pfeffer is productive, as they have different foci and [potentially?] different conclusions. There may also be a conflict in their analysis of the impact of Proof of Stake on network value (and, ultimately, token value) which I'm curious to see folks unpack here.
Both are well-written, light on the jargon yet informationally dense, but not oppressively so. I was able to do a read-skim of both such that I got a Pareto 80/20 in a few hours. I'd highly recommend taking a look at both if you find these summary notes interesting -- my goal here is to provide a scaffold of their arguments, such that interested parties can have a conversation, rather than to reproduce them in detail.
I've inserted my own questions throughout this document -- I have more questions than opinions at this point.
An Equilibrium valuation model of crypto
Note: Unless otherwise noted, pull quotes in this section are from Pfeffer.
Pfeffer provides an equilibrium approach here, asking what the steady-state valuation of a crypto asset will be. He provides a simple mathematical model:
M=PQVelocity
where M is the money supply, PQ is the sum of the product of the Price and Quantity of resources consumed by the currency (ie, compute PQ + bandwidth PQ + storage PQ + watts PQ + dank memes PQ, dank meme-lords PQ, etc). If T is the number of to
|
1a96ed4a-ae2a-4435-a563-9bb8c632b34c
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Automated Mechanism Design via Neural Networks
1 Introduction
---------------
Designing revenue optimal mechanisms in various settings has been a central
research agenda in economics, ever since the seminal works of
Vickrey [[26](#bib.bib26)] and Myerson [[17](#bib.bib17)] in single
item auctions. Lately, designing optimal mechanisms for selling multiple items
has also been established as an important research agenda at the interface of
economics and computer sciences
[[6](#bib.bib6), [14](#bib.bib14), [13](#bib.bib13), [3](#bib.bib3), [4](#bib.bib4), [15](#bib.bib15), [28](#bib.bib28), [21](#bib.bib21), [29](#bib.bib29), [22](#bib.bib22), [23](#bib.bib23)]
Due to diversity in the researchers’ backgrounds, there are a number of quite
different angles to study this problem. The standard economics theme aims to
understand the exact optimal mechanisms in various settings. To name a few,
Armstrong [[2](#bib.bib2)] obtains the revenue optimal mechanisms of selling two items to one
buyer, whose valuations of the two items are perfect positively correlated (a
ray through the origin). Manelli and Vincent [[16](#bib.bib16)] obtains partial
characterization of optimal mechanisms, in the form of extremely points in the
mechanism spaces. Pavlov [[19](#bib.bib19)] derives optimal mechanisms for two
items when the buyer has symmetric uniform distributions.
Daskalakis et al. [[8](#bib.bib8)] characterizes sufficient and necessary
conditions for a mechanism to optimal and derive optimal mechanisms for two
items for several valuation distributions. Tang and Wang [[23](#bib.bib23)] obtain the
revenue optimal mechanisms of selling two items, of which the valuations are
perfect negatively correlated. Yao [[29](#bib.bib29)] obtains the revenue
optimal mechanisms of selling two additive items to multiple buyers, whose
valuation towards the items are binary and independent.
Another category of research rooted in the AGT community aims to resolve the
difficulties of characterizing optimal mechanisms via the lens of algorithm
design. Cai et al. [[3](#bib.bib3)] and Alaei et al. [[1](#bib.bib1)] gives algorithmic characterizations of
the optimal BIC mechanisms on discrete distributions using linear programs.
Hartline and Roughgarden [[14](#bib.bib14)], Yao [[28](#bib.bib28)], Hart and Nisan [[13](#bib.bib13)]
find approximately optimal mechanisms in various settings.
Carroll [[5](#bib.bib5)] shows that for a certain multi-dimensional
screening problem, the worst-case optimal mechanism is simply to sell each item
separately.
The third category, at the interface of AI and economics, aims to search for the
optimal mechanisms via various AI approaches. Conitzer and Sandholm [[6](#bib.bib6)]
model the problem of revenue and welfare maximization as an instance of
constraints satisfaction problem (CSP) through which the optimal mechanism may
be found using various search techniques, despite its general computation
complexity. Sandholm and Likhodedov [[21](#bib.bib21)] model a restricted revenue
maximization problem (within affine maximizing auctions) as a parameter search
problem in a multi-dimensional parameter space, they find several sets of
parameters that yields good empirical revenue. Dütting et al. [[9](#bib.bib9)] aims
to learn optimal mechanisms by repeatedly sampling from the distribution. They
obtain mechanisms that are approximately optimal and approximately incentive
compatible.
One advantage of these computational approaches is that most of them are
constructive so that one can systematically and computationally generate
optimal mechanisms. However, a difficulty for most existing works in computer
science (the second and third categories) is that mechanisms obtained this way
are either not optimal in the exact sense, or not truthful in the exact sense.
As a result, a typical economist may have a hard time to appreciate this type of
results. A more desirable approach would be constructive on one hand and be able
return exact incentive compatible and (hopefully) exact optimal mechanisms on
the other hand.
###
1.1 Our methodology
In this paper, motivated by the above observation, we aim to put forward a
computational approach that can design or assist one to design exact IC and
optimal mechanisms. Similar to the approach introduced by
Dütting et al. [[9](#bib.bib9)], we train a
neural network that represents the optimal mechanism using the
valuation distributions.
Unlike their approach, however, we introduce another
neural network that represents buyer’s behavior. In particular, this network
takes a mechanism as input, and output an action. Our network structure
resembles that of the generative adversarial nets (GAN) [[10](#bib.bib10)] but is essentially
different because we do not need to train the buyer’s network. This independent
buyer network allows us to easily model the exact IC constraints (which has been
a major difficulty in previous works) in our network and any behavior model of
this form. In contrast, Dütting et al. [[9](#bib.bib9)] first propose to hardwire the IC constraints into
the mechanism network (which requires a lot of domain knowledge and the
structure of the networks has to be domain specific), as a result their approach
can only reproduce mechanisms in the domains where the form of the optimal
mechanism is known. To circumvent this difficulty, they further propose to add
IC as a soft constraint so that the training objective is to minimize a linear
combination of revenue loss and the degree of IC violations. However, this would
produce mechanisms that are not IC.
Another innovation of our approach is that we represent a mechanism as a menu (a
list of (valuation, outcome) tuples) in the single buyer case. According to the
taxation principle [[27](#bib.bib27)], by simply letting the buyer do the
selection, we get an IC mechanism. An additional merit of using a menu to
represent a mechanism is that it enables explicit restrictions of the menu
size of the mechanism, which measures the degree of complexity of a mechanism
[[12](#bib.bib12)].
###
1.2 Our results
We then apply our learning-aided mechanism design framework to the domain where
a seller sells two items to one buyer. In particular, we investigate the
following problems.
* What is the revenue optimal mechanisms when the menu size is restricted to
a constant? To the best of our knowledge, the optimal mechanism of this
kind remains unknown for our setting.
* What is optimal mechanism for the case where the valuation domain is a
triangle? The previously studied cases on this domain all focuses on
rectangle shaped valuation domain (expect for Haghpanah and Hartline [[11](#bib.bib11)]).
* What is the revenue optimal deterministic mechanism?
* What is the revenue optimal mechanism when the buyer has combinatorial
value?
Some of the experimental results we obtained is shown in [Table 1](#S1.T1 "Table 1 ‣ 1.2 Our results ‣ 1 Introduction ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")
with comparison to the exact optimal mechanisms (some of them are previously
known results, while the others are our new findings).
| Distributions | Computed Mech Rev111The computed revenue is NOT directly given by the loss of our network. Instead, we ignore the buyer network and compute the expected revenue according only to the menu given by our network. | Optimal Mech Rev | Optimality |
| --- | --- | --- | --- |
| U[0,1]2 | 0.5491989 | (12+2√2)/27 | ≥99.9996% |
| U[0,1]×[0,1.5] | 0.6838542 | (15+2√3)/27 | ≥99.9997% |
| U[0,1]×[0,1.9] | 0.7888323 | (17.4+2√3.8)/27 | ≥99.9988% |
| U[0,1]×[0,2] | 0.8148131 | 22/27 | ≥99.9997% |
| U[0,1]×[0,2.5] | 0.9435182 | 1019/1080 | ≥99.99996% |
| U[0,1]2 menu size ≤3 | 0.5462947 | 59/108 | ≥99.9997% |
| U[0,1]2 menu size ≤2 | 0.5443309 | 59/108 | ≥99.99997% |
| U{v1,v2≥0|v1/2+v2≤1} | 0.5491225 | (12+2√2)/27 | ≥99.9857% |
Table 1: Comparison with optimal mechanisms, where Optimality =\textscRev/\textscOpt\textscRev.
Inspired by these empirical findings, using the techniques by
Daskalakis et al. [[8](#bib.bib8)] and Pavlov [[20](#bib.bib20)], we then prove
the exact optimal mechanisms for the first two problems. To the best of our
knowledge, this is the first time to find the exact optimal mechanisms in these
domains, so they are of independent interests to the economics society as well.
######
Theorem (Restricted Menu Size).
The optimal mechanism for an additive buyer, v∼U[0,1]2, with menu
size no more than 3 is to either sell the first item at price 2/3 or
sell the bundle of two items at price 5/6, yielding revenue 59/108.
In particular, the optimal mechanism must be asymmetric even if the distribution is symmetric!
######
Theorem (Uniform Distribution on a Triangle).
The optimal mechanism for an additive buyer with value uniformly distributed
in {(v1,v2)|v1/c+v2≤1,v1,v2≥0} (hence a
correlated distribution) is as follows:
* if c∈[1,4/3], two menu items: [(0,0),0] and [(1,1),√c/3];
* if c>4/3, three menu items: [(0,0),0], [(1,1),2c/3+√c(c−1)/3], and [(1/c,1),2/3].
2 Preliminaries
----------------
In this paper, we consider the automated mechanism design problem for the
single-buyer multi-dimensional setting. In this section, we introduce the
basic notions for optimal multidimensional mechanism design problem.
##### Environment
The seller has m heterogeneous items for sale, and the buyer has different
private values for receiving different bundles of the items. An allocation of the items is specified by a vector x∈X⊆[0,1]m, where xi is the probability of allocating the i-th item to the
buyer. An allocation x is called a deterministic allocation, if x∈{0,1}m; otherwise a randomized allocation or a lottery
allocation.
A possible outcome of the mechanism consists of a valid allocation
vector x∈X and a monetary transfer amount p∈R+, called payment, from the buyer to the seller.
With the standard quasi-linear utility assumption, the valuation
function v:X↦R+ describes the private preference of the
buyer, i.e., an outcome ⟨x,p⟩ is (weakly) preferred than
another outcome ⟨x′,p′⟩, if and only if:
| | | |
| --- | --- | --- |
| | u(x,p;v):=v(x)−p≥v(x′)−p′=u(x′,p′;v). | |
In other words, the outcome with the highest utility is most preferred by the
buyer.
##### Mechanism
A naïve mechanism (without applying the revelation principle) is
defined by a set of actions and a mapping from the set of actions to the
set of outcomes. Note that according to the taxation principle
[[27](#bib.bib27)], simply letting the buyer do the selection, we get
an incentive compatible mechanism. Formally,
######
Definition \thedefinition (Naïve Mechanism).
A naïve mechanism consists of an action set A and an associated mapping
from any action to a possible outcome, i.e., ⟨x,p⟩:A↦X×R+.
In particular, there exists a special action ⊥ meaning “exiting the
mechanism” such that
| | | | |
| --- | --- | --- | --- |
| | x(⊥)=0,p(⊥)=0. | | (Exit) |
In such a naïve mechanism, a strategy of the buyer is then a mapping from
the set of private valuation functions to the action set, i.e., s:V↦A. Furthermore, if the buyer is rational, then her
strategy must maximize her utility:
| | | | |
| --- | --- | --- | --- |
| | s(v)∈\operatornamewithlimitsargmaxs′∈Su(x(s′(v)),p(s′(v));v). | | (Rational) |
The corresponding outcomes of the actions are also known as menu items.
Throughout this paper, we use [x,p] to denote a specific menu item, e.g., the
zero menu item [0,0]=[(0,…,0),0] is the corresponding menu item of
the exiting action ⊥. Note that the naïve mechanism with the menu
presentation is a very general model of the mechanism design problem. In
particular, even when the buyer is not fully rational, as long as a buyer
behavior is available, the mechanism designer is still able to design the menus to
maximize his objective assuming that the buyer responses according
to the given behavior model. The robustness of naïve mechanisms is indeed
critical to the flexibility and generality of our methodology.
##### Direct Mechanism
With the above definition of naïve mechanisms, it is hard to characterize
all the mechanisms with certain properties, because the design of the action
set, at first glance, could be arbitrary. One critical step in the mechanism
design theory is to applying the celebrating revelation principle
[[18](#bib.bib18), p.224] to restrict the set of naïve mechanisms
to a considerably smaller set of mechanisms — the direct mechanisms.
In a direct mechanism, the action set is restricted to be identical to the set
of valuation functions and the identity mapping also is required to be an
optimal strategy for any rational buyer. Formally,
######
Definition \thedefinition (Direct Mechanism).
A direct mechanism fixes the action set A=V and remains to specify the
mapping from V to the set of possible outcomes.
In addition, the identity mapping must be a utility-maximizing strategy for
any rational buyer, which can be equivalently stated as the following incentive compatible ([IC](#S2.Ex4 "(IC) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) and individually rational
([IR](#S2.Ex5 "(IR) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) constraints:
| | | | |
| --- | --- | --- | --- |
| | v∈\operatornamewithlimitsargmaxv′∈Vu(x(v′),p(v′);v), | | (IC) |
| | u(x(v),p(v);v)≥0. | | (IR) |
In fact, the constraints ([IC](#S2.Ex4 "(IC) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) and ([IR](#S2.Ex5 "(IR) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) are deduced from the
constraints ([Rational](#S2.Ex3 "(Rational) ‣ Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) and ([Exit](#S2.Ex2 "(Exit) ‣ Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")).
##### The Designer’s Goal
The goal of the mechanism designer is to maximize the expectation of his
objective r:X×R+↦R, where the expectation is taken
over his prior knowledge about the buyer’s private valuation function, i.e.,
v∼F.
We emphasize that our methodology is not restricted to any specific objective.
However, in this paper, we would focus on the setting with the seller’s
revenue as the objective:
| | | | |
| --- | --- | --- | --- |
| | r(x,p)=p. | | (Objective) |
Because revenue-optimal mechanism design in multi-dimensional environment is a
both challenging and widely studied problem. Hence applying our method in such
a setting allows us to verify that (i) whether it can find the optimal or
nearly optimal solution, and (ii) whether it can provide a simpler approach to
a hard problem.
##### Assumptions
In most sections of this paper, we will make to the following two assumptions
([Section 2](#S2.SS0.SSS0.Px5 "Assumptions ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") and [Section 2](#S2.SS0.SSS0.Px5 "Assumptions ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")). As we just
stated, we would first verify that our method can be used to recover the
optimal solutions to some known problems and little exact optimal solution is
actually discovered without these two assumptions.
######
Assumption \theassumption (Additive Valuation Functions).
The buyer’s valuation function v is additive, i.e., v can be decomposed
as follows:
| | | |
| --- | --- | --- |
| | v(x)=∑i∈[m]vixi, | |
where vi∈R+.
With the additive valuation assumption, we refer each vi as the value of
the i-th item. Moreover, we can make the following independent value
assumption in addition.
######
Assumption \theassumption (Independent Values).
The prior distribution F is independent in each dimension and can be
decomposed as F=F1×⋯×Fm, where each vi is
independently drawn from Fi, i.e., vi∼Fi.
In the meanwhile, to show that our method is not limited to these assumptions,
in [Section 5](#S5 "5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), we show how it can be applied to settings without these
assumptions. In particular, with the help of the characterization results by
Daskalakis et al. [[8](#bib.bib8)], we are able to verify the optimality of the
solution to an instance with correlated value distribution (while still with
additive valuation functions).
3 Problem Analysis
-------------------
Although the revelation principle is widely adopted by the theoretical
analysis of mechanism design problems to efficiently restrict the design
spaces, we decided not to follow this approach when applying neural
networks to solve such problems.
The main difficulty of directly following the traditional revelation
principle based approach is two-fold:
* It is unclear that what network structure can directly encode the
incentive compatible ([IC](#S2.Ex4 "(IC) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) and individually rational
([IR](#S2.Ex5 "(IR) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) constraints;
* Some of the characterization results for additive valuation
setting222Such as Myerson’s virtual value for
single-dimension and Rochet’s increasing, convex and Lipschitz-1
buyer utility function for multi-dimension [[9](#bib.bib9)].
can be cast to certain network structures, but such structures are
restricted (to additive valuation assumption) and heavily rely on the
domain knowledge of the specific mechanism design problem.
In fact, the above difficulties also limit the generality of the methods
built on these elegant but specific characterizations. For example, there
might be some fundamental challenges while generalizing such approaches
to the settings where the buyer is risk-averse (risk-seeking) or has
partial (or bounded) rationality, etc. Furthermore, in many real applications,
the buyer behavior models may come from real data instead of pure theoretical
assumptions.
To circumvent these difficulties and ensure the highest extendability, in
this paper, we build up our method from the most basic naïve
mechanisms — simply let the buyer choose her favorite option — which
is even more close to the first principles of how people make decisions.
Interestingly, via this approach, our method will automatically produce
an exactly incentive compatible and individually rational mechanism. To the
best of our knowledge, this is the first neural network based approach that
outputs an both exactly incentive compatible and exactly individually rational
mechanism under multi-dimensional settings.
###
3.1 Revisiting the Naïve Mechanism
We then briefly explain show how the naïve mechanism helps us to
formulate a neural network based approach for mechanism design.
Intuitively, the naïve mechanism in our context simply provides the
buyer various menu items, i.e., allocations associated with different prices, and
lets her choose the most prefered one. In this case, once a buyer utility
function is specified (either by assumption or learnt from data), the
choice of the buyer is simply an \operatornamewithlimitsargmax of the utility function. As
long as the utility function could be encoded via neural network, which
is a mild assumption, the buyer’s behavior model can encoded as a
neural network with an additional \operatornamewithlimitsargmax layer.333Even if the
buyer utility function is not available, such a gadget could be replaced by
any buyer behavior model (either given or learnt from data), which is
encoded as a neural network.
##### High-level sketch of the network structure
For now, we can think the encoded mechanism as a black-box that outputs
a set of allocation-payment pairs (see [Figure (a)](#S3.F1.sf1 "(a) ‣ Figure 1 ‣ High-level sketch of the network structure ‣ 3.1 Revisiting the Naïve Mechanism ‣ 3 Problem Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")). These
pairs then are feeded into many “buyer networks”, each with different
private valuation functions (hence different choices). Finally, the
“buyer networks” output their choices and the choices are used to
evaluate the expected objective of the mechanism designer, where the
choices are weighted according to the probabilities of the corresponding
private valuation functions and the training loss is simply the negative of
the expected objective.
| | |
| --- | --- |
| A high-level abstraction of the neural networks.
(a) Naïve mechanism structure
| A high-level abstraction of the neural networks.
(b) Direct mechanism structure
|
Figure 1: A high-level abstraction of the neural networks.
One key advantage of formulating the network as a naïve mechanism
rather than a direct mechanism is that no additional constraints (such
as [IC](#S2.Ex4 "(IC) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") and [IR](#S2.Ex5 "(IR) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) are required for the former. In fact,
the difficulty of optimizing the direct mechanism network (see
[Figure (b)](#S3.F1.sf2 "(b) ‣ Figure 1 ‣ High-level sketch of the network structure ‣ 3.1 Revisiting the Naïve Mechanism ‣ 3 Problem Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) is that the violations of [IC](#S2.Ex4 "(IC) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") or
[IC](#S2.Ex4 "(IC) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") constraints are not directly reflected in the designer’s
objective. Hence the standard optimization methods for neural networks
do not directly apply. In contrast, in the naïve mechanism network,
the effect of any mechanism outcome mutations on the buyer preferences
is reflected in the designer’s objective via the “buyer networks”.
Such properties facilitate the optimization in standard training methods of
neural networks.
4 Network Structure
--------------------
Our network structure contains two networks: the mechanism network and the buyer network. Since the networks represent a naïve mechanism, the output of the mechanism network is a set of choices along with different prices (or menu items) and the buyer network takes the set of menu items as input and outputs its choice. The overall network structure is shown in [Figure 2](#S4.F2 "Figure 2 ‣ 4 Network Structure ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").

Figure 2: Overall network structure. The buyer network corresponds to a rational buyer with quasi-linear utilities. In other cases, the buyer network can be constructed according to his utility function, or other networks trained from interaction data.
###
4.1 Mechanism Network
In most applications, a neural network usually takes a possible input x and then outputs a possible output y. However, our mechanism network is different from most neural networks in the sense that its output is a set of menu items, which already represents the entire mechanism. Therefore, our mechanism network does not actually need to take an input to give an output.
However, in order to fit in with most neural network frameworks, we use a one dimensional constant 1 as the input of our mechanism network. The output of the network consists of two parts. The first part is an allocation matrix X of m rows and k columns, where m is the number of items and k is the number of menu items. Each column of the allocation matrix contains the allocation of all m item. The second part is a payment vector p of length k, representing k different prices for the k menu items. The last column of the allocation matrix and the last element of the payment vector is always set to be 0. This encodes the “exit” choice of the buyer and ensures that the buyer can always choose this menu item to guarantee individual rationality.
The structure of the mechanism network is simple enough. The constant input 1 goes through a 1 fully connected layer to form each row Xi (except the last column, which is always 0) of the allocation matrix. We choose the sigmoid function as the activation function since the allocation of each item is always inside the interval [0,1]. The payment vector is even simpler. Each element pi of the payment vector is formed by multiplying the input constant by a scalar parameter. Therefore, the training of our network is very fast, since the network structure is very simple.
###
4.2 Buyer Network
The buyer network is a function that maps a mechanism to the buyer’s strategy s(v) (a distribution over all possible menu items) for each value profile v=(v1,v2,…,vm), where each vi is the value of getting the i-th item. The output of the mechanism network (the allocation matrix X and the payment vector p) is taken as the input of the buyer network. To define the output of the buyer network, suppose that each vi is bounded and 0≤vi≤¯vi. We discretize the interval [0,¯vi] to di discrete values. Let Vi be the set of possible discrete values of vi and define V=∏i∈[m]Vi.
The output of the buyer network is a m+1 dimensional tensor, with the first m dimension corresponding the buyer’s m dimensional value, and the last dimension representing the probability of choosing each menu item. Therefore, the i-th (i≤m) dimension of the tensor has length di and the last dimension has length k.
Although here we use the same notation as in [Section 2](#S2.SS0.SSS0.Px5 "Assumptions ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), this notation does not lose generality since we do not make any assumption about the buyer’s valuation of obtaining multiple items or only a fraction of an item. It is also worth mentioning that the buyer’s utility function is not necessary to build the buyer network, since the network only outputs buyer’s strategy, which may not even be consistent with any utility function.
The buyer network can be any type of network that has the same format of input and output as described above. When we do not know the buyer’s exact utility function but have plenty of interaction data (e.g., the sponsored search setting), we can train the buyer network with the the interaction data.
When the buyer’s utility function is known, we can manually design the buyer network structure so that the network outputs the buyer’s strategy more accurately. For example, when [Section 2](#S2.SS0.SSS0.Px5 "Assumptions ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") and [Section 2](#S2.SS0.SSS0.Px5 "Assumptions ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") holds, we know that the buyer always choose the menu item that maximizes his additive valuation with probability 1. We can construct m tensors V1,V2,…,Vm, with size d1×d2×⋯×dm. In Vi, an element’s value is only determined by its i-th dimensional index in the tensor, and it equals the j-th discretized value of the interval [0,¯vi], if its i-th dimensional index is j. Recall that the i-th row of the allocation matrix Xi represents different allocations of the i-th item in different menu items. We then multiply the i-th tensor with the Xi to get an m+1 dimensional tensor Xi with size d1×d2×⋯×dm×k.
We also construct a payment tensor P with size d1×d2×⋯×dm×k, where an element equals to the pi if its index for the last dimension is j.
Finally, we compute the utility tensor U by
| | | |
| --- | --- | --- |
| | U=⎛⎝∑i∈[m]Xi⎞⎠−P | |
And then apply the softmax function to the last dimension of the utility tensor U to produce the output S, which is an aggregation of s(v),∀v∈V. One can easily verify that for each value profile, the menu with the largest utility has the highest probability of being chosen. Of course, we also multiply the utility tensor by a large constant to make the probability of the best menu item close enough to 1.
###
4.3 Loss Function
The loss function can be any function specified according the mechanism designer’s objective. However, in this paper, we mainly focus on how to optimize the revenue of the mechanism and set the loss function to be the negative revenue.
Recall that the output of the buyer network is the buyer’s strategy s(v) for each value profile v. Then the loss function of the networks is
| | | |
| --- | --- | --- |
| | \textscLoss=−\textscRev=−∑v∈VPr[v]pTs(v) | |
where Pr[v] is the probability that v appears, which can be easily computed from the joint value distribution F.
Note that in the above loss function, we do not make any assumption about the probability distribution Pr[v]. Our networks are able to handle any joint distribution, including correlated ones.
5 Experiments and Analysis
---------------------------
In this section, we first list some results of our neural networks in Section [5.1](#S5.SS1 "5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"). Inspired by these results, we are able to find the closed-form optimal mechanisms in some cases. We list theoretical analysis and proofs in Section [5.2](#S5.SS2 "5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
###
5.1 Experiment results
####
5.1.1 Uniform [0,c]×[0,1]
The optimal mechanism for this setting is already known [[24](#bib.bib24)]. We draw both the optimal mechanism and our experiments results together in Figure [3](#S5.F3 "Figure 3 ‣ 5.1.1 Uniform ×[0,c][0,1] ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"). The color blocks represents the mechanism given by our network, where each color corresponds to a different menu item. The dashed line represents the optimal mechanism (they are NOT drawn according to the color blocks). The two mechanisms are almost identical except for the slight difference in Figure [3(c)](#S5.F3.sf3 "(c) ‣ Figure 3 ‣ 5.1.1 Uniform ×[0,c][0,1] ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
| | | | |
| --- | --- | --- | --- |
| Comparison between computed solutions and optimal solutions.
(a) c=1.5
| Comparison between computed solutions and optimal solutions.
(b) c=2.5
| Comparison between computed solutions and optimal solutions.
(c) c=1.9
| Comparison between computed solutions and optimal solutions.
(d) c=2
|
Figure 3: Comparison between computed solutions and optimal solutions.
####
5.1.2 Correlated Distribution: Uniform Triangle
Suppose that the buyer’s value v=(v1,v2) is uniformly distributed among the triangle described by v1c+v2≤1,v1≥0,v2≥0, where c≥1. The color blocks in Figure [4](#S5.F4 "Figure 4 ‣ 5.1.2 Correlated Distribution: Uniform Triangle ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") show the mechanisms given by our network. Note that in our framework, the joint value distribution is only used to compute the objective function. So our framework can handle arbitrary value distributions.
| | |
| --- | --- |
| Uniform Triangle.
(a) c=1.25
| Uniform Triangle.
(b) c=2
|
Figure 4: Uniform Triangle.
In fact, guided by these experiment results, we are able to find the closed-form optimal mechanism for this kind of value distributions. In particular, there are two possible cases for this problem. When c is large, the optimal mechanism contains two menu items. And when c is small, the optimal contains only two menus, i.e., use a posted price for the bundle of the items. Formally, we have
######
Theorem 5.1.
When c>43, the optimal menu for the uniform triangle distribution contains the following items:
(0,0),0,
(1c,1),23, and
(1,1),23c−13√c(c−1).
When c≤43, the optimal menu for the uniform triangle distribution contains the following items:
(0,0),0 and
(1,1),√c3.
The proof is deferred to [Section 5.2](#S5.SS2 "5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
####
5.1.3 Restricted Menu Size
The output of our mechanism network is a set of menus. Thus we can control the menu size by directly setting the output size of the network.
Restricting the menu size results in simpler mechanisms. It is known that the optimal menu for some distributions contains infinitely many items [[8](#bib.bib8)]. Such results directly motivates the study of simple mechanisms, since they are easier to implement and optimize in practice.
We consider the case where the buyer’s value is uniformly distributed in the unit square [0,1]2. It is known that the optimal mechanism contains 4 menu items. When the menu can only contain at most 2 items, the optimal mechanism is to trivially set a posted price for the bundle. The experiment results are shown in Figure [5](#S5.F5 "Figure 5 ‣ 5.1.3 Restricted Menu Size ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").]
| | |
| --- | --- |
| Uniform
(a) At most 2 menus.
| Uniform
(b) At most 3 menus.
|
Figure 5: Uniform [0,1]2 with restricted menu size.
Surprisingly, when the menu can have at most 3 items, our network gives an asymmetric menu, despite that the value distribution is symmetric. In fact, we can also find the optimal menu with at most 3 items analytically. Our analysis shows that the optimal menu is indeed asymmetric. The intuition is that, if we add a symmetry constraint to the solution, then the optimal menu degenerates to a 2-item one. We provide the theoretical result here, but defer the proof to Section [5.2](#S5.SS2 "5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
######
Theorem 5.2.
The optimal at-most-three-menu mechanism for two additive items with v∼U[0,1]2 is to sell the first item at price 2/3 or the bundle of
two items at price 5/6, yielding revenue 59/108≈0.546296.
By symmetry, the mechanism could also be selling the second item at price
2/3 or the bundle of two items at price 5/6. In particular, these
is no other at-most-three-menu mechanisms could generate as much revenue as
they do.
####
5.1.4 Unit-Demand Buyer
The unit-demand setting is also intensively studied in the literature. In this
setting, the allocation must satisfy x1+x2≤1. [[25](#bib.bib25)] provides
detailed analysis and closed-form solutions on the unit-demand setting. With
slight modifications, our mechanism network can also produce feasible
allocations in this setting. Instead of applying the sigmoid function to each
element of the allocation matrix, we apply a softmax function to each column
(representing each menu item) of the allocation matrix. However, with such a
modification, the allocation satisfies x1+x2=1 rather than x1+x2≤1.
The solution is to add an extra dummy element to each column before applying
the softmax function.
The experiment results are shown in [Figure (a)](#S5.F6.sf1 "(a) ‣ Figure 6 ‣ 5.1.4 Unit-Demand Buyer ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
| | | |
| --- | --- | --- |
| Empirical results.
(a) Unit demand.
| Empirical results.
(b) Combinatorial Value.
| Empirical results.
(c) Deterministic allocation.
|
Figure 6: Empirical results.
####
5.1.5 Combinatorial Value
Our framework structure can also handle the case where the buyer has
combinatorial values. The following Figure [6(b)](#S5.F6.sf2 "(b) ‣ Figure 6 ‣ 5.1.4 Unit-Demand Buyer ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") shows mechanism given by our
network for a buyer with u(v1,v2)=x1v1+x2v2+v1v2−p. In this case, we
need to slightly modify the buyer network by adding the extra v1v2 term,
which can be easily implemented.
####
5.1.6 Deterministic Mechanisms
We can use our networks to find the optimal deterministic mechanisms for any
joint value distributions. Similar to the restricted menu size case,
deterministic mechanisms are also important in practice, since they are easy
to understand and implement. In this case, the mechanism network can be
further simplified, since for selling 2 items, there can only be 4 possible
deterministic menu items, with allocations (0,0),(0,1),(1,0),(1,1).
Therefore, the only parameters in the mechanism network are the corresponding
prices.
[Figure (c)](#S5.F6.sf3 "(c) ‣ Figure 6 ‣ 5.1.4 Unit-Demand Buyer ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") shows our experiment results on uniform distributions
among the triangle described in [subsubsection 5.1.2](#S5.SS1.SSS2 "5.1.2 Correlated Distribution: Uniform Triangle ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"). According to
[Theorem 5.1](#S5.Thmtheorem1 "Theorem 5.1. ‣ 5.1.2 Correlated Distribution: Uniform Triangle ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), the optimal mechanism is not deterministic when
c=2, Our experiments show that such a constraint decreases the revenue by
0.14%.
###
5.2 Theoretically Provable Optimal Mechanisms
In this section, we provide theoretical proofs for some of the findings via
our neural network. To the best of our knowledge, these results are previously
unknown.
####
5.2.1 Optimal mechanisms for selling two items with correlated distributions
As described in Section [5.1.2](#S5.SS1.SSS2 "5.1.2 Correlated Distribution: Uniform Triangle ‣ 5.1 Experiment results ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), there are two possible cases for the optimal mechanism when the buyer’s value is uniformly distributed among the triangle. The solutions are shown in Figure [7](#S5.F7 "Figure 7 ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
| | |
| --- | --- |
| Uniform Triangle.
(a) When c>43
| Uniform Triangle.
(b) When c≤43
|
Figure 7: Uniform Triangle.
We solve the problem case by case.
######
Theorem 5.3.
For any c>43, suppose that the buyer’s type is uniformly distributed among the set T={(v1,v2) | v1c+v2≤1,v1≥0,v2≥0}. Then the optimal menu contains the following items:
(0,0),0,
(1c,1),23, and
(1,1),23c−13√c(c−1).
######
Remark \theremark.
Note that the condition c>43 guarantees that the price of the third menu item is positive.
To prove Theorem [5.3](#S5.Thmtheorem3 "Theorem 5.3. ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), we apply the duality theory in [[8](#bib.bib8), [7](#bib.bib7)] to our setting. We provide a brief description here and refer readers to [[8](#bib.bib8)] and [[7](#bib.bib7)] for details.
Let f(v) be the joint value distribution of v=(v1,v2), and V be the support of f(v). Define measures μ0, μ∂, μs as follows:
* μ0 has a single point mass at v=0, i.e., μ0(V)=I(v–∈A), where I(⋅) is the indicator function, and v–∈A is the smallest type in V.
* μ∂ is only distributed along the boundary of V, with a density f(v)(v⋅η(v)), where η(v) is the outer unit normal vector at v.
* μs is distributed in V with a density ∇f(v)⋅v+(n+1)f(v), where n is the number of items.
Let μ=μ0+μ∂−μs. Define μ+ and μ− to be two non-negative measures such that μ=μ+−μ−. Let V+ and V− be the support sets of μ+ and μ−. [[8](#bib.bib8), [7](#bib.bib7)] shows that designing an optimal mechanism for selling n items to 1 buyer is equivalent to solving the following program:
| | | | |
| --- | --- | --- | --- |
| | sup | ∫Vudμ+−∫Vudμ− | |
| | s.t. | u(v)−u(v′)≤∥(v−v′)+∥1,∀v∈V+,v′∈V− | | (P) |
| | | u~{}is convex,u(v–)=0 | |
where u(v) is the utility of the buyer when his value is v, and ∥(v−v′)+∥1=∑ni=1max(0,vi−v′i).
Relax the above program by removing the convexity constraint and write the dual program of the relaxed program:
| | | | |
| --- | --- | --- | --- |
| | inf | ∫V×V∥(v−v′)+∥1dγ | |
| | s.t. | γ∈Γ(μ+,μ−) | | (D) |
where Γ(μ+,μ−) is the set of non-negative measures γ defined over V×V such that, for any V′⊆V, the following equations hold:
| | | |
| --- | --- | --- |
| | ∫V′×Vdγ=μ+(V′)and∫V×V′dγ=μ−(V′) | |
######
Lemma \thelemma ([[8](#bib.bib8)]).
([D](#S5.Ex14 "(D) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) is a weak dual of ([P](#S5.Ex11 "(P) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")).
We omit the proof here but refer readers to [[8](#bib.bib8)] and [[7](#bib.bib7)] for details. The dual program ([D](#S5.Ex14 "(D) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) has an optimal transport interpretation. We “move” the mass from μ+ to other points to form μ− and the measure γ corresponds to the amount of mass that goes from each point to another in V.
Although ([D](#S5.Ex14 "(D) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) is only a weak dual of ([P](#S5.Ex11 "(P) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")), we can still use it to certify the optimality of a solution. We already give a menu in Theorem [5.3](#S5.Thmtheorem3 "Theorem 5.3. ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"). Therefore, the relaxed convexity constraint is automatically satisfied if the buyer always choose the best menu item.
In our setting, f(v)=2c, and we have that V=T, v–=(0,0), μ∂ has a constant line density of 2√1+c2 along the segment v1c+v2=1,0≤v2≤1, and μs has a constant density of 6c over T.
Let Ri be the region of T such that for any v∈Ri, choosing menu item i maximizes the buyer’s utility.
It is straightforward to verify that the measures μ+ and μ− are balanced inside each region, i.e., μ+(Ri)=μ−(Ri),∀i. Therefore, the transport of mass only happens inside each region.
We construct the transport in R1 and R2 as follows:
* R1: μ+ is concentrated on a single point 0. We move the mass at 0 uniformly to all points in R1;
* R2: μ+ is only distributed along the upper boundary of R2. For each point v at the upper boundary, we draw a vertical line l through it, and move the mass at v uniformly to the points in L∩R2.
However, for R3, μ+ is also only distributed along the upper boundary, but there is no easy transport as for R1 and R2. We provide the following Lemma [5.2.1](#S5.SS2.SSS1 "5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
######
Lemma \thelemma.
For R3, there exists a transport of mass, such that for any two points v,v′, if there is non-negative transport from v to v′, then vi≥v′i,∀i.
The proof of Lemma [5.2.1](#S5.SS2.SSS1 "5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") is deferred to Appendix [A.1](#A1.SS1 "A.1 Proof of Lemma 5.2.1 ‣ Appendix A Missing Proofs ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"). With this lemma, we can simplify our proof of Theorem [5.3](#S5.Thmtheorem3 "Theorem 5.3. ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), and do not need to construct the measure γ explicitly.
###### Proof of Theorem [5.3](#S5.Thmtheorem3 "Theorem 5.3. ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
Point D in Figure [10](#A1.F10 "Figure 10 ‣ A.1 Proof of Lemma 5.2.1 ‣ Appendix A Missing Proofs ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") has coordinates (xD,yD), where xD=23c−13√c(c−1)−13√cc−1 and yD=13√cc−1. Therefore,
| | | |
| --- | --- | --- |
| | Pr{The buyer chooses menu item 2}=f(v)⋅S(YCDI)=2c⋅13xD | |
| | Pr{The buyer chooses menu item 3}=f(v)⋅S(CDEX)=2c[c2(13+yD)2−12y2D] | |
Thus the revenue of the menu provided in Theorem [5.3](#S5.Thmtheorem3 "Theorem 5.3. ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") is:
| | | | |
| --- | --- | --- | --- |
| | \textscRev= | 23⋅Pr{The buyer chooses menu %
item 2} | |
| | | +(23c−13√c(c−1))⋅Pr{The buyer chooses menu item 3} | |
| | = | 227[4+c+√c(c−1)] | |
New we compute the objective of the dual program ([D](#S5.Ex14 "(D) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")). And to prove the optimality of the menu, it suffices to show that the objective of ([D](#S5.Ex14 "(D) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) is equal to Rev.
Note that in our construction of the transport in R1 and R2, we only allow transport inside each region. In R1, we transport mass from point 0 to other points. So it does not contribute to the objective of ([D](#S5.Ex14 "(D) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")), and we can just ignore R1. In R2, the mass is always moved vertically down. Therefore, for any v,v′, such that there is positive mass transport from v to v′, we have vi≥v′i,∀i and ∥(v−v′)+∥1=∑imax(0,vi−v′i)=∑i(vi−v′i)=∑i(vi−0)−∑i(v′i−0). Therefore,
| | | | |
| --- | --- | --- | --- |
| | ∫R2×R2∥(v−v′)+∥1dγ=∫R2×R2∥v−0∥1dγ−∫R2×R2∥v′−0∥1dγ | | (4) |
For the first term, we have:
| | | |
| --- | --- | --- |
| | ∫R2×R2∥v−0∥1dγ=∫R2×T∥v−0∥1dγ=∑j∫σj×T∥v−0∥1dγ | |
where the first equation is due to the fact that our transport is inside each region, and {σj} is a partition of the region R2. When the maximum area of σj approaches 0, we get:
| | | | |
| --- | --- | --- | --- |
| | | ∫R2×R2∥v−0∥1dγ=∫R2∥v−0∥1dμ+ | |
| | = | ∫xD0(v1+1−v1c)2√1+c2√1+c2cdv1=19(8−6√cc−1+5c−4√c(c−1)) | |
Similarly, the second term of Equation ([4](#S5.Ex21 "(4) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) is:
| | | | |
| --- | --- | --- | --- |
| | ∫R2×R2∥v′−0∥1dγ | =∫R2∥v′−0∥1dμ−=(2√c−1−√c)(3+2c−√c(c−1))9√c−1 | |
For R3, according to Lemma [5.2.1](#S5.SS2.SSS1 "5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), it is also true that when there is positive mass transport from v to v′, we always have vi≥v′i,∀i. Therefore,
| | | |
| --- | --- | --- |
| | ∫R3×R3∥(v−v′)+∥1dγ=∫R3×R3∥v−0∥1dγ−∫R3×R3∥v′−0∥1dγ | |
For the first term,
| | | | |
| --- | --- | --- | --- |
| | ∫R3×R3∥v−0∥1dγ | =∫cxD(v1+1−v1c)2cv1=19(1+4c+4c√cc−1+2√cc−1) | |
Similarly, for the second term,
| | | | |
| --- | --- | --- | --- |
| | ∫R3×R3∥v′−0∥1dγ | =∫v∈R36c(v1+v2)dv=127(1+5√cc−1+10c+10c√cc−1) | |
Therefore, the objective of the dual program ([D](#S5.Ex14 "(D) ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) is:
| | | |
| --- | --- | --- |
| | ∫T×T∥(v−v′)+∥1dγ=∫R2×R2∥(v−v′)+∥1dγ+∫R3×R3∥(v−v′)+∥1dγ=227[4+c+√c(c−1)]=\textscRev | |
The above equation shows that the dual objective is equal to the actual revenue, which certifies that the menu is optimal.
∎
When c≤43, the optimal mechanism only has two menu items.
######
Theorem 5.4.
For any 1≤c≤43, suppose that the buyer’s type is uniformly distributed among the set T={(v1,v2) | v1c+v2≤1,v1≥0,v2≥0}. Then the optimal menu contains the following two items:
(0,0),0 and
(1,1),√c3.
One can prove Theorem [5.4](#S5.Thmtheorem4 "Theorem 5.4. ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") with the same trick in Lemma [5.2.1](#S5.SS2.SSS1 "5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"). We omit the proof of this theorem since it is easier compared to the other case described in Theorem [5.3](#S5.Thmtheorem3 "Theorem 5.3. ‣ 5.2.1 Optimal mechanisms for selling two items with correlated distributions ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
####
5.2.2 Optimal mechanisms under limited menu size constraints
In this section, we consider the optimal 3-Menu Mechanisms for value distribution U[0,1]2.
######
Theorem 5.5.
The optimal symmetric at-most-three-menu mechanism for two additive items
with v∼U[0,1]2 is to sell the bundle of two items at price
√6/3, yielding revenue 2√6/9≈0.54433.
We defer the proof to [Appendix A](#A1 "Appendix A Missing Proofs ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
######
Theorem 5.6.
The optimal at-most-three-menu mechanism for two additive items with v∼U[0,1]2 is to sell the first item at price 2/3 or the bundle of
two items at price 5/6, yielding revenue 59/108≈0.546296.
By symmetry, the mechanism could also be selling the second item at price
2/3 or the bundle of two items at price 5/6. In particular, these
is no other at-most-three-menu mechanisms could generate as much revenue as
they do.
We demonstrate the proof through the basic parametric method. Note that there
must be a zero menu Z=[(0,0),0], and hence we have two menus to
determine. Suppose that the remaining two menus are A=[(α,β),p]
and B=[(γ,δ),q]. We then solve the following problem:
| | | | |
| --- | --- | --- | --- |
| | maximize\textscRev(A,B,Z)subject toα,β,γ,δ∈[0,1], p,q≥0. | | (3\textscMenu) |
To establish the connection between the menus and the revenue, let SA be
the set of values that menu A is the most preferred:
| | | |
| --- | --- | --- |
| | SA={(v1,v2)∈[0,1]2|(v1,v2)⋅(α,β)−p≥(v1,v2)⋅(γ,δ)−q ∧ (v1,v2)⋅(α,β)−p≥0}. | |
Similarly, we define SB and SZ be the set of values where menu B and
menu Z are the most preferred, respectively:
| | | |
| --- | --- | --- |
| | SB={(v1,v2)∈[0,1]2|(v1,v2)⋅(γ,δ)−q≥(v1,v2)⋅(α,β)−p ∧ (v1,v2)⋅(γ,δ)−q≥0}, | |
| | SZ={(v1,v2)∈[0,1]2|0≥(v1,v2)⋅(α,β)−p ∧ 0≥(v1,v2)⋅(γ,δ)−q}. | |
For any measurable set S⊆[0,1]2, let |S|=Pr[(v1,v2)∈S] be the probabilistic measure of S. Then the revenue of the mechanism
with menus A, B, and Z is
| | | | |
| --- | --- | --- | --- |
| | \textscRev(A,B,Z)=|SA|⋅p+|SB|⋅q. | | (3\textscMenu\textscRev) |
With the above formulation, there are two major challenges to solve the
program ([3\textscMenu](#S5.Ex30 "(3Menu) ‣ 5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")):
* There are too many possible cases with different formulas of |SA|
and |SB|, hence the formula of \textscRev(A,B,Z). In particular,
there are 4 possible intersection patterns between the boundary of
the square [0,1]2 and the intersection of each two of the menus
(SA∩SB, SB∩SZ, SZ∩SZ). Hence roughly 43=64 different cases.
* Even within each specific case, the revenue Rev is still a
high-order function with 6 variables. In general, there is no
guarantee for closed-form solutions.
To overcome these two challenges, the following two lemmas are critical to
reducing both the number of different cases and free variables:
######
Lemma \thelemma.
Without loss of generality, we can assume that the optimal at-most-three-menu
mechanism includes bundling, (1,1), as one of its menu.
###### Proof of [subsubsection 5.2.2](#S5.SS2.SSS2 "5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
Without loss of generality, suppose that p≥q, and then there must be
an optimal mechanism with α=β=1. Because by replacing menu
A with menu A′=[(1,1),p], the set of values where A′ dominating
B and Z, S′A′ will be a superset of SA, and similarly, S′Z
will be a subset of SZ, i.e., S′A′⊇SA and S′Z⊆SZ. Therefore,
| | | |
| --- | --- | --- |
| | \textscRev′=|S′A′|⋅p+|S′B|⋅q=|S′A′|⋅(p−q)+(1−|S′Z|)⋅q≥|SA|⋅(p−q)+(1−|SZ|)⋅q=\textscRev. | |
∎
######
Lemma \thelemma ([[20](#bib.bib20), Proposition 2]).
For v∼U[0,1]2, consider a mechanism with a menu (γ,δ)
such that γ,δ≠1 and (γ,δ)≠(0,0), then
by replacing the menu with (γ′,δ′) (the price of the menu may
also be different), the revenue of the new mechanism is no less than the
original mechanism, where γ′=1 or δ′=1 or (γ′,δ′)=(0,0).
| | | |
| --- | --- | --- |
| Three possible cases for the proof of
(a) Case 1
| Three possible cases for the proof of
(b) Case 2
| Three possible cases for the proof of
(c) Case 3
|
Figure 8: Three possible cases for the proof of [Theorem 5.6](#S5.Thmtheorem6 "Theorem 5.6. ‣ 5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
###### Proof of [Theorem 5.6](#S5.Thmtheorem6 "Theorem 5.6. ‣ 5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks").
By [subsubsection 5.2.2](#S5.SS2.SSS2 "5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), we can fix α=1 and β=1.
Moreover, without loss of generality, we could focus on the cases with p>q. Otherwise, the menu B will be dominated by menu A and menu Z,
i.e., SB=∅, hence reduced to a two-menu mechanism, where the
optimal revenue is at most 2√6/9.
Similarly, by [subsubsection 5.2.2](#S5.SS2.SSS2 "5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), we can fix one of γ and δ
to be 1, without loss of generality, γ=1. Note that in the case
with (γ,δ)=(0,0), menu B will be dominated by menu Z,
hence reduced to a two-menu mechanism again.
Therefore, we remain to solve ([3\textscMenu](#S5.Ex30 "(3Menu) ‣ 5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) with additional
constraints: α=β=γ=1 and p>q.
Now consider the values v=(v1,v2) in SA∩SB, which must
satisfy:
| | | |
| --- | --- | --- |
| | SA∩SB:(v1,v2)⋅(1,1)−p=(v1,v2)⋅(1,δ)−q. | |
Similarly,
SA∩SZ:(v1,v2)⋅(1,1)=p,SB∩SZ:(v1,v2)⋅(1,δ)=q,
and hence
SA∩SB∩SZ:v∗1=q−δp1−δ,v∗2=p−q1−δ.
Note that if SA or SB is empty, there would be only two menus and the
revenue cannot be more than 2√6/9. Otherwise:
* For SA not being empty, we must have v∗2<1, hence:
| | | | |
| --- | --- | --- | --- |
| | p−q1−δ<1; | | (NonEmptyA) |
* For SB not being empty, we must have v∗1<1, hence:
| | | | |
| --- | --- | --- | --- |
| | q−δp1−δ<1. | | (NonEmptyB) |
Based on the constraints ([NonEmptyA](#S5.Ex37 "(NonEmptyA) ‣ 1st item ‣ 5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")) and ([NonEmptyB](#S5.Ex38 "(NonEmptyB) ‣ 2nd item ‣ 5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")), there are three
possible cases (see [Figure 8](#S5.F8 "Figure 8 ‣ 5.2.2 Optimal mechanisms under limited menu size constraints ‣ 5.2 Theoretically Provable Optimal Mechanisms ‣ 5 Experiments and Analysis ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks")). The solutions under these cases
are summarized by the following lemmas.
######
Lemma \thelemma (Case 1).
Conditional on p≤1, the optimal mechanism consists of asymmetric
three menus A:[(1,1),5/6], B:[(1,0),2/3], Z:[(0,0),0],
and yields revenue 59/108.
######
Lemma \thelemma (Case 2).
Conditional on p≥1>q, the optimal mechanism yields revenue 14/27.
######
Lemma \thelemma (Case 3).
Conditional on p>q>1, the revenue of the mechanism is not more than
1/2.
In summary, the optimal mechanism with at most 3 menus is to sell the
first item at price 2/3 or the bundle of two items at price 5/6,
yielding revenue 59/108.
∎
6 Performance
--------------
##### Setup
As our method is very efficient, we were able to perform our experiments on a
laptop (13-inch MacBook Pro, with 2.5 GHz Intel Core i7 CPU, 16 GB RAM)
using TensorFlow. To solve the problems with continuous value distributions in
finite neural networks, we simply discretize the value space. In particular,
the discretization is parameterized by N, which is the number of the
intervals (with length 1/N) in unit length. In other words, there are
N2 squares of size 1/N by 1/N in any unit square. By default, we
set N=100.
###
6.1 Efficiency and Accuracy: Compared with Linear Programs
| | | | |
| --- | --- | --- | --- |
| Running time and converge speed.
(a) Ours with linear program.
| Running time and converge speed.
(b) Average per iter.
| Running time and converge speed.
(c) \textscRev/\textscOpt\textscRev vs # of iters.
| Running time and converge speed.
(d) 1−\textscRev\textscOpt\textscRev vs # of iters.
|
Figure 9: Running time and converge speed.
We compare the running time of our method and the straightforward linear
program approach for the U[0,1]2 setting. In the linear program, the
variables are the allocation x1,x2 and payment p of the values on each
discretized grid (hence O(N2) variables) and the constraints are the
[IC](#S2.Ex4 "(IC) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") and [IR](#S2.Ex5 "(IR) ‣ Direct Mechanism ‣ 2 Preliminaries ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") constraints (hence O(N4) constraints). We use
the basic PuLP package in Python to solve the linear programs. In
[Figure (a)](#S6.F9.sf1 "(a) ‣ Figure 9 ‣ 6.1 Efficiency and Accuracy: Compared with Linear Programs ‣ 6 Performance ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), we compared the execution time of solving the
linear programs with specific N’s (N=10,15,20,25,30) and the
execution time of training our neural network to (i) achieve a mechanism with
at least the same level of acurracy as the one given by the linear program
(for N≤30), and (ii) converge (for N=40,50,200). Note that the
running time of the linear program approach grows very rapidly: for N=30,
it takes 51 mins and we are not able to apply it to N≥40. In
contrast, the training time of our neural network grows much slower (less than
5 mins for N=200, i.e., buyer distribution support of size 40000).
One key advantage of our approach over the linear program is that our problem
size grows linearly in terms of the support size of the buyer’s distribution
(i.e., O(N2)), while the size of the linear program grows quadratically in
terms of the support size (i.e., O(N4)). In [Figure (b)](#S6.F9.sf2 "(b) ‣ Figure 9 ‣ 6.1 Efficiency and Accuracy: Compared with Linear Programs ‣ 6 Performance ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks"), we
also plot the average training time for each iteration, which is in 1∼30 milliseconds.
[Figure (c)](#S6.F9.sf3 "(c) ‣ Figure 9 ‣ 6.1 Efficiency and Accuracy: Compared with Linear Programs ‣ 6 Performance ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") and [Figure (d)](#S6.F9.sf4 "(d) ‣ Figure 9 ‣ 6.1 Efficiency and Accuracy: Compared with Linear Programs ‣ 6 Performance ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") illustrates that our method
converges to the optimal very fast. The relative error also drops very fast
even in the log-scale plot. In particular, Rev is evaluated on the original
continuous distribution U[0,1]2. Hence the gap between Rev and
OptRev cannot drop to zero as we discretized the value distribution.
##### Conclusion
So far, we have shown that our approach is much more efficient than the linear
program appraoch and hence much stronger scalability as well. To completement
the time efficiency, we also show in [Appendix B](#A2 "Appendix B Comparison of Accuracy ‣ Computer-aided mechanism design: designing revenue-optimal mechanisms via neural networks") that our method
also dominates the linear program approach in terms of accuracy.
|
d9ade343-331b-49fd-b7f9-00f2851a18eb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Goodhart's Law and Emotions
Goodhart's Law is an important principle about using a measure to drive action, and there are many examples of Goodhart's law and its importance in human affairs. This essay focuses on how Goodhart's Law applies to human desire in the modern environment.
Emotions do not directly measure the adaptiveness of an action because they are a crude mechanism. They are heuristic, ad hoc and stimulus-dependent. Although emotions are not a direct measure of adaptiveness, they evolved to motivate adaptive behavior, and thus motivation is essentially a proxy for what is adaptive in the current situation. In modern civilization, we are gaming our emotions, thus making them a terrible measure of adaptiveness.
Imagine if the engineer viewed increasing the odometer number as the purpose of the car, and the movement of the car on the road as just a way to increase that number. He would view putting the car up on blocks as progress. That is what modern man is doing with respect to emotions and adaptiveness. He is spinning his emotional wheels and going nowhere.
|
6a1e6899-cb6a-4499-836a-4ccb2216db94
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Darwinian Traps and Existential Risks
> This part 1 in a 3-part sequence summarizes my book, The Darwinian Trap (see part 2 here and part 3 here). The book aims to popularize the concept of multipolar traps and establish them as a broader cause area. If you find this series intriguing contact me at kristian@kristianronn.com if you have any input or ideas.
Global coordination stands as arguably the most critical challenge facing humanity today, functioning both as a necessary component for solving existential risks and as a significant barrier to effective mitigation. From nuclear proliferation to artificial intelligence development and climate change, our inability to collaborate effectively on a global scale not only exacerbates these threats but also perpetuates the emergence of new systemic vulnerabilities if left unaddressed.
In this sequence, I will argue that the root of this coordination problem lies in the very mechanisms that shaped our species: natural selection. This evolutionary process, operating as a trial-and-error optimization algorithm, prioritizes immediate survival and reproduction over long-term, global outcomes. As a result, our innate tendencies often favor short-term gains and localized benefits, even when they conflict with the greater good of our species and planet.
The inherent limitations of natural selection in predicting future optimal states have left us ill-equipped to handle global-scale challenges. In a world of finite resources, competition rather than cooperation has often been the more adaptive trait, leading to the emergence of self-interested behaviors that arguably dominate modern societies. This evolutionary legacy manifests in the form of nationalistic tendencies, economic rivalries, dangerous arms races and a general reluctance to sacrifice immediate benefits for long-term collective gains.
This three-part series summarizes my book: The Darwinian Trap: The Hidden Evolutionary Forces That Explain Our World (and Threaten Our Future).
* Part 1 (the pa
|
b3e2294b-317e-4083-b5ad-e168ad8cd2f6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
More intuitive explanations!
The post on two easy to grasp explanations on Gödel's theorem and the Banach-Tarski paradox made me think of other explanations that I found easy or insightful and that I could share them as well.
1) Here is a nice proof of the Pythagorean theorem:
2) An easy and concise explanation of expected utility calculations by Luke Muehlhauser:
> Decision theory is about choosing among possible actions based on how much you desire the possible outcomes of those actions.
>
> How does this work? We can describe what you want with something called a utility function, which assigns a number that expresses how much you desire each possible outcome (or “description of an entire possible future”). Perhaps a single scoop of ice cream has 40 “utils” for you, the death of your daughter has -274,000 utils for you, and so on. This numerical representation of everything you care about is your utility function.
>
> We can combine your probabilistic beliefs and your utility function to calculate the expected utility for any action under consideration. The expected utility of an action is the average utility of the action’s possible outcomes, weighted by the probability that each outcome occurs.
>
> Suppose you’re walking along a freeway with your young daughter. You see an ice cream stand across the freeway, but you recently injured your leg and wouldn’t be able to move quickly across the freeway. Given what you know, if you send your daughter across the freeway to get you some ice cream, there’s a 60% chance you’ll get some ice cream, a 5% your child will be killed by speeding cars, and other probabilities for other outcomes.
>
> To calculate the expected utility of sending your daughter across the freeway for ice cream, we multiply the utility of the first outcome by its probability: 0.6 × 40 = 24. Then, we add to this the product of the next outcome’s utility and its probability: 24 + (0.05 × -274,000) = -13,676. And suppose the sum of the products of the utilities and proba
|
b9fd4753-72cd-4748-a7cf-9256d52327c8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Smart people should do biology
In school, it often felt like an unspoken rule: the “smart kids” did physics and chemistry, while biology was relegated to those willing to memorize disconnected facts. A high school teacher once told me biology was his least favorite science because “there are just too many things to remember.” He argued that physics offered laws, and chemistry gave the periodic table—but biology? It was a mess of facts with no unifying principles. While I see his point, he was wrong. Biology is not about memorization. Biology is about exploration, grappling with complexity, and engaging in some of the most exciting intellectual challenges of our time.
The Challenge of Biology: Thinking in Systems
Biology’s complexity isn’t a weakness; it’s a strength. Unlike physics, where idealized models simplify reality, or chemistry, where periodic trends provide predictability, biology forces us to confront the messy, dynamic systems of life itself. Even seemingly simple tasks—like growing E. coli in a lab—require navigating intricate dependencies. A friend of mine, who quit a PhD in CRISPR research, described it perfectly: “If I left for a weekend, all my cells would die.” Biology doesn’t let you assume a “spherical cow.” It demands that you grapple with the full richness of life from the start.
Take the example of yeast aging research, as described by Laura Deming in her week-long exploration of whether it’s possible to make yeast immortal. Her approach exemplifies the intellectual thrill of thinking deeply about biology. She started with curiosity, asking herself what it might feel like to “be a yeast cell.” Then, armed with data from BioNumbers and a deep understanding of yeast physiology, she built a mental model of the cell’s inner workings. She questioned how aging affects everything from cell size to ATP concentration, and how sporulation—a process some yeast use to regenerate—might extend lifespan. Her exploration wasn’t just about finding answers; it was about wrestling with the
|
c30c6326-e48a-4581-ab7d-236ecab1379e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Is there a God" for noobs (followup)
After having submitted it here, I published my essay on my web site, then submitted it to reddit (on r/religion and on r/atheism respectively). While I was very pleased by Less Wrong (your feedback were quite informative), reddit was quite… disappointing.
* On r/atheism, my essay was ignored into oblivion in less than an hour. I guess there's just too much activity there, and I got lost in the flood. There's one comment though, which lead me to seriously question my use of the word "universal". I feel that "truth is universal", though accurate, doesn't sound obvious enough. And I'd like to avoid this heavy dependency.
* On r/religion, my essay is still in the front page. There are a few votes (5 ups besides my own, and 2 or 3 downs). No comments at all. I guess there's too little activity here.
Overall, I'm now confident this essay is now good enough for my family to read.
|
419c9f4e-315e-461c-8325-e8a596091ec5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Fact Finding: Do Early Layers Specialise in Local Processing? (Post 5)
This is the fifth post in the Google DeepMind mechanistic interpretability team’s investigation into how language models recall facts. This post is a bit tangential to the main sequence, and documents some interesting observations about how, in general, early layers of models somewhat (but not fully) specialise into processing recent tokens. You don’t need to believe these results to believe our overall results about facts, but we hope they’re interesting! And likewise you don’t need to read the rest of the sequence to engage with this.
Introduction
In this sequence we’ve presented the multi-token embedding hypothesis, that a crucial mechanism behind factual recall is that on the final token of a multi-token entity there forms an “embedding”, with linear representations of attributes of that entity. We further noticed that this seemed to be most of what early layers did, and that they didn’t seem to respond much to prior context (e.g. adding “Mr Michael Jordan” didn’t substantially change the residual).
We hypothesised the stronger claim that early layers (e.g. the first 10-20%), in general, specialise in local processing, and that the prior context (e.g. more than 10 tokens back) is only brought in in early-mid layers. We note that this is stronger than the multi-token embedding hypothesis in two ways: it’s a statement about how early layers behave on all tokens, not just the final tokens of entities about which facts are known; and it’s a claim that early layers are not also doing longer range stuff in addition to producing the multi-token embedding (e.g. detecting the language of the text). We find this stronger hypothesis plausible, because tokens are a pretty messy input format, and analysing individual tokens in isolation can be highly misleading, e.g. when a long word is split into many fragment tokens, suggesting that longer range processing should be left until some pre-processing on the raw tokens has been done, the idea of detokenization.[1]
We tested
|
30097b27-0b4f-4f51-aeed-b83656284bba
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Presentation on Learning
In order to do a better job putting together my thoughts and knowledge on the subject, I precommitted myself to giving a presentation on learning. My specific goal for the presentation is to inform audience members about how humans actually learn and teach them how to leverage this knowledge to efficiently learn and maintain factual and procedural knowledge and create desired habits.
I will be focusing a little on background neuroscience, borrowing especially from A Crash Course in the Neuroscience of Human Motivation. I will heavily discuss spaced repetition, and I will also talk about the relevance of System 1 and System 2 thinking. I will not be talking about research, or about how to discover what to learn; for the purposes of my presentation, people already know what they want or need to learn, and have a fairly accurate picture of what that knowledge or those behaviors look like.
Given that I will only have an hour to speak, I will be unable to explore everything I might like to in depth. Less Wrong (both the site and the community) are my most valuable resource here, so I am asking two things:
1. In one hour, what would you cover if you earnestly wanted to improve people's ability to learn?
2. What background material do I need to ensure fluency with? This should be material that I need to have adequate familiarity with or else risk presenting an error, even if I don't need to present the material itself in any depth.
The audience will be students and faculty in a Computer Science department. In decreasing order of number of members, the audience will be Masters students, seniors, Ph.D candidates, professors; no Junior or lower-level undergraduates, so I will probably use computing analogies that wouldn't make sense in other contexts. Because of the audience, I'm also comfortable giving a fairly information-dense presentation, but since I intend to persuade as well as inform the presentation will not be a report.
|
5c9b0e5a-b2f6-49d1-8f99-53158ad4b52b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What does the FDA actually do between getting the trial results and having their meeting?
It seems that for approving vaccines there's a gap of weeks between the drug company finishing their trial and giving the data to the FDA and the FDA actually making the decision to approve the vaccine. What does the FDA do during that time? What takes weeks?
|
16aee831-83e2-48b6-b5ca-8a1424c42dd5
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
The Defender’s Advantage of Interpretability
**TL;DR:** In this post, I want to argue why Interpretability & Transparency tools have a defender’s advantage if they are used correctly, i.e. they improve alignment much more than new capabilities and therefore mitigate risks of dual use. I draw parallels from biosecurity researchers who have thought about the risks of dual-use and defender’s advantages in more detail and I think that the AI safety community can learn a lot from them. Lastly, I want to point out that not all interpretability tools have a clear defender’s advantage and some interpretability research might still carry a lot of risks when used incorrectly.
I’d like to thank Lee Sharkey and Simon Grimm for their feedback on this post.
Introduction
============
Most technology is dual-use in some way--a knife can be used as a household appliance or as a weapon. However, different technologies have different propensities to be used for good or bad, e.g. more research into walls will likely benefit the defender more than the attacker while more research into the capabilities of viruses benefits attackers more than defenders.
I feel like we, the AI safety community, have not thought enough about which approaches have a clear defender’s advantage or how we could steer existing approaches to have more of a defender's advantage. To my (very limited) understanding, the biosecurity community has thought a bit more about these kinds of dual-use trade-offs. Therefore, we could probably learn some things from them.
In this post, I want to briefly look at some of the possible lessons from biosecurity and see if we can translate them to AI safety. Then I want to argue why interpretability is one of the approaches that plausibly has a defender’s advantage.
I’m certainly not the first person to have come to the conclusion that interpretability is important for alignment. [Chris Olah has made the case for interpretability](https://www.lesswrong.com/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety) for years. [Neel Nanda has provided a long theory of impacts](https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability) of interpretability research. Quintin Pope has made the [case for optimism about interpretability](https://www.lesswrong.com/posts/LHCSZbhbtoLpr7B7u/the-case-for-radical-optimism-about-interpretability). Evan Hubinger has provided [11 proposals to build safe AI](https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai) that are all essentially something+interpretability, has [developed an interpretability tech tree](https://www.lesswrong.com/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree) and [summarized transformer circuits](https://www.lesswrong.com/posts/2269iGRnWruLHsZ5r/transformer-circuits). ARC is working on [ELK](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge) (and related topics) that certainly read to me as if they are intended to prevent deceptive alignment. There are many further good posts on aspects of interpretability (see e.g. [here](https://www.lesswrong.com/posts/EhAbh2pQoAXkm9yor/circumventing-interpretability-how-to-defeat-mind-readers), [here](https://www.lesswrong.com/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking), [here](https://www.lesswrong.com/posts/QirLfXhDPYWCP8PK5/transparency-and-agi-safety), or [here](https://www.lesswrong.com/posts/57fTWCpsAyjeAimTp/interpretability-in-ml-a-broad-overview-2)).
The reason why I add this post to the long list of posts arguing for the importance of interpretability is that I feel like the “defender’s advantage” framework allows for an easy way to decide which kind of interpretability research will help more with alignment than with capabilities and thus alleviates one major concern that some people have against it (personal conversations, not sure if someone wrote this down).
Lessons from Biosecurity
========================
Most of the following comes from personal discussions with biosecurity researchers or podcasts like “[Hear This Idea”s interview](https://podcasts.apple.com/us/podcast/kevin-esvelt-and-jonas-sandbrink-on-risks-from/id1496501781?i=1000575938616) with Kevin Esvelt and Jonas Sandbrink. I’m not a biosecurity researcher myself and the following is likely to lack nuance.
1. **Gain-of-function(Enhancement of potential pandemic pathogens) = bad**: More specifically, approaches that require us to build a new capability in order to learn how to safeguard against it makes offensive scaling easier than defensive scaling, e.g. the new capability enables the attacker to do more new things than the defender. Firstly, you have created a deadly virus with certainty but the development of the vaccine is uncertain--you stacked the odds against you. Secondly, even if your vaccine (or other defense) is successful, it’s likely easy to modify the new deadly virus in ways that circumvent this defense. Additionally, even if a bad actor just uses the exact virus you created, it’s not clear that we’d be able to roll out the new vaccine fast enough. In some sense, the strategy is too specifically tailored to the problem you just created yourself and thus the potential damage is larger than the potential gains.
2. **Broad spectrum vaccines = good:**We might be able to design vaccines against entire families of viruses, e.g. all coronaviruses rather than just Covid19 or a specific wave of Covid19. Broad spectrum vaccines have a favorable risk profile, as they don’t require the identification of highly pathogenic viruses, and secondly, guard against a swath of different viruses within the same group of viruses. Therefore, they guarantee broad defence, without requiring detailed knowledge or experiments that could go wrong or be misused. Thus they have a more robust risk profile (though some defences are even more robust, such as PPE, pandemic shelters, or ventilation.)
3. **Preparation & rapid deployment = good**: A rapidly spreading pandemic can realistically infect large parts of the world’s population within 100 days. This is likely not enough time to understand the virus and develop, produce and distribute the vaccine. Therefore, being well-prepared, e.g. with broad spectrum vaccines, large vaccine production facilities, large stockpiles of PPE, etc. likely decreases the damage done by the pandemic. However, all of these techniques are unlikely to increase the spread or enable active misuse of the virus, therefore, they create a defender’s advantage.
In summary, a) some defensive tools do not require novel capabilities, e.g. broad spectrum vaccines, better PPE or better ventilation, and b) the knowledge of defensive insights can sometimes be used intentionally or accidentally to create more powerful offensive tools. Thus, we should keep the offensive-defensive scaling in mind when creating a new tool.
I’m probably missing a lot of nuance and some important points but even these fairly general ideas can already be translated to AI safety--at least to some extent.
The Defender’s Advantage of Interpretability
============================================
For the following section, I will use Interpretability in a very broad sense, i.e. including mechanistic interpretability but also more high-level approaches that aim to understand NNs (sometimes called “Science of DL”).
My reasons to think that interpretability has a defenders advantage include
1. **New interpretability tools improve most alignment research but not most capabilities:**If someone develops a tool that makes interpreting the neural network really easy, this would immediately, without further work, improve alignment because we could directly act on new information, e.g. turn off dangerous AIs (if this is still possible). While some of this information could be used to improve the capabilities of the AI, it requires further work to do that (i.e. you still have to develop the capability improvements). Due to interpretability tools, this work might be easier but it is still more costly than the immediate insights for alignment.
However, it is important to point out that this advantage differs between interpretability applications. Understanding one specific phenomenon really well might have nearly no defender’s advantage while very general interpretability methods like circuits have a clearer defender’s advantage.
2. **Does not require new capabilities:**Interpretability tools can usually be applied to all levels of capabilities (might not hold true for highly capable AIs if they want to hide information), e.g. we can use them on small MLPs, and large LLMs. Other alignment approaches sometimes require a specific level of capabilities to work such as [OpenAI’s approach to automate alignment research](https://openai.com/blog/our-approach-to-alignment-research/) or the translator head in [ELK](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge).
3. **Does not require deployment:**Interpretability tools can be used during training, e.g. to monitor the emergence of dangerous behavior. We might also be able to run only subparts of the network to understand them which removes the risk of running the entire network or deploying it to detect a specific capability.
4. **Is fairly general:**In theory, we should be able to develop interpretability tools for all kinds of DL systems and learnings from one likely translate to others, e.g. the idea of circuits translated from CNNs to LLMs.
5. **Preparation & rapid deployment:**Interpretability tools don’t have a strong preparation advantage because you need to have a network to interpret it. However, if we are ever able to scale and automate interpretability tools to a level that you can very quickly interpret networks then rapid deployment of interpretability tools might be possible. If the deployment is rapid enough, we could use interpretability tools during training to monitor and react to the formation of potentially dangerous circuits. I’m uncertain though if this will be realistic in the near future.
6. **Level of necessity:**In general, I feel like interpretability is approximately necessary for alignment but not for capabilities (echoing the sentiment of [Evan Hubinger](https://www.lesswrong.com/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree)). With capabilities, you can try new things and see if they improve your desired metric. Understanding the network better might help you to come up with an idea to improve the metric but it is by no means necessary. With alignment, on the other hand, I don’t really see how we get around “understanding the system in great detail” in the long run. We might be able to align a network with adversarial training but we would still “want to double-check” if it actually learned the right concept. Furthermore, if we think that [deceptive alignment is where the big risks come from](https://www.lesswrong.com/posts/A9NxPTwbw6r6Awuwt/how-likely-is-deceptive-alignment), interpretability (or related ways to understand the network such as ELK) are the most straightforward way to defend.
Conclusion
==========
I argue that many forms of interpretability and transparency have a defender’s advantage, i.e. that they are more likely to help with alignment than with developing new capabilities. However, results from interpretability investigations should still be handled with care. Specific use cases and types of interpretability can still carry substantial risk of increasing capabilities without meaningfully increasing alignment. For example, I think there is some chance that [Neel Nanda’s mechanistic analysis of grokking](https://www.lesswrong.com/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking) will lead to capability improvements in the long run. I still think it was correct to publish these results on balance but one should think about possible harms beforehand (and I expect Neel to have done that). I expect the situations where the defender's advantage doesn’t hold anymore to be “we understood the system well enough to make it better but not really why it got better” similar to how our current understanding of scaling laws allows us to build more capable models but we don’t really understand why.
To offer a simple solution, there is always the option to share results only with a select group of people rather than publishing them or doing research that is [private by default](https://forum.effectivealtruism.org/posts/eAa7RagRaiugeSqAG/conjecture-internal-infohazard-policy).
Furthermore, I want to encourage other AI safety researchers to apply the defender’s advantage framework more generally and pursue research that has a high chance to help with alignment while not increasing capabilities unnecessarily.
|
059a905a-0859-4203-874a-3430dd08923a
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
New blog location
*[Update: This post is out of date. My blog has now moved again, to <https://bounded-regret.ghost.io/>.]*
I’ve decided to move this blog to my personal website. It’s now located here: <https://jsteinhardt.stat.berkeley.edu/blog/>, including all the old posts and comments, plus some new and upcoming posts :).
|
4e2ae2c4-9580-4c1f-99e9-8edf489115d1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
My Dating Heuristic
I don’t have to practice being afraid of a lion charging at me—my instincts tell me to run. But when I started dating, my instincts weren’t that reliable when attempting to attract a partner. They needed to be recalibrated. Author Matthew Hussey talks about retraining your (likely faulty) dating instincts in his book Love Life:
> One of the love life myths is that somehow love is a special realm where we can be guided by instinct. But this assumes that in childhood we all developed great instincts for every situation.
>
> [As an example], in the early stages of attraction…there’s a temptation to just surrender to the feeling, clear our schedule, and see if they’re game to fly to Paris together. [This is all for a person who] wasn’t even on our radar a month ago. We give in to our romantic instincts and rocket into a realm of fantasy romance.
This instinctive, emotional reaction is similar to what author Daniel Kahneman calls System 1 thinking in his book Thinking, Fast and Slow. System 1 thinking is that quick, off-the-cuff response we have for certain situations. Contrast that with System 2 thinking which is slower, more deliberative, and more logical.
It’s System 1 thinking that used to get me in trouble when it came to my dating life. For example, when partners would pull away, my instincts told me to push more to try to keep them around. Or, when partners I’m enamored with did something rude in front of my friends, I would immediately start rationalizing their behavior as “just one of their fun quirks”. What changed my dating life for the better is when I started to integrate more System 2 deliberate thinking. What changed is when I developed the following heuristic[1]:
My Dating Heuristic → when in a dating context, I ask myself: what would an emotionally healthy person do? Then I do that.
Cosplaying as a mature adult
When I invented my dating heuristic, everything that used to suck about dating started to suck less. Why? Because in asking what an em
|
08d9afef-a9b5-456b-b97f-daefc7e58cbc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Moral frameworks and the Harris/Klein debate
Here's a good background post and analysis on the debate (this has been linked from elsewhere on LW before): https://everythingstudies.com/2018/04/26/a-deep-dive-into-the-harris-klein-controversy/
Like many, I couldn't help but be fascinated by the Sam Harris/Ezra Klein debate. These are two people I really look up to, and so seeing them going at it (and showing a lot of personal weakness along the way) has been illuminating. I'm still unsettled about it, wanting there to be resolution/a right answer. So far that satisfaction has eluded me, so I wrote this to try to clarify things for myself. Maybe it helps others too.
The analysis below is meant as a steelman of each side's positions. If you think I'm not steelmanning them well enough, please leave a comment and I'll improve.
Consequentialist framework:
* Sam: As a lesson in how to think for yourself, hold Murray up as someone who has discovered truths that society doesn't like to talk about. As a general policy, this practice will lead to truths being uncovered faster, leading to a faster pace of discovery, which compounds over time to a much better world through science.
* Ezra: Make a public example of Sam here, leading to more people recognizing their own privilege and putting their actions in the appropriate historical context. As a general policy, this practice will lead to a more equitable society, which compounds over time to a much better world by reducing suffering directly.
Virtue ethics framework:
* Sam: It is virtuous to signal-boost things which are true, especially when they are being suppressed by society. "Speak the truth though your voice may tremble"
* Ezra: It is virtuous to defend the underprivileged by calling out harms, even unintentional harms. "Evil is the silence of the voice of justice when it matters most"
Deontology framework:
* Sam: Thou shalt update on all available data.
* Ezra: Thou shalt not invoke long-buried demons of oppression.
Both these moral frameworks look pr
|
3668f468-38e3-45c9-9a04-e9e412e103c2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Transhumanist Fables
Once upon a time there were three little pigs who went out into the world to build their houses. The first pig was very lazy and built his house out of straw. The second pig was a little harder-working and built his house out of sticks. The third pig was the hardest-working of all, and built his house out of bricks. Then came the Big Bad Wolf. When he saw the house of straw, he huffed and he puffed and he blew the house down, eating the first little pig. When he saw the house of sticks, he huffed and he puffed and he blew the house down, eating the second little pig. When he saw the house of bricks, he got out a bazooka and blew the house to pieces, eating the third little pig.
Moral: Reality doesn’t grade on a curve.
----------------------------------------
Once upon a time there was a big strong troll who lived under a bridge. A little goat went across the bridge, and the troll reached out to grab and eat the goat. “Wait, Mr. Troll!”, the goat cried. “Soon my brother is coming, and he is even bigger than I am!” The troll let the goat pass, and soon came another goat, twice as big as the first. The troll reached out to grab and eat him, but the brother likewise objected, saying his brother was even bigger. Sure enough, a third goat arrived at the bridge, twice as big as the second, and the troll, now ready for a very hearty dinner, reached out to grab and eat him. “Wait!” said the third goat. “My brother is the biggest of us all!”. So the troll let the third goat pass. Then came the fourth goat, who was hundreds of miles tall and blotted out the sun, whose very steps caused earthquakes and made the rivers change course. Without even noticing, he stepped on bridge and troll, pulverizing both to bits.
Moral: Sometimes growth is superexponential.
----------------------------------------
Once upon a time, Chicken Little ran to her friend Henny Penny. “The sky is falling!” she shouted. “We must tell the king!” Henny Penny joined her, and together they headed towar
|
3a11d1d9-3d37-4806-97a7-e6b3becf3945
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Why you might expect homogeneous take-off: evidence from ML research
*This write-up was produced as part of the SERI MATS programme under Evan Hubinger’s mentorship. It is also my first post on LW, so feedback is very welcome!*
Introduction
============
This article aims to draw a connection between recent ML research and the claim that future advanced AI systems may be homogenous. First, I briefly review [this article](https://www.alignmentforum.org/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios), where the idea of homogenous take-off is introduced. Then, I outline two different arguments why you might update in the direction of homogenous take-off. For each of the arguments I mention key uncertainties that I have about the argument itself, as well as broader open questions.
TL; DR
------
I present two reasons to believe that as models become larger they also become more homogenous, i.e. they behave more similarly to each other:
* Variance between models behaves unimodally in the overparameterised regime: it peaks around the interpolation threshold, then decreases monotonically. Decreased variance means that models make similar predictions across different training runs (captured as variance from initialisation) and different sampling of the training data (variance from sampling);
* Neural networks have a strong simplicity bias even before training, which might mean that multiple training runs with different hyperparameters, initialisation schemes etc. result in essentially the same model.
I’ve somewhat updated in the direction of homogenous take-off as a result of these arguments, though I think that there are still ways in which it’s unclear if e.g. decreasing variance with size rules out heterogeneity.
What’s homogeneous take-off?
============================
There are several axes along which different AI takeoff scenarios could differ: speed, continuity, and number of main actors. [Homogeneity vs. heterogeneity in AI takeoff scenarios](https://www.alignmentforum.org/posts/mKBfa8v4S9pNKSyKK/homogeneity-vs-heterogeneity-in-ai-takeoff-scenarios) introduces a new way to look at a potential take-off, through the lens of model **homogeneity**. Homogeneity intuitively refers to how similar models are at any given time given some definition of similarity. We might specifically refer to homogeneity with regards to alignment, which, again intuitively, means “models are more or less aligned to the same degree” (or: “aligned models will not coexist with unaligned models”).
More formally we mean something like models having similar properties, e.g. alignment properties. In my mind, an alignment property might be something like “corrigibility” or “truthfulness”, though it’s unclear to me to what extent two models which are, say, truthful, are also homogenous. I think working toward a clearer, more precise definition of homogeneity is probably useful in determining what actually counts as evidence for homogenous systems being more likely, though I don’t try to do so in this write-up.
The article sets out a list of arguments supporting the idea of homogenous take-off, which I parse as “evidence from the economics of large scale machine learning”. Without going into too much detail – I recommend reading the original article for the full arguments –, these are:
1. **Training a model is more expensive than running it**. This is a relatively straightforward claim which extrapolates from the landscape we have today, where some large language models reportedly have had training budgets in the millions of US dollars, with comparatively little cost to run inference/serve the models themselves once trained.
2. **Training models from scratch is not competitive once the first advanced system is released.**To me this follows from 1., in the sense that if it is economically useful to deploy more than one model simultaneously, it’s likely that the additional models will be copies of the original (perhaps fine-tuned on different tasks) rather than new models trained from scratch.
3. **Copying is more competitive than (untested) alternative approaches.**Here I think it’s worth disentangling two ways of copying:
1. **Direct copying** of the first advanced system, possibly by using the same weights, or by running the exact same training process. There are reasons to believe that direct copying might not be possible or even desirable, since e.g. states might not want to use a competing state’s advanced system.
2. **Indirect copying** is using the same techniques as the creators of the original system, but not identical training runs/hyperparameters. This scenario seems more likely to me, and it’s here where the arguments I present on variance/simplicity bias are most important, since they show that different runs may not necessarily result in different models.
4. **Homogeneity is preserved during initial takeoff.**Here the argument is that later generations of AIs will also be homogenous, at least with respect to alignment. This is because we either use the first generation systems to align the next generations, or we use ~the same techniques we used on the 1st generation to align the next generations. The idea is that both approaches result in systems with the same alignment properties. It’s unclear to me whether the same kind of argument holds for something other than alignment properties – and if not, why not.
In this article I want to present two technical arguments from recent ML research that support the idea of homogenous take-off. First, as models become larger, variance between models decreases. Second, neural networks seem to be biased toward simple solutions even before training.
Argument from variance
======================
Bias-variance decomposition
---------------------------
One of the main practical insights of statistical learning theory is related to the bias-variance decomposition of mean squared error. In this section I’ll be introducing the concepts of bias and variance and discussing their importance in the classical and the overparameterised regimes. I’ll be using the notation from [Adlam & Pennington, 2020](https://proceedings.neurips.cc/paper/2020/file/7d420e2b2939762031eed0447a9be19f-Paper.pdf).
The main idea is that for a supervised learning task where a model ^y.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
is minimising mean squared error on a training set Dtr, we can decompose the on a test point x∈Dte as:
E[^y(x)−y(x)]2=(E^y(x)−Ey(x))2+V[^y(x)]+V[y(x)]where y(x) is the ground truth. The first term is the squared bias, the second is the variance and the third is irreducible noise in the test data. The randomness in these variables is canonically taken to come from sampling noise, though as we’ll see shortly there are other sources too.
Looking at the terms more closely, bias is how much the average prediction – across models trained on different realisations of the training set – differs from the ground truth. Bias is typically interpreted as error resulting from incorrect assumptions about the data, a phenomenon otherwise known as underfitting. For example, trying to interpolate a polynomial function with a linear model leads to high bias.
The second term, variance, refers to how much each individual model’s prediction differs from the average prediction, and has historically been tied to overfitting to noise. If models’ predictions on the test set vary widely, it’s because they’ve all learned spurious correlations in their specific training set which do not hold on test data. The last term is irreducible noise in the test set.
Ideally, we’d like both bias and variance to be small. However, in practice, bias and variance seem to trade off against each other as model capacity is increased, resulting in the typical U-shaped curve for test risk in Figure 1a. This trade-off implies that to achieve the optimal test risk, a model should aim for the “sweet spot” where the trade-off is optimal. If models are larger than the optimal size, variance increases, and the result is overfitting to the training data and poor generalisation performance on test data. If they are smaller, they underfit the training data and do relatively poorly on all datasets.
This is the received wisdom from the bias-variance decomposition in the “classical” regime, and the dominating view pre-deep learning. This is mostly invalidated by deep neural networks, which generalise well despite being very large relative to the datasets they are trained on. The phenomenon of double descent (see e.g. [Belkin et al., 2019](https://www.pnas.org/doi/full/10.1073/pnas.1903070116)) illustrates the capacity of large models to interpolate (i.e. perfectly fit) the data, and yet perform well to unseen data. In Figure 1b, as model size is increased, we move from the classical U-shaped risk curve to a peak at the interpolation threshold, with risk decreasing monotonically past the peak to values that are below the previous minimum.

**Figure 1. From**[**Belkin et al., 2019**](https://www.pnas.org/doi/full/10.1073/pnas.1903070116). A visual representation of (a) the underparameterised regime, where the bias-variance trade-off occurs as models increase in capacity (here formalised as the size of a hypothesis class H) and (b) the modern overparameterised regime, where test risk decreases despite models which are large relative to the size of their training datasets.
Double descent and variance
---------------------------
Several recent papers ([Yang et al., 2020](http://proceedings.mlr.press/v119/yang20j/yang20j.pdf), [Lin & Dobriban, 2021](https://www.jmlr.org/papers/volume22/20-1211/20-1211.pdf), [Adlam & Pennington, 2020](https://proceedings.neurips.cc/paper/2020/file/7d420e2b2939762031eed0447a9be19f-Paper.pdf)) examine double descent through the lens of the bias-variance decomposition. Broadly speaking, the main finding is that variance behaves unimodally – it increases, peaks, and then decreases monotonically. Depending on the magnitude of bias relative to variance, several test risk curves can be obtained, including double descent – see Figure 2. below.
The important observation here is that as models increase in size, variance decreases. To zoom out a bit, remember that variance captures the degree to which models differ in their predictions across different training runs. Decreasing variance means that models become more homogenous. In a sense, this follows directly from double descent, since we know that bias decreases with size and that after the interpolation threshold test risk decreases monotonically.

**Figure 2. From**[**Yang et al., 2020**](http://proceedings.mlr.press/v119/yang20j/yang20j.pdf)**.** A hypothetical test risk curve plotted against model complexity, alongside its bias and variance components. The three cases are as follows: (a) if bias dominates variance over the entire x-axis, then test risk follows a monotonic decrease; (b) if bias and variance dominate in different regimes, the test risk follows a double descent curve; (c) if variance dominates bias over the entire x-axis, then test risk is simply unimodal – without the initial decrease.
To try and understand what is happening with double descent, the latter two papers focus on decomposing variance into additional sources of randomness (training data sampling, parameter initialisation and label noise) and find that some components behave unimodally, while others increase up to the interpolation threshold and stay constant afterward (e.g. variance due to sampling and due to label noise, see Figure 3j).
There seems to not be any consensus on how to additionally decompose variance (including the order in which to condition on these sources of randomness – because conditioning isn’t symmetrical). Because of this, e.g. [Rocks & Mehta, 2022](https://link.aps.org/pdf/10.1103/PhysRevResearch.4.013201) hint that some studies are led to incorrect conclusions about the relationship between variance and double descent.

**Figure 3. From**[**Adlam & Pennington, 2020**](https://proceedings.neurips.cc/paper/2020/file/7d420e2b2939762031eed0447a9be19f-Paper.pdf)**.** Different decompositions of the bias (B) and the variance (V) of a neural network with hidden layer size n1 and dataset size m. Most useful is the right-most decomposition into variance from sampling (VX), variance from initialisation (VP) and variance from label noise (Vϵ), along with their interaction effects, e.g. VPX,VPXϵ. In figure (j) B,VX and VXϵ converge to a constant value after the interpolation threshold, and their values are no longer sensitive to increase in model size. All other sources of test risk are unimodal: they peak at the interpolation threshold and decrease with model size.
A few side-notes:
* Label noise exacerbates but doesn’t cause double descent, since other components of variance peak even in the absence of label noise (Fig. 3j).
* It turns out that in particular regimes the higher-level interaction effects between different sources of variance dominate the main effects, i.e. Vsi>Vs>Vi, where Vs is variance from sampling, Vi is variance from initialisation and Vsi is the variance from the interaction between the two.
It’s also worth mentioning that these studies take place in the asymptotic setting, i.e. they investigate what happens if the number of samples in the training dataset and dimensionality of the data go to infinity while maintaining a fixed ratio. [Lin & Dobriban, 2021](https://www.jmlr.org/papers/volume22/20-1211/20-1211.pdf) find that this ratio controls the unimodal behaviour of the variance: if the ratio is below a threshold, variance peaks, then decreases; otherwise it increases.
If this analysis is correct, as long as we control the ratio we can ensure that models become more homogenous as they become larger. It’s worth noting that this hasn’t been replicated yet, to my knowledge, and that this unimodal variance explanation for double descent is not the only hypothesis, see e.g. [Kuzborskij et al., 2021](https://proceedings.neurips.cc/paper/2021/file/f754186469a933256d7d64095e963594-Paper.pdf) for an account of DD related to the smallest positive eigenvalue of the feature covariance matrix.
How might this turn out to be false?
------------------------------------
First, it’s possible that the findings from analysis of variance are not robust to changes in architecture or learning task, though at least [Yang et al. 2020](http://proceedings.mlr.press/v119/yang20j/yang20j.pdf) seem to cover quite a few experimental set-ups (including changing architecture and dataset as well as other potentially-less-impactful training hyperparameters). This means that it might be useful to do more experiments to probe the robustness of these findings. If they turn out to scale well/hold across architectures, then this is stronger evidence in favour of homogeneity.
Second, it could be that residual variance – variance that is not eliminated through training – is enough to invalidate the homogeneity hypothesis, in the sense that residual variance could lead to different behaviour/properties of models that exist at the same time. I’m not sure how likely this is, given that the residual variances seem to be quite small – on the order of 10^{-3} according to [Adlam & Pennington, 2020](https://proceedings.neurips.cc/paper/2020/file/7d420e2b2939762031eed0447a9be19f-Paper.pdf) – though of course here the threshold is unknown. (How much variance implies heterogeneity doesn’t seem to be a well-posed question.)
I don’t have a good idea for how to resolve this uncertainty. It seems to me that unless we can find a more precise definition of homogeneity, we can’t say exactly how much residual variance matters.
Things I don’t yet understand
-----------------------------
* How does the fixed-design/random-design decomposition affect the result? For example see [Hastie et al., 2022](https://arxiv.org/pdf/1903.08560).
* Lots of these experiments use random features, and it’s unclear to me why this is more appropriate/easy to analyse than shallow neural networks, which presumably are closer to what we care about.
* Where does the variance from optimisation fit in? Is it the same as variance from initialisation, which is where the optimiser starts? E.g. [Neal et al., 2018](https://arxiv.org/pdf/1810.08591.pdf) mention variance due to optimisation, but they don’t study how bias and variance change during training.
+ They point to variance from optimisation as encompassing: random initialisation and stochastic mini-batching, but they also say that their results hold even with batch gradient descent.
Open questions
--------------
* Should we expect “prediction” homogeneity to translate to alignment properties?
* Why does variance have unimodal behaviour? It might be worth replicating the experiments in [Lin & Dobriban, 2021](https://www.jmlr.org/papers/volume22/20-1211/20-1211.pdf) where they use the parameterisation level and data aspect ratio to control the variance.
+ [Yang et al., 2020](http://proceedings.mlr.press/v119/yang20j/yang20j.pdf) conjecture that it’s regularisation that leads to variance decreasing past the peak, though this seems like a broad remark that does not add much useful information.
Argument from simplicity bias
=============================
We have good empirical evidence that neural networks are biased toward simple functions which fit the data. There’s no consensus on the mechanism behind this bias, but there are lots of competing explanations.
One recent such explanation is that their parameter-function maps are biased toward low-complexity functions; that is, even before training NN architectures induce a strong preference for simplicity. See [this LW article](https://www.lesswrong.com/posts/YSFJosoHYFyXjoYWa/why-neural-networks-generalise-and-why-they-are-kind-of) for a jumping-off point, or go directly to the technical papers: [Valle-Perez, Camargo & Louis, 2018](https://arxiv.org/pdf/1805.08522), [Mingard et al., 2019](https://arxiv.org/pdf/1909.11522), [Mingard et al., 2021](https://www.jmlr.org/papers/volume22/20-676/20-676.pdf).
If this analysis is correct, then various proposed mechanisms for why DNNs generalise which are related to optimiser choice or hyperparameter tuning are only small (or “second-order”) deviations from the posterior PB(f|S) whose bias is essentially induced by the prior P(f).
This might lead you to believe that given a fixed architecture, different initialisation schemes, optimisers, hyperparameters etc. do not contribute substantially to the properties that the trained system has, or, differently put, that different experimental setups do not result in systems which differ in the ways we care about.
This is consistent with the finding that variance decreases with scale, at least if we interpret the findings in Section 5 of [Mingard et al., 2019](https://arxiv.org/pdf/1909.11522) to mean that adding more layers results in stronger bias toward simple functions. I’d be excited about work that directly connects these two insights, especially since we don’t necessarily know yet why variance is unimodal.
I’m a bit more sceptical that this line of argument supports homogeneity directly, mostly because I don’t think that a biased parameter-function map explains all the properties of models found through e.g. SGD (nor do I think the Mingard et al. papers make these claims). If the influence of specific training hyperparameters is enough to induce heterogeneity between runs – again, only gesturing at the concept of heterogeneity rather than defining it – then even if the parameter-function map hypothesis of generalisation is true, it’s evidence *against* homogeneity.
How might this turn out to be false?
------------------------------------
* **The biased prior drives most of the inductive bias, but it doesn’t explain everything.** Even if details about the training setup do not account for most of DNNs’ capacity to generalise, they may still account for some particular property which is relevant from an alignment perspective;
* **Simplicity is not the same as homogeneity.** Even if all functions that NNs tend to find are simple by some measure, it doesn’t mean that they are the same function. There might be some key input where two up-to-then identical functions diverge, which could lead to negative outcomes. Again, it’s possible that we won’t be able to prove that two such functions are the same.
* **Objections to the biased prior hypothesis.**It could be that the biased parameter-function map account of NN generalisation does not scale to larger networks/more complex architectures/other tasks (for some discussion, see [this article](https://www.lesswrong.com/posts/5p4ynEJQ8nXxp2sxC/parsing-chris-mingard-on-neural-networks#Scalability)). This might mean that some other hypothesis better explains NNs’ performance in the overparameterised regime – there are many related to: the stochasticity of gradient descent, the loss landscape (through basins of attraction or through flat minima), NNs’ similarity to GPs, implicit regularisation and others – which might lead us to update away from model homogeneity.
Summary
=======
This article outlines two arguments from recent ML research for why homogenous take-off is a plausible story. One stems from an empirically observed decrease in variance with model size, which is consistent with the double descent phenomenon. The other is a consequence of the finding that neural networks are a priori biased toward simple functions, which means that they are likely to find solutions with similar properties regardless of the particular training parameters.
I think there’s still work to be done on both of these arguments, and I’d be much more willing to update in favour of homogenous takeoff if the findings were more robust or we had a better understanding of e.g. why variance is unimodal. But it seems worthwhile to make this connection and get more people thinking and talking about the likelihood of homogeneity in take-off scenarios.
|
f84f0d70-657c-4b64-8e41-aa4ea3d9f3e1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Jobs that can help with the most important century
Let’s say you’re convinced that AI could make this the most important century of all time for humanity. What can you do to help things go well instead of poorly?
I think the biggest opportunities come from a full-time job (and/or the money you make from it). I think people are generally far better at their jobs than they are at anything else.
This piece will list the jobs I think are especially high-value. I expect things will change (a lot) from year to year - this is my picture at the moment.
Here’s a summary:
Role Skills/assets you'd need Research and engineering on AI safety Technical ability (but not necessarily AI background) Information security to reduce the odds powerful AI is leaked Security expertise or willingness/ability to start in junior roles (likely not AI) Other roles at AI companies Suitable for generalists (but major pros and cons) Govt and govt-facing think tanks Suitable for generalists (but probably takes a long time to have impact) Jobs in politics Suitable for generalists if you have a clear view on which politicians to help Forecasting to get a better handle on what’s coming Strong forecasting track record (can be pursued part-time) "Meta" careers Misc / suitable for generalists Low-guidance options These ~only make sense if you read & instantly think "That's me"
A few notes before I give more detail:
* These jobs aren’t the be-all/end-all. I expect a lot to change in the future, including a general increase in the number of helpful jobs available.
* Most of today’s opportunities are concentrated in the US and UK, where the biggest AI companies (and AI-focused nonprofits) are. This may change down the line.
* Most of these aren’t jobs where you can just take instructions and apply narrow skills.
* The issues here are tricky, and your work will almost certainly be useless (or harmful) according to someone.
* I recommend forming your own views on the key risks of AI - and/or working for an organization whose leadership you’re c
|
c285cb96-a2cb-46a3-89a1-1f1a972a45cf
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Far-Ultraviolet Light in Public Spaces to Fight Pandemic is a Good Idea but Premature
Also posted on my personal blog.
Tl;dr: Far-ultraviolet light has potential as a human-safe germicide, but its safety is not established. In particular, evidence that it is not carcinogenic exists for only one of two mechanisms for ultraviolet carcinogenicity. In addition, use of far-ultraviolet light in public spaces to prevent the spread of SARS-COV-2 or other pathogens leads to a host of other concerns that need to be addressed.
Introduction
In 2017, Nature published a paper that investigated the possibility of using far-UVC light to combat a future influenza pandemic. The paper went mostly unnoticed by non-academics, as is the norm for technical journals, but now with the novel coronavirus and the first pandemic of its kind in 100 years, the public at large is paying attention to ideas from the frontiers and fringes of biology and medicine. Last month, far-UVC’s safe germicidal potential was the subject of a post by Roko Mijic and Alexey Turchin on LessWrong. They call the use of far UVC in public spaces “one of the most promising and neglected ideas for combating the spread of covid-19,” and lament “Why hasn’t this already been considered by relevant authorities? Far-UVC appears in a literature review by WHO, but it is not currently being acted upon as the amount of evidence in favor of safety and efficacy is small.”
I’ve spent the last few weeks educating myself on the literature surrounding far-UVC’s safety, and I’ve come to a clear conclusion. Is the use of far-UVC to combat pandemics in general a good idea? Yes. Should research on it be expanded? Yes. But using far-UVC in public spaces to combat COVID-19 would be way way way premature.
First, a couple of disclaimers:
Disclaimer 1: I am not a biologist or a doctor. I don’t have anything near a professional’s expertise on human biological questions. There may be inaccuracies or misunderstandings throughout this post, although of course I’ve done my best.
Disclaimer 2: My method for research tends to be
|
24b70b78-6d07-47b0-ba5b-1f8916232c96
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
The burden of knowing
It’s clear that we live in unprecedented times. There is a minority of the global population that is in the process of realizing what X-Risk means. Some got it much earlier than others but for the rest of us mere mortals out there it was a bit more difficult to bear. It’s not important if someone is in the camp of 100% doom or of 20% doom. (I dismiss the 100% non-doomers having a normalcy bias issue).
There are many issues arising right now most of which have been discussed and pinpointed by EA and LW communities but I would like to focus on something else. The mental burden of knowing. There are also the obvious objections. Who cares if it’s mentally tough or not. Would it matter if concentration camp inmates sentenced to death could cope with their reality or should we just try to avoid it happening. On the other hand, cancer diagnosis victims can and do get mental help as a default.
That’s precisely the two situations where we put ourselves in right now but with a twist. There was never a scenario where your death would mean also everyone else’s and that is another philosophical question with apparently no answer. Does the fact that all humanity will go with you change the way you feel about it?
In any case, lately I have been feeling like I got a cancer diagnosis along with all the rest of us. It’s not a certain death but there is a high probability of dying much earlier than what was predicted when you were born. On this subject I also have another dichotomy, I think there is no physical death from natural causes. We either all live forever or we all die from terminators. No in-between.
I want to explain my thought process right now and hope to hear others expand on the topic, a form of collective psychotherapy. I have been seeing the topic more and more on Twitter from people I follow breaking for some short time but then the human algo of nature takes over and brings chemical balance in the brain. That is most certainly because I don’t follow many people with mental health issues. That is far from society norms though.
What would it look like for society to realize the risk we are running right now en masse? I am sure many of you have watched Di Caprio’s film about the professor yelling that we are all going to die (Eliezer?) and people dismiss it until they don’t. Will society break down?
In the meantime, how do you live your life? If we follow the great stoics that advised that you need to live your life as if it is your last day and realize that you might die tomorrow. I am privileged and I have not experienced this feeling but I am sure you can experience it in Africa, in Ukraine and many other places around the world. With a twist again. Victory would mean normality again. If Ukraine wins, the said stoic soldiers would feel like their ancestors that returned to a normal (cold-war but still normal) world. That will never be the same for us.
Victory or if you want to call it *alignment* would mean a totally changed world. How fast and soon should one adapt? Will people start to sell all their belongings and yolo on vacations? What is the purpose of planning ahead for pension or family and kids? If timelines are like they seem to be currently, most kids born today would never be adults in a world resembling the one we grew up in and we have no idea how that would look like. I have heard that some from the EA community decided to have no kids because of these thoughts.
I realize now I wrote too many words to ask a simple question: *How does Eliezer sleep at night?*
|
ed8ff594-281f-4057-abc4-342d2451785d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
By Default, GPTs Think In Plain Sight
Epistemic status: Speculation with some factual claims in areas I’m not an expert in.
Thanks to Jean-Stanislas Denain, Charbel-Raphael Segerie, Alexandre Variengien, and Arun Jose for helpful feedback on drafts, and thanks to janus, who shared related ideas.
Main claims
* GPTs’ next-token-prediction process roughly matches System 1 (aka human intuition) and is not easily accessible, but GPTs can also exhibit more complicated behavior through chains of thought, which roughly matches System 2 (aka human conscious thinking process).
* Human will be able understand how a human-level GPTs (trained to do next-token-prediction) complete complicated tasks by reading the chains of thought.
* GPTs trained with RLHF will bypass this supervision.
System 2 and GPTs’ chains of thought are similar
A sensible model of the human thinking process
Here is what I feel like I’m doing when I’m thinking:
Repeat
1. Sample my next thought from my intuition
2. Broadcast this thought to the whole brain[1]
When you ask me what is my favorite food, it feels like some thoughts “pop” into consciousness, and the following thoughts deal with previous thoughts. This is also what happens when I try to prove a statement: ideas and intuitions come to my mind, then new thoughts about these intuitions appear.
This roughly matches the model described in Consciousness and the Brain by Stanislas Dehaene, and I believe it’s a common model of the brain within neuroscience.[2]
How GPTs “think”
Autoregressive text models are performing the same kind of process when they generate text. Sampling text is using the following algorithm:
Repeat:
1. Do a forward pass, and sample the next token from the output distribution
2. Add the generated token to the input. It makes it part of the input for the next forward pass, which means it can be used by lots of different attention heads, including specialized heads at earlier layers.
This looks similar the human thinking process, and the rest of the p
|
16e201a5-5dfe-4cfb-95ad-bc33d7075411
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Candy Innovation
What innovations in candy have there been since the '90s? Are there new flavors? Better imitations of existing flavors? New textures?
So far, all of the candy my kids have brought home seems to be things we could get 25 years ago. Though possibly flavors have improved, since I've only been trying what they've decided to share with me.
There have been some gains due to globalization, where candy that was previously hard to get in the US or unknown here is now more widely available, but has there been development beyond that?
(This post brought to you by yesterday's neighborhood, pinata)
Comment via: facebook
|
5c4633ef-d784-4bad-ae17-90c0aa16a1f0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Linkpost: They Studied Dishonesty. Was Their Work a Lie?
This is a linkpost for Gideon Lewis-Kraus's New Yorker article on the (alleged) Ariely and Gino data fraud scandals. I've been following this situation off-and-on for a while (and even more so after the original datacolada blog posts). The basic story is that multiple famous professors in social psychology (specializing in dishonesty) have been caught with blatant data fraud. The field to a large extent tried to "protect their own," but in the end the evidence became too strong. Francesca Gino has since retreated to attempting to sue datacolada (the investigators).
Despite the tragic nature of the story, I consider this material hilarious high entertainment, in addition to being quite educational.
The writing is also quite good, as I've come to expect from Gideon Lewis-Kraus (who locals might have heard of from his in-depth profiles on Slate Star Codex, Will MacAskill, and the FTX crash).
Some quotes:
> If you tortured the data long enough, as one grim joke went, it would confess to anything. They called such techniques “p-hacking.” As they later put it, “Everyone knew it was wrong, but they thought it was wrong the way it’s wrong to jaywalk.” In fact, they wrote, “it was wrong the way it’s wrong to rob a bank.”
> Ziani [a young grad student] found Gino’s results implausible, and assumed that they had been heavily p-hacked. She told me, “This crowd is used to living in a world where you have enough degrees of freedom to do whatever you want and all that matters is that it works beautifully.” But an adviser strongly suggested that Ziani “build on” the paper, which had appeared in a top journal. When she expressed her doubts, the adviser snapped at her, “Don’t ever say that!” Members of Ziani’s dissertation committee couldn’t understand why this nobody of a student was being so truculent. In the end, two of them refused to sign off on her degree if she did not remove criticisms of Gino’s paper from her dissertation. One warned Ziani not to second-guess a prof
|
5fa12f13-932d-49ea-8c08-73f28ae2a1f3
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Existential Risk Prevention as Global Priority
The maxipok rule Existential risk and uncertainty An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development . Although it is often difficult to assess the probability of existential risks, there are many reasons to suppose that the total such risk confronting humanity over the next few centuries is significant. Estimates of 10-20 per cent total existential risk in this century are fairly typical among those who have examined the issue, though inevitably such estimates rely heavily on subjective judgment. 1 The most reasonable estimate might be substantially higher or lower. But perhaps the strongest reason for judging the total existential risk within the next few centuries to be significant is the extreme magnitude of the values at stake. Even a small probability of existential catastrophe could be highly practically significant Matheny, 2007; Posner, 2004; Weitzman, 2009) . Humanity has survived what we might call natural existential risks for hundreds of thousands of years; thus it is prima facie unlikely that any of them will do us in within the next hundred. 2 This conclusion is buttressed when we analyse specific risks from nature, such as asteroid impacts, supervolcanic eruptions, earthquakes, gamma-ray bursts, and so forth: Empirical impact distributions and scientific models suggest that the likelihood of extinction because of these kinds of risk is extremely small on a time scale of a century or so. 3 In contrast, our species is introducing entirely new kinds of existential risk-threats we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Consideration of specific existential-risk scenarios bears out the suspicion that the great bulk of existential risk in the foreseeable future consists of anthropogenic existential risks-that is, those arising from human activity. In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology. As our powers expand, so will the scale of their potential consequences-intended and unintended, positive and negative. For example, there appear to be significant existential risks in some of the advanced forms of biotechnology, molecular nanotechnology, and machine intelligence that might be developed in the decades ahead. The bulk of existential risk over the next century may thus reside in rather speculative scenarios to which we cannot assign precise probabilities through any rigorous statistical or scientific method. But the fact that the probability of some risk is difficult to quantify does not imply that the risk is negligible. Probability can be understood in different senses. Most relevant here is the epistemic sense in which probability is construed as (something like) the credence that an ideally reasonable observer should assign to the risk's materialising based on currently available evidence. 4 If something cannot presently be known to be objectively safe, it is risky at least in the subjective sense relevant to decision making. An empty cave is unsafe in just this sense if you cannot tell whether or not it is home to a hungry lion. It would be rational for you to avoid the cave if you reasonably judge that the expected harm of entry outweighs the expected benefit. The uncertainty and error-proneness of our first-order assessments of risk is itself something we must factor into our all-things-considered probability assignments. This factor often dominates in low-probability, highconsequence risks-especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons. Suppose that some scientific analysis A indicates that some catastrophe X has an extremely small probability P(X) of occurring. Then the probability that A has some hidden crucial flaw may easily be much greater than P(X). 5 Furthermore, the conditional probability of X given that A is crucially flawed, P(X |ØA), may be fairly high. We may then find that most of the risk of X resides in the uncertainty of our scientific assessment that P(X) was small (Figure 1 ) (Ord, Hillerbrand and Sandberg, 2010) .
Qualitative risk categories Since a risk is a prospect that is negatively evaluated, the seriousness of a risk-indeed, what is to be regarded as risky at all-depends on an evaluation. Before we can determine the seriousness of a risk, we must specify a standard of evaluation by which the negative value of a particular possible loss scenario is measured. There are several types of such evaluation standard. For example, one could use a utility function that represents some particular agent's preferences over various outcomes. This might be appropriate when one's duty is to give decision support to a particular decision maker. But here we will consider a normative evaluation, an ethically warranted assignment of value to various possible outcomes. This type of evaluation is more relevant when we are inquiring into what our society's (or our own individual) risk-mitigation priorities ought to be. There are conflicting theories in moral philosophy about which normative evaluations are correct. I will not here attempt to adjudicate any foundational axiological disagreement. Instead, let us consider a simplified version of one important class of normative theories. Let us suppose that the lives of persons usually have some significant positive value and that this value is aggregative (in the sense that the value of two similar lives is twice that of one life). Let us also assume that, holding the quality and duration of a life constant, its value does not depend on when it occurs or on whether it already exists or is yet to be brought into existence as a result of future events and choices. These assumptions could be relaxed and complications could be introduced, but we will confine our discussion to the simplest case. Within this framework, then, we can roughly characterise a risk's seriousness using three variables: scope (the size of the population at risk), severity (how badly this population would be affected), and probability (how likely the disaster is to occur, according to the most reasonable judgment, given currently available evidence). Using the first two of these variables, we can construct a qualitative diagram of different types of risk (Figure 2 ). Source: Ord et al., 2010. Factoring in the fallibility of our firstorder risk assessments can amplify the probability of risks assessed to be extremely small. An initial analysis (left side) gives a small probability of a disaster (black stripe). But the analysis could be wrong; this is represented by the grey area (right side). Most of the all-things-considered risk may lie in the grey area rather than in the black stripe. (The probability dimension could be displayed along the z-axis.) The area marked 'X' in Figure 2 represents existential risks. This is the category of risks that have (at least) crushing severity and (at least) pan-generational scope. 6 As noted, an existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or the permanent and drastic failure of that life to realise its potential for desirable development. In other words, an existential risk jeopardises the entire future of humankind.
Magnitude of expected loss in existential catastrophe Holding probability constant, risks become more serious as we move toward the upper-right region of Figure 2 . For any fixed probability, existential risks are thus more serious than other risk categories. But just how much more serious might not be intuitively obvious. One might think we could get a grip on how bad an existential catastrophe would be by considering some of the worst historical disasters we can think of-such as the two world wars, the Spanish flu pandemic, or the Holocaust-and then imagining something just a bit worse. Yet if we look at global population statistics over time, we find that these horrible events of the past century fail to register (Figure 3 ). But even this reflection fails to bring out the seriousness of existential risk. What makes existential catastrophes especially bad is not that they would show up robustly on a plot like the one in Figure 3 , causing a precipitous drop in world population or average quality of life. Instead, their significance lies primarily in the fact that they would destroy the future. The philosopher Derek Parfit made a similar point with the following thought experiment: I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes: 1. Peace. 2. A nuclear war that kills 99 per cent of the world's existing population. 3. A nuclear war that kills 100 per cent. 2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater. The Earth will remain habitable for at least another billion years. Civilisation Source: Author. Note: The scope of a risk can be personal (affecting only one person), local (affecting some geographical region or a distinct group), global (affecting the entire human population or a large part thereof), trans-generational (affecting humanity for numerous generations, or pan-generational (affecting humanity over all, or almost all, future generations). The severity of a risk can be classified as imperceptible (barely noticeable), endurable (causing significant harm but not completely ruining quality of life), or crushing (causing death or a permanent and drastic reduction of quality of life). began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilised human history. The difference between 2 and 3 may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second (Parfit, 1984, pp. 453-454) . To calculate the loss associated with an existential catastrophe, we must consider how much value would come to exist in its absence. It turns out that the ultimate potential for Earth-originating intelligent life is literally astronomical. One gets a large number even if one confines one's consideration to the potential for biological human beings living on Earth. If we suppose with Parfit that our planet will remain habitable for at least another billion years, and we assume that at least one billion people could live on it sustainably, then the potential exist for at least 10 16 human lives of normal duration. These lives could also be considerably better than the average contemporary human life, which is so often marred by disease, poverty, injustice, and various biological limitations that could be partly overcome through continuing technological and moral progress. However, the relevant figure is not how many people could live on Earth but how many descendants we could have in total. One lower bound of the number of biological human life-years in the future accessible universe (based on current cosmological estimates) is 10 34 years. 7 Another estimate, which assumes that future minds will be mainly implemented in computational hardware instead of biological neuronal wetware, produces a lower bound of 10 54 human-brain-emulation subjective life-years (or 10 71 basic computational operations) . 8 If we make the less conservative assumption that future civilisations could eventually press close to the absolute bounds of known physics (using some as yet unimagined technology), we get radically higher estimates of the amount of computation and memory storage that is achievable and thus of the number of years of subjective experience that could be realised. 9 Even if we use the most conservative of these estimates, which entirely ignores the possibility of space colonisation and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10 16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives. The more technologically comprehensive estimate of 10 54 humanbrain-emulation subjective life-years (or 10 52 lives of ordinary length) makes the same point even more starkly. Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilisation a mere 1 per cent chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives. One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any 'ordinary' good, such as the direct benefit of saving 1 billion lives. And, further, that the absolute value of the indirect effect of saving 1 billion lives on the total cumulative amount of existential risk-positive or negative-is almost certainly larger than the positive value of the direct benefit of such an action. 10
Maxipok These considerations suggest that the loss in expected value resulting from an existential catastrophe is so enormous that the objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole. It may be useful to adopt the following rule of thumb for such impersonal moral action: Maxipok Maximise the probability of an 'OK outcome', where an OK outcome is any outcome that avoids existential catastrophe. At best, maxipok is a rule of thumb or a prima facie suggestion. It is not a principle of absolute validity, since there clearly are moral ends other than the prevention of existential catastrophe. The principle's usefulness is as an aid to prioritisation. Unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy. Note that maxipok differs from the popular maximin principle ('Choose the action that has the best worstcase outcome'). 11 Since we cannot completely eliminate existential risk-at any moment, we might be tossed into the dustbin of cosmic history by the advancing front of a vacuum phase transition triggered in some remote galaxy a billion years ago-the use of maximin in the present context would entail choosing the action that has the greatest benefit under the assumption of impending extinction. Maximin thus implies that we ought all to start partying as if there were no tomorrow. That implication, while perhaps tempting, is implausible.
Classification of existential risk To bring attention to the full spectrum of existential risk, we can distinguish four classes of such risk: human extinction, permanent stagnation, flawed realisation, and subsequent ruination. We define these in Table 1 below: By 'humanity' we here mean Earth-originating intelligent life and by 'technological maturity' we mean the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved.
Human extinction Although it is conceivable that, in the billion or so years during which Earth might remain habitable before being overheated by the expanding sun, a new intelligent species would evolve on our planet to fill the niche vacated by an extinct humanity, this is very far from certain to happen. The probability of a recrudescence of intelligent life is reduced if the catastrophe causing the extinction of the human species also exterminated the great apes and our other close relatives, as would occur in many (though not all) human-extinction scenarios. Furthermore, even if another intelligent species were to evolve to take our place, there is no guarantee that the successor species would sufficiently instantiate qualities that we have reason to value. Intelligence may be necessary for the realisation of our future potential for desirable development, but it is not sufficient. All scenarios involving the premature extinction of humanity will be counted as existential catastrophes, even though some such scenarios may, according to some theories of value, be relatively benign. It is not part of the definition of existential catastrophe that it is all-things-considered bad, although that will probably be a reasonable supposition in most cases. Above, we defined 'humanity' as Earth-originating intelligent life rather than as the particular biologically defined species Homo sapiens. 13 The reason for focusing the notion of existential risk on this broader concept is that there is no reason to suppose that the biological species concept tracks what we have reason to value. If our species were to evolve, or use technology to selfmodify, to such an extent that it no longer satisfied the biological criteria for species identity (such as interbreedability) with contemporary Homo sapiens, this need not be in any sense a catastrophe. Depending on what we changed into, such a transformation might well be very desirable. Indeed, the permanent foreclosure of any possibility of this kind of transformative change of human biological nature may itself constitute an existential catastrophe. Most discussion of existential risk to date has focused exclusively on the first of the four classes, 'human extinction'. The present framework calls attention to three other failure modes for humanity. Like extinction, these other failure modes would involve pan-generational crushing. They are therefore of comparable seriousness, entailing potentially similarly enormous losses of expected value.
Permanent stagnation Permanent stagnation is instantiated if humanity survives but never reaches technological maturity-that is, the attainment of capabilities affording a level of economic productivity and control over nature that is close to the maximum that could feasibly be achieved (in the fullness of time and in the absence of catastrophic defeaters). For instance, a technologically mature civilisation could (presumably) engage in large-scale space colonisation through the use of automated self-replicating 'von Neumann probes' (Freitas, 1980; Moravec, 1988; Tipler, 1980) . It would also be able to modify and enhance human biology-say, through the use of advanced biotechnology or molecular nanotechnology (Freitas, 1999 (Freitas, , 2003 . Further, it could construct extremely powerful computational hardware and use it to create wholebrain emulations and entirely artificial types of sentient, superintelligent minds . It might have many additional capabilities, some of which may not be fully imaginable from our current vantage point. 14 The permanent destruction of humanity's opportunity to attain technological maturity is a prima facie enormous loss, because the capabilities of a technologically mature civilisation could be used to produce outcomes that would plausibly be of great value, such as astronomical numbers of extremely long and fulfilling lives. More specifically, mature technology would enable a far more efficient use of basic natural resources (such as matter, energy, space, time, and negentropy) for the creation of value than is possible with less advanced technology. And mature technology would allow the harvesting (through space colonisation) of far more of these resources than is possible with technology whose reach is limited to Earth and its immediate neighbourhood. We can distinguish various kinds of permanent stagnation scenarios: unrecovered collapse-much of our current economic and technological capabilities are lost and never recovered; plateauing-progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity; and recurrent collapse-a never-ending cycle of collapse followed by recovery . 15 The relative plausibility of these scenarios depends on various factors. One might expect that even if global civilisation were to undergo a complete collapse, perhaps following a global thermonuclear war, it would eventually be rebuilt. In order to have a plausible permanent collapse scenario, one would therefore need an account of why recovery would not occur. 16 Regarding plateauing, modern trends of rapid social and technological change make such a threat appear less imminent; yet scenarios could be concocted in which, for example, a stable global regime blocks further technological change. 17 As for recurrent-collapse scenarios, they seem to require the postulation of a special kind of cause: one that (1) is strong enough to bring about the total collapse of global civilisation yet (2) is not strong enough to cause human extinction, and that (3) can plausibly recur each time civilisation is rebuilt to a certain level, despite any random variation in initial conditions and any attempts by successive civilisations to learn from their predecessors' failures. The probability of remaining on a recurring-collapse trajectory diminishes with the number of cycles postulated. The longer the time horizon considered (and this applies also to plateauing) the greater the likelihood that the pattern will be ruptured, resulting in either a breakout in the upward direction toward technological maturity or in the downward direction toward unrecovered collapse and perhaps extinction (Figure 4 ). 18
Flawed realisation A flawed realisation occurs if humanity reaches technological maturity in a way that is dismally and irremediably flawed. By 'irremediably' we mean that it cannot feasibly be subsequently put right. By 'dismally' we mean that it enables the realisation of but a small part of the value that could otherwise have been realised. Classifying a scenario as an instance of flawed realisation requires a value judgment. We return to this normative issue in the next section. We can distinguish two versions of flawed realisation: unconsummated realisation and ephemeral realisation. In unconsummated realisation, humanity develops mature technology but fails to put it to good use, so that the amount of value realised is but a small fraction of what could have been achieved. An example of this kind is a scenario in which machine intelligence replaces biological intelligence but the machines are constructed in such a way that they lack consciousness (in the sense of phenomenal experience) (Bostrom, 2004) . The future might then be very wealthy and capable, yet in a relevant sense uninhabited: There would (arguably) be no morally relevant beings there to enjoy the wealth. Even if consciousness did not altogether vanish, there might be a lot less of it than would have resulted from a more optimal use of resources. Alternatively, there might be a vast quantity of experience but of much lower quality than ought to have been the case: minds that are far less happy than they could have been. Or, again, there might be vast numbers of very happy minds but some other crucial ingredient of a maximally valuable future missing. In ephemeral realisation, humanity develops mature technology that is initially put to good use. But the technological maturity is attained in such a way that the initially excellent state is unsustainable and is doomed to degenerate. There is a flash of value, followed by perpetual dusk or darkness. One way in which ephemeral realisation could result is if there are fractures in the initial state of technological maturity that are bound to lead to a splintering of humanity into competing factions. It might be impossible to reintegrate humanity after such a splintering occurred, and the process of attaining technological maturity might have presented the last and best chance for humanity to form a singleton (Bostrom, 2006) . Absent global coordination, various processes might degrade humanity's long-term potential. One such process is war between major powers, although it is perhaps unlikely that such warring would be never-ending (rather than being eventually terminated once and for all by treaty or conquest). 19 Another such erosive process involves undesirable forms of evolutionary and economic competition in a large ecology of machine intelligences (Hanson, 1994) . Yet another such process is a spacecolonisation race in which replicators might burn up cosmic resources in a wasteful effort to beat out the competition .
Subsequent ruination For completeness, we register a fourth class of existential risks: subsequent ruination. In scenarios of this kind, humanity reaches technological maturity with a 'good' (in the sense of being not dismally and irremediably flawed) initial setup, yet subsequent developments nonetheless lead to the permanent ruination of our prospects. From a practical perspective, we need not worry about subsequent ruination. What happens after humanity Source: Author. Note: The modern human condition represents a narrow range of the space of possibilities. The longer the time scale considered, the lower the probability that humanity's level of technological development will remain confined within the interval defined at the lower end by whatever technological capability is necessary for survival and at the upper end by technological maturity. reaches technological maturity is not something we can now affect, other than by making sure that humanity does reach it and in a way that offers the best possible prospects for subsequent development-that is, by avoiding the three other classes of existential risk. Nonetheless, the concept of subsequent ruination is relevant to us in various ways. For instance, in order to estimate how much expected value is gained by reducing other existential risks by a certain amount, we need to estimate the expected value conditional on avoiding the first three sets of existential risks, which requires estimating the probability of subsequent ruination. The probability of subsequent ruination might be low-and is perhaps extremely low conditional on getting the setup right. One reason is that once we have created many self-sustaining space colonies, any disaster confined to a single planet cannot eliminate all of humanity. Another reason is that once technological maturity is safely reached, there are fewer potentially dangerous technologies left to be discovered. A third reason is that a technologically mature civilisation would be superintelligent (or have access to the advice of superintelligent artificial entities) and thus better able to foresee danger and devise plans to minimise existential risk. While foresight will not reduce risk if no effective action is available, a civilisation with mature technology can take action against a great range of existential risks. Furthermore, if it turns out that attaining technological maturity without attaining singletonhood condemns a civilisation to irreversible degeneration, then if flawed realisation is avoided we can assume that our technologically mature civilisation can solve global-coordination problems, which increases its ability to take effective action to prevent subsequent ruination. The main source of subsequent-ruination risk might well be an encounter with intelligent external adversaries, such as intelligent extraterrestrials or simulators. Note, however, that scenarios in which humanity eventually goes extinct as a result of hard physical limits, such as the heat death of the universe, do not count as subsequent ruination, provided that before its demise humanity has managed to realise a reasonably large part of its potential for desirable development. Such scenarios are not existential catastrophes but rather existential successes.
Capability and value Some further remarks will help clarify the links between capability, value, and existential risk.
Convertibility of resources into value Because humanity's future is potentially astronomically long, the integral of losses associated with persistent inefficiencies is very large. This is why flawed-realisation and subsequent-ruination scenarios constitute existential catastrophes even though they do not necessarily involve extinction. 20 It might be well worth a temporary dip in short-term welfare to secure a slightly more efficient long-term realisation of humanity's potential. To avoid flawed realisation, it is more important to focus on maximising long-term efficiency than on maximising the initial output of value in the period immediately following technological maturation. This is because the quantity of value-structure that can be produced at a given time depends not only on the level of technology but also on the physical resources and other forms of capital available at that time. In economics parlance, humanity's production-possibility frontier (representing the various possible combinations of outputs that could be produced by the global economy) depends not only on the global production function (or 'meta-production function') but also on the total amount of all factors of production (labour, land, physical capital goods, etc.) that are available at some point in time. With mature technology, most factors of production are interchangeable and ultimately reducible to basic physical resources, but the amount of free energy available to a civilisation imposes hard limits on what it can produce. Since colonisation speed is bounded by the speed of light, a civilisation attaining technological maturity will start with a modest endowment of physical resources (a single planet and perhaps some nearby parts of its solar system), and it will take a very long time-billions of years-before a civilisation starting could reach even 1 per cent of its maximum attainable resource base. 21 It is therefore efficiency of use at later times, rather than in the immediate aftermath of the attainment of technological maturity, that matters most for how much value is ultimately realised. Furthermore, it might turn out that the ideal way to use most of the cosmic endowment that humanity could eventually secure is to postpone consumption for as long as possible. By conserving our accumulated free energy until the universe is older and colder, we might be able to perform some computations more efficiently. 22 This reinforces the point that it would be a mistake to place too much weight on the amount of value generated shortly after technological maturity when deciding whether some scenario should count as a flawed realisation (or a subsequent ruination). It is much more important to get the setup right, in the sense of putting humanity on a track that will eventually garner most of the attainable cosmic resources and put them to near-optimal use. It matters less whether there is a brief delay before that happens-and a delay of even several million years is 'brief' in this context . Even for individual agents, the passage of sidereal time might become less significant after technological maturity. Agents that exist as computational processes in distributed computational hardware have potentially unlimited life spans. The same holds for embodied agents in an era in which physical-repair technologies are sufficiently advanced. The amount of life available to such agents is proportional to the amount of physical resources they control. (A software mind can experience a certain amount of subjective time by running on a slow computer for a long period of sidereal time or, equivalently, by running for a brief period of sidereal time on a fast computer). Even from a so-called 'person-affecting' moral perspective, therefore, when assessing whether a flawed realisation has occurred, one should focus not on how much value is created just after the attainment of technological maturity but on whether the conditions created are such as to give a good prospect of realising a large integral of value over the remainder of the universe's lifetime.
Some other ethical perspectives We have thus far considered existential risk from the perspective of utilitarianism (combined with several simplifying assumptions). We may briefly consider how the issue might appear when viewed through the lenses of some other ethical outlooks. For example, the philosopher Robert Adams outlines a different view on these matters: I believe a better basis for ethical theory in this area can be found in quite a different direction-in a commitment to the future of humanity as a vast project, or network of overlapping projects, that is generally shared by the human race. The aspiration for a better society-more just, more rewarding, and more peaceful-is a part of this project. So are the potentially endless quests for scientific knowledge and philosophical understanding, and the development of artistic and other cultural traditions. This includes the particular cultural traditions to which we belong, in all their accidental historic and ethnic diversity. It also includes our interest in the lives of our children and grandchildren, and the hope that they will be able, in turn, to have the lives of their children and grandchildren as projects. To the extent that a policy or practice seems likely to be favorable or unfavorable to the carrying out of this complex of projects in the nearer or further future, we have reason to pursue or avoid it. … Continuity is as important to our commitment to the project of the future of humanity as it is to our commitment to the projects of our own personal futures. Just as the shape of my whole life, and its connection with my present and past, have an interest that goes beyond that of any isolated experience, so too the shape of human history over an extended period of the future, and its connection with the human present and past, have an interest that goes beyond that of the (total or average) quality of life of a population-at-a-time, considered in isolation from how it got that way. We owe, I think, some loyalty to this project of the human future. We also owe it a respect that we would owe it even if we were not of the human race ourselves, but beings from another planet who had some understanding of it (Adams, 1989, pp. 472-473) . Since an existential catastrophe would either put an end to the project of the future of humanity or drastically curtail its scope for development, we would seem to have a strong prima facie reason to avoid it, in Adams' view. We also note that an existential catastrophe would entail the frustration of many strong preferences, suggesting that from a preference-satisfactionist perspective it would be a bad thing. In a similar vein, an ethical view emphasising that public policy should be determined through informed democratic deliberation by all stakeholders would favour existential-risk mitigation if we suppose, as is plausible, that a majority of the world's population would come to favour such policies upon reasonable deliberation (even if hypothetical future people are not included as stakeholders). We might also have custodial duties to preserve the inheritance of humanity passed on to us by our ancestors and convey it safely to our descendants. 23 We do not want to be the failing link in the chain of generations, and we ought not to delete or abandon the great epic of human civilisation that humankind has been working on for thousands of years, when it is clear that the narrative is far from having reached a natural terminus. Further, many theological perspectives deplore naturalistic existential catastrophes, especially ones induced by human activities: If God created the world and the human species, one would imagine that He might be displeased if we took it upon ourselves to smash His masterpiece (or if, through our negligence or hubris, we allowed it to come to irreparable harm). 24 We might also consider the issue from a less theoretical standpoint and try to form an evaluation instead by considering analogous cases about which we have definite moral intuitions. Thus, for example, if we feel confident that committing a small genocide is wrong, and that committing a large genocide is no less wrong, we might conjecture that committing omnicide is also wrong. 25 And if we believe we have some moral reason to prevent natural catastrophes that would kill a small number of people, and a stronger moral reason to prevent natural catastrophes that would kill a larger number of people, we might conjecture that we have an even stronger moral reason to prevent catastrophes that would kill the entire human population. Many different normative perspectives thus concur in their support for existential-risk mitigation, although the degree of badness involved in an existential catastrophe and the priority that existential-risk mitigation should have in our moral economy may vary substantially among different moral theories. 26 Note, however, that it is on no account a conceptual truth that existential catastrophes are bad or that reducing existential risk is right. There are possible situations in which the occurrence of one type of existential catastrophe is beneficial-for instance, because it preempts another type of existential catastrophe that would otherwise certainly have occurred and that would have been worse.
Existential risk and normative uncertainty Whereas the first two classes of existential risk (human extinction and permanent stagnation) are specified by purely descriptive criteria, the second two (flawed realisation and subsequent ruination) are defined normatively. This means that the concept of existential risk is in part an evaluative notion. 27 Where normative issues are involved, these issues may be contentious. Population ethics, for instance, is fraught with problems about how to deal with various parameters (such as population size, average wellbeing, thresholds for what counts as a life worth living, inequality, and same vs. different people choices). The evaluation of some scenarios that involve fundamental transformations of human nature is also likely to be contested (Fukuyama, 2002; Glover, 1984; Kass, 2002; Savulescu and Bostrom, 2009) . Yet not all normative issues are controversial. It will be generally agreed, for example, that a future in which a small human population ekes out a miserable existence within a wrecked ecosystem in the presence of great but unused technological capabilities would count as a dismally flawed realisation of humanity's potential and would constitute an existential catastrophe if not reversed. There will be some types of putative existential risks for which the main uncertainty is normative and others where the main uncertainty is positive. With regard to positive, or descriptive, uncertainty, we saw earlier that if something is not known to be objectively safe, it is risky, at least in the subjective sense relevant to decision making. We can make a parallel move with regard to normative uncertainty. Suppose that some event X would reduce biodiversity. Suppose (for the sake of illustration) it is known that X would have no other significant consequences and that the reduced biodiversity would not affect humans or any other morally considerable beings. Now, we may be uncertain whether biodiversity has final value (is valuable 'for its own sake'). Hence we may be uncertain about whether or not X would really be bad. But we can say that if we are not sure whether or not X would really be bad (but we are sure that X would not be good), then X is bad in at least the subjective sense relevant to decision making. That is to say, we have reason to prefer that X not occur and perhaps reason to take action to prevent X. Exactly how one should take into account fundamental moral uncertainty is an open question, but that one should do so is clear . We can thus include as existential risks situations in which we know what will happen and we reasonably judge that what will happen might be existentially bad-even when there would in fact be nothing bad about the outcome. We can highlight one consequence of this: Suppose a fully reliable genie offered to grant humanity any wish it might have for its future. Then-even if we could all agree on one such future-we would still face one more potentially serious existential risk: namely, that of choosing unwisely and selecting a future dismally flawed despite appearing, at the moment of our choice, to be the most desirable of all possible futures.
Keeping our options alive These reflections on moral uncertainty suggest an alternative, complementary way of looking at existential risk; they also suggest a new way of thinking about the ideal of sustainability. Let me elaborate. Our present understanding of axiology might well be confused. We may not now know-at least not in concrete detail-what outcomes would count as a big win for humanity; we might not even yet be able to imagine the best ends of our journey. If we are indeed profoundly uncertain about our ultimate aims, then we should recognise that there is a great option value in preserving-and ideally improving-our ability to recognise value and to steer the future accordingly. Ensuring that there will be a future version of humanity with great powers and a propensity to use them wisely is plausibly the best way available to us to increase the probability that the future will contain a lot of value. To do this, we must prevent any existential catastrophe. We thus want to reach a state in which we have (1) far greater intelligence, knowledge, and sounder judgment than we currently do; (2) far greater ability to solve global-coordination problems; (3) far greater technological capabilities and physical resources; and such that (4) our values and preferences are not corrupted in the process of getting there (but rather, if possible, improved). Factors 2 and 3 expand the option set available to humanity. Factor 1 increases humanity's ability to predict the outcomes of the available options and understand what each outcome would entail in terms of the realisation of human values. Factor 4, finally, makes humanity more likely to want to realise human values. How we, from our current situation, might best achieve these ends is not obvious (Figure 5 ). While we ultimately need more technology, insight, and coordination, it is not clear that the shortest path to the goal is the best one. It could turn out, for example, that attaining certain technological capabilities before attaining sufficient insight and coordination invariably spells doom for a civilisation. One can readily imagine a class of existentialcatastrophe scenarios in which some technology is discovered that puts immense destructive power into the hands of a large number of individuals. If there is no effective defense against this destructive power, and no way to prevent individuals from having access to it, then civilisation cannot last, since in a sufficiently large population there are bound to be some individuals who will use any destructive power available to them. The discovery of the atomic bomb could have turned out to be like this, except for the fortunate fact that the construction of nuclear weapons requires a special ingredient-weapons-grade fissile material-that is rare and expensive to manufacture. Even so, if we continually sample from the urn of possible technological discoveries before implementing effective means of global coordination, surveil-lance, and ⁄ or restriction of potentially hazardous information, then we risk eventually drawing a black ball: an easy-to-make intervention that causes extremely widespread harm and against which effective defense is infeasible. 28 We should perhaps therefore not seek directly to approximate some state that is 'sustainable' in the sense that we could remain in it for some time. Rather, we should focus on getting onto a developmental trajectory that offers a high probability of avoiding existential catastrophe. In other words, our focus should be on maximising the chances that we will someday attain technological maturity in a way that is not dismally and irremediably flawed. Conditional on that attainment, we have a good chance of realising our astronomical axiological potential. To illustrate this point, consider the following analogy. When a rocket stands on the launch pad, it is in a fairly sustainable state. It could remain in its current position for a long time, although it would eventually be destroyed by wind and weather. Another sustainable place for the rocket is in space, where it can travel weightless for a very long time. But when the rocket is in midair, it is in an unsustainable, transitory state: Its engines are blazing and it will soon run out of fuel. Returning the rocket to a sustainable state is desirable, but this does not mean that any way to render its state Sources: Author. Notes: An ideal situation might be one in which we have a very high level of technology, excellent global coordination, and great insight into how our capabilities can be used. It does not follow that getting any amount of additional technology, coordination, or insight is always good for us. Perhaps it is essential that our growth along different dimensions hew to some particular scheme in order for our development to follow a trajectory through the state space that eventually reaches the desired region. more sustainable is desirable. For example, reducing its energy consumption so that it just barely manages to hold stationary might make its state more sustainable in the sense that it can remain in one place for longer; however, when its fuel runs out the rocket will crash to the ground. The best policy for a rocket in midair is, rather, to maintain enough thrust to escape Earth's gravitational field: a strategy that involves entering a less sustainable state (consuming fuel faster) in order to later achieve the most desirable sustainable state. That is, instead of seeking to approximate a sustainable state, it should pursue a sustainable trajectory. The present human condition is likewise a transitional state. Like the rocket in our analogy, humanity needs to pursue a sustainable trajectory, one that will minimise the risk of existential catastrophe. 29 But unlike the problem of determining the optimum rate of fuel consumption in a rocket, the problem of how to minimise existential risk has no known solution.
Outlook We have seen that reducing existential risk emerges as a dominant priority in many aggregative consequentialist moral theories (and as a very important concern in many other moral theories). The concept of existential risk can thus help the morally or altruistically motivated to identify actions that have the highest expected value. In particular, given certain assumptions, the problem of making the right decision simplifies to that of following the maxipok principle.
Barriers to thought and action In light of this result, which suggests that there may be a very high value in studying existential risks and in analysing potential mitigation strategies, it is striking how little academic attention these issues have received compared to other topics that are less important (Figure 6 ). 30 Many factors conspire against the study and mitigation of existential risks. Research is perhaps inhibited by the multidisciplinary nature of the problem, but also by deeper epistemological issues. The biggest existential risks are not amenable to plug-and-play scientific research methodologies. Furthermore, there are unresolved foundational issues, particularly concerning observation selection theory and population ethics, which are crucial to the assessment of existential risk; and these theoretical difficulties are compounded by psychological factors that make it difficult to think clearly about issues such as the end of humanity. 31 If more resources were to be made available to research existential risks, there is a danger that they would flow, with excessive preponderance, to the relatively minor risks that are easier for some established disciplinary community to study using familiar methods, at the expense of far more important risk areas-machine superintelligence, advanced molecular nanotechnology, totalitarianism, risks related to the simulation-hypothesis, or future advances in synthetic biology-which would require a more inconvenient shift in research focus. Another plausible diversion is that research would mainly be directed at global catastrophic risks that involve little or no existential risk. Mitigation of existential risk is hampered by a lack of understanding, but also by a deficit of motivation. Existential risk mitigation is a global public good (i.e., non-excludable and non-rivalrous), and economic theory suggests that such goods tend to be undersupplied by the market, since each producer of existential safety (even if the producer is a large nation) could capture only a small portion of the value (Feldman, 1980; Kaul, 1999) . In fact, the situation is worse than is the case with many other global public goods in that existential risk reduction is a strongly transgenerational (in fact, pan-generational) public good: even a world state may capture only a small fraction of the benefits-those accruing to currently existing people. The quadrillions of happy people who may come to exist in the future if we avoid existential catastrophe would be willing to pay the present generation astronomical sums in return for a slight increase in our efforts to preserve humanity's future, but the mutually beneficial trade is unfortunately prevented by the obvious transaction difficulties. Moral motivations, too, may fail to measure up to the magnitude of what is at stake. The scope insensitivity of our moral sentiments is likely to be especially pronounced when very large numbers are involved: Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking-enter into a 'separate magisterium'. People who would never dream of hurting a child hear of an existential risk, and say, 'Well, maybe the human species doesn't really deserve to survive'. (Yudkowsky, 2008, p. 114) Existential risk requires a proactive approach. The reactive approach-to observe what happens, limit damages, and then implement improved mechanisms to reduce the probability of a repeat occurrence-does not work when there is no opportunity to learn from failure. Instead, we must anticipate emerging dangers, mobilise support for action against hypothetical future harm, and get our precautions sufficiently right the first time. That is a tall order. Few institutions are capable of operating consistently at such a level of effective rationality, and attempts to imitate such proactive behaviour within less perfect institutions can easily backfire. Speculative riskmongering could be exploited to rationalise self-serving aggressive action, expansion of costly and potentially oppressive security bureaucracies, or restrictions of civil liberties that keep societies free and sane. The result of false approximations to the rational ideal could easily be a net increase in existential risk. 32 Multidisciplinary and epistemological challenges, academic distractions and diversions, cognitive biases, freerider problems, moral lethargy and scope-insensitivity, institutional incompetence, and the political exploitation of unquantifiable threats are thus some of the barriers to effective mitigation. To these we can add the difficulty of achieving required levels of global cooperation. While some existential risks can be tackled unilaterally-any state with a space industry could build a global defense against asteroid impacts-other risks require a joint venture between many states. Management of the global climate may require buy-in by an overwhelming majority of industrialised and industrialising nations. Avoidance of arms races and relinquishment of dangerous directions of technological research may require that all States join the effort, since a single defector could annul any benefits of collaboration. Some future dangers might even require that each State monitor and regulate every significant group or individual within its territory. 33
Grounds for optimism? A formidable array of obstacles thus clouds the prospect of a clear-headed and effective response to existential risks confronting humanity. Lest the cause be deemed hopeless, we should also take note of some encouraging considerations. We may note, first, that many of the key concepts and ideas are quite new. 34 Before the conceptual and theoretical foundations were in place, support for efforts to research and mitigate existential risk could not build. In many instances, the underlying scientific, technological, and methodological ideas needed for studying existential risks in a meaningful way have also only recently become available. The delayed start helps explain the still primitive state of the art. It is arguably only since the detonation of the first atomic bomb in 1945, and the subsequent nuclear buildup during the Cold War, that any significant naturalistic (i.e., non-supernatural) existential risks have arisen-at least if we count only risks over which human beings have some influence. 35 Most of the really big existential risks still seem to lie many years into the future. Until recently, therefore, there may have been relatively little need to think about existential risk in general and few opportunities for mitigation even if such thinking had taken place. Public awareness of the global impacts of human activities appears to be increasing. Systems, processes, and risks are studied today from a global perspective by many scholars-environmental scientists, economists, epidemiologists, demographers, and others. Problems such as climate change, cross-border terrorism, and international financial crises direct attention to global interdependency and threats to the global system. The idea of risk in general seems to have risen in prominence. 36 Given these advances in knowledge, methods, and attitudes, the conditions for securing for existential risks the scrutiny they deserve are unprecedentedly propitious. Opportunities for action may also proliferate. As noted, some mitigation projects can be undertaken unilaterally, and one may expect more such projects as the world becomes richer. Other mitigation projects require wider coordination; in many cases, global coordination. Here, too, some trend lines seem to point to this becoming more feasible over time. There is a long-term historic trend toward increasing scope of political integration-from hunter-gatherer bands to chiefdoms, city states, nation states, and now multinational organisations, regional alliances, various international governance structures, and other aspects of globalisation (Wright, 1999) . Extrapolation of this trend might seem to indicate the eventual creation of a singleton (Bostrom, 2006) . It is also possible that some of the global movements that emerged over the last half century-in particular the peace movement, the environmentalist movement, and various global justice and human-rights movements-will increasingly take on board more generalised concerns about existential risk. 37 Furthermore, to the extent that existential-risk mitigation really is a most deserving cause, one may expect that general improvements in society's ability to recognise and act on important truths will differentially funnel resources into existential-risk mitigation. General improvements of this kind might come from many sources, including developments in educational techniques and online collaboration tools, institutional innovations such as prediction markets, advances in science and philosophy, spread of rationality culture, and biological cognitive enhancement. Finally, it is possible that the cause will at some point receive a boost from the occurrence of a major (nonexistential) catastrophe that underscores the precariousness of the present human condition. That would, needless to say, be the worst possible way for our minds to be concentrated-yet one which, in a multidecadal time frame, must be accorded a non-negligible probability of occurrence. 38 Note 1. One informal poll among mainly academic experts on various global catastrophic risks gave a median estimate of 19 per cent probability that the human species will go extinct before the end of this century . These respondents' views are not necessarily representative of the wider expert community. The UK's influential Stern Review on the Economics of Climate Change ( 2006 ) used an extinction probability of 0.1 per cent per year in calculating an effective discount rate. This is equivalent to assuming a 9.5 per cent risk of human extinction within the next hundred years (UK Treasury 2006, Chapter 2, Technical Appendix, p. 47). 2. The strength of this consideration is to some extent blunted by the possibility of observation selection effects casting an 'anthropic shadow' on available evidence (Cirkovic, Sandberg and Bostrom, 2010) . 3. See Smil, 2008. 4 . Probability is thus indexed to time. Quantities that depend on probability, such as the seriousness of a risk, can vary over time as new information becomes available. 5. There is ample historical evidence that apparently sound scientific analyses are sometimes crucially flawed. 6. As indicated in the figure, the axes can be extended to encompass conceptually possible risks that are even more extreme. In particular, pan-generational risks can contain a subclass of risks so destructive that their realisation would not only affect or pre-empt future human generations but would also destroy the potential of the part of the universe that lies in our future light cone to produce intelligent or self-aware beings (cosmic scope). Further, according to some theories of value there can be states of being that are much worse than nonexistence or death (e.g., horrible incurable diseases), so one could in principle extend the x-axis as well (hellish severity). We will not explore these conceptual possibilities in this article. 7. This is based on an accelerating universe with a maximal reachable co-moving distance of 4.74 Gpc, a baryonic matter density of 4.55 10 )28 kg ⁄ m 3 , a luminosity ratio of stars 100, and 1 planet per 1,000 stars being habitable by 1 billion humans for 1 billion years (Gott et al., 2005; Heyl, 2005) . Obviously the values of the last three parameters are debatable, but the astronomical size of the conclusion is little affected by a few orders-of-magnitude change. 8. This uses an estimate by the late futurist Robert Bradbury that a star can power 10 42 operations per second using efficient computers built with advanced nanotechnology. Further, it assumes (along with the cosmological estimates mentioned in the previous footnote) that the human brain has a processing power of 10 17 operations per second and that stars on average last 5 billion years. It does not assume any new star formation. See also (Cirkovic, 2004 ). 9. For example, if all mass-energy in the accessible universe is saved until the cosmic microwave background temperature ceases to decline (due to the constant horizon temperature of 10 -29 K) and is then used for computation, this would allow up to 10 121 thermodynamically irreversible computations (Krauss and Starkman, 2000) . See also (Cirkovic and Radujkov, 2001) . 10. We should stress, however, that there are important unresolved issues in aggregative consequentialism-in particular, in relation to infinite values and extremely small chances . We will not discuss these issues here, but in section 5 we will discuss the normative status of the concept of existential risk from some other perspectives. 11. Following John Rawls, the term 'maximin' is used in a different sense in welfare economics, to denote the principle that (given certain constraints) we ought to opt for the state that maximises the expectation of the worst-off classes (Rawls, 1971) . This version of the principle is not necessarily affected by the remarks in the text. 12. One can refer to this more precisely as 'early' or 'premature' human extinction. Note that humanity can go extinct without instantiating this category if humanity achieves its capability potential and then goes extinct. 13. We may here take 'intelligent' to mean capable of developing language, science, technology, and cumulative culture. 14. It is not required that a technologically mature civilisation actually deploy all of these technologies; it is sufficient that they be available to it, in the sense that the civilisation could easily and quickly develop and deploy them should it decide to do so. Thus, a sufficiently powerful superintelligent-machine civilisation that could rapidly invent and implement these and other relevant technologies would already count as technologically mature. 15. Not strictly never-ending, of course, but a sequence of cycles that goes on for a very long time and ends with human extinction without technological maturity having ever been attained. 16. An unrecovered collapse scenario might postulate that some critical resource for recovery is permanently destroyed, or that the human gene pool irreversibly degenerates, or perhaps that some discovery is made that enables tiny groups to cause such immense destruction that they can bring down civilisation and that the knowledge of this discovery cannot be eradicated. 17. Improved governance techniques, such as ubiquitous surveillance and neurochemical manipulation, might cement such a regime's hold on power to the extent of making its overthrow impossible. 18. Another difficulty for the recurring-collapse hypothesis is to account for the fact that we are in the first technological cycle here on Earth. If it is common for there to be many cycles of collapse and recovery (with similar population sizes) then why do we find ourselves in cycle #1? This kind of anthropic consideration might suggest that extinction or transformation is more likely than one would naively suppose. 19. Even the threat of a war that never erupts could result in much waste, in terms of expenditures on arms and foregone opportunities for collaboration. 20. It is also one reason why permanent stagnation is an existential risk, although permanent stagnation might also preclude survival beyond the time when the Earth becomes uninhabitable, perhaps around a billion years from now due to increasing solar luminosity (Schroder and Smith, 2008) . 21. One potentially significant qualification is that the time to reach the maximum attainable resource base could be shorter if intelligent opposition (such as from extraterrestrial civilisations) emerges that hinders our cosmic expansion. 22. There is a minimum entropy cost associated with the erasure of one bit of information, a cost which declines with temperature. 23. We might also have responsibilities to nonhuman beings, such as terrestrial (and possible extraterrestrial) animals. Although we are not currently doing much to help them, we have the opportunity to do so in the future. If rendering aid to suffering nonhuman animals in the natural environment is an important value, then achieving technological maturity in a manner that fails to produce such aid could count as flawed realisation. See McMahan, 2010; Pearce, 2004. 24. There could, from a theological perspective, possibly be a special category of existential risks with a different moral status: catastrophes or apocalypses brought about by divine agency, perhaps as just punishment for our sins. A believer might judge such an event as, on balance, good. However, it seems implausible that mere mortals would be able to thwart God if He really wanted to flatten us, so any physical countermeasures we implement against existential risk would presumably be effective only against natural and anthropogenic existential risks, and we might have no reason to hold back on our naturalistic-risk mitigation efforts for fear of frustrating designs. 25. Although omnicide would at least be impartial, by contrast to genocide which is often racist or nationalist. 26. For example, James Lenman has argued that it is largely a matter of indifference when humankind goes extinct, at least if it does not happen too soon (Lenman, 2002) . 27. In this respect, the concept of existential risk is similar to concepts such as 'democracy' and 'efficient labor market'. A black hole, or a jar of sterile pebbles, is neither a democracy nor an efficient labour market, and we can see that this is so without having to make any normative judgment; yet there may be other objects that cannot be classified as instances or noninstances of these concepts without taking a stand (at least implicitly) on some normative issue. 28. Of course, achieving effective global coordination sufficiently strong to continually monitor the entire world population or indefinitely censor any information deemed hazardous by some authority would (at least in the absence of adequate safeguards) create its own very significant existential risks, such as risks of permanent stagnation or flawed realisation under some repressive totalitarian regime. 29. Ideally, it would do this while achieving the means to commit collective euthanasia, in the fairly unlikely case that, after long and careful collective deliberation, we should decide that a quick end is preferable to continued existence. That might, however, be a beneficial capability only if we had first attained sufficient wisdom not to exercise it erroneously. We should emphasise the need for continued philosophical deliberation and fostering of conditions that would help us find the truth about central normative issues eventually-as well as the need to avoid irrevocable mistakes in the meantime. 30. Scholarly treatments of existential risk per se, or even of human-extinction risk, are rare (e.g., Leslie, 1996; Matheny, 2007; Wells, 2009) . However, a great deal of academic literature bears on individual existential risks or on other spe-cific issues relevant to many existential risks (a few of which are cited throughout this article). In addition, some recent works take a broad look at global catastrophic risks, though without restricting the focus to existential risks (e.g., Bostrom and Cirkovic, 2008; Diamond, 2006; Homer-Dixon, 2007; Posner, 2004; Sunstein, 2009; World Economic Forum, 2011) . 31. Relevant issues related to observation selection effects include, among others, the Carter-Leslie doomsday argument, the simulation argument, and 'great filter' arguments; see Bostrom, , 2008 Carter, 1983; Cirkovic et al, 2010; Leslie, 1996; Tegmark and Bostrom, 2005. For some relevant issues in moral philosophy, see, e.g., For a review of the cognitive-biases literature as it relates to catastrophic risk, see Yudkowsky, 2008. 32 . A possible way around this problem involves trying to hold the total amount of risk concern roughly constant while allocating a greater proportion of the pot of 'fear tokens' or 'concern chips' to existential risk. Thus, one might advocate that as we become more concerned about existential risk, we ought simultaneously to become less concerned about smaller risks, such as a few thousand people dying in the odd terrorist attack or natural disaster. 33. Such internal control within States will become more feasible with advances in surveillance technology. As noted, preventing States with such capabilities from becoming oppressive will present its own set of challenges. 34. Including the very notion of existential risk . 35. One could argue that pandemics and close encounters with comets, which occurred repeatedly in human history and elicited strong end-of-the-world forebodings, should count as large early existential risks. Given the limited information then available, it might not have been unreasonable for contemporary observers to assign a significant probability to the end being nigh. Religious doomsday scenarios could also be considered; perhaps it was not unreasonable to believe, on the basis of the then-available evidence, that these risks were real and, moreover, that they could be mitigated through such actions as repentance, prayer, sacrificial offerings, persecution of witches or infidels, and so forth. The first clear-cut scientific existential risk might have arisen with the development of the atomic bomb. Robert Oppenheimer, the scientific leader of the Manhattan Project, ordered a study ahead of the Trinity test to determine whether a nuclear detonation would cause a selfpropagating chain of nuclear reactions in Earth's atmosphere. The resulting report may represent the first quantitative risk assessment of human extinction (Manhattan Project, 1946) . 36. Some sociologists have gone so far as to fixate on risk as a central thematic of our age; see, e.g., Beck, 1999. 37 . Many peace activists opposing the nuclear arms race during the Cold War explicitly fretted about a nuclear Armageddon that could allegedly end all human life. More recently some environmentalists sounding the alarm about global warming use similarly apocalyptic language. It is unclear, however, to what extent the perceived possibility of a species-ending outcome has been a major motivating force in these cases. Perhaps the amount of concern would be roughly the same even in the face of an iron-clad guarantee that any catastrophe would stop short of human extinction. 38. I am grateful for comments and discussion to Seth Baum, Nick Beckstead, Milan Cirkovic, Olle Häggström, Sara Lippincott, Gaverick Matheny, Toby Ord, Derek Parfit, Martin Rees, Rebecca Roache, Anders Sandberg, and Carl Shulman. Figure 1 . 1 Figure 1. Meta-level uncertainty.
Figure 2 . 2 Figure 2. Qualitative risk categories.
Figure 3 . 3 Figure 3. World population over the last century.
Figure 4 . 4 Figure 4. Collapse recurring indefinitely?
Figure 5 . 5 Figure 5. The challenge of finding a safe path.
Figure 6 . 6 Figure 6. Academic prioritisation.
Table 1 . 1 Classes of existential risk Human extinction Humanity goes extinct prematurely, i.e., before reaching technological maturity. 12 Permanent stagnation Humanity survives but never reaches technological maturity. Subclasses: unrecovered collapse, plateauing, recurrent collapse Flawed realisation Humanity reaches technological maturity but in a way that is dismally and irremediably flawed. Subclasses: unconsummated realisation, ephemeral realisation Subsequent ruination Humanity reaches technological maturity in a way that gives good future prospects, yet subsequent developments cause the permanent ruination of those prospects. Source: Author.
ª 2013 University of Durham and John Wiley & Sons, Ltd. Global Policy (2013) 4:1
Global Policy (2013) 4:1 ª 2013 University of Durham and John Wiley & Sons, Ltd.
|
eb77a8d0-81cf-45ab-a102-8892cfa93067
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Primates vs birds: Is one brain architecture better than the other?
*By Tegan McCaslin, 28 February 2019*
The boring answer to that question is, “Yes, birds.” But that’s only because birds can pack more neurons into a walnut-sized brain than a monkey with a brain four times that size. So let’s forget about brain volume for a second and ask the really interesting question: neuron per neuron, who’s coming out ahead?
You might wonder why I picked birds and primates instead of, say, dogs and cats, or mice and elephants, or any other pair of distinct animals. But check out this mouse brain:
[](http://aiimpacts.org/wp-content/uploads/2019/02/mouse-brain-image.png)[By [Mamunur Rashid – Own work, CC BY 4.0](https://commons.wikimedia.org/w/index.php?curid=64106364)]
See how, on the outside of the lobe (the part closer to the upper righthand corner), you can pick out a series of stripes in a neat little row? Those stripes are the six layers of the neocortex, a specifically mammalian invention—all mammals have it, and no one else does. People have been pointing to this structure to explain why we’re so much better than fish since the [*scala naturae*](https://en.wikipedia.org/wiki/Great_chain_of_being) fell out of favor.
And that would be a pretty convenient story if birds hadn’t come along and messed the whole picture up. If you look at a similar cross section of a bird’s brain, it kind of just looks like a structureless blob. For a long time, comparative neuroanatomists thought birds must be at a more primitive stage of brain evolution, with no cortex but *huge* basal ganglia (the bit that we have sitting under our own fancy cortex). But we’ve since realized that this “lower” structure is actually a totally different, independently-evolved form of cortex, which seems to control all the same areas of behavior that mammalian cortex does. In fact, birds have substantially more of their brain neurons concentrated in their cortices than we mammals have in ours.
Alright, so it’s not that surprising that another form of cortical tissue exists in nature. But could it really work as well as *ours*? Surprisingly, no one has really tried to figure this out before.
If, for instance, primates were head and shoulders above birds, that might mean that intelligent brains aren’t *just* energetically expensive (in terms of the energy required for developing and operating neurons), they’re also exceptionally tricky to get right from a design standpoint. Of course, if bird and primate architectures worked equally well, that doesn’t mean brains are easy to get right–it would just mean that evolution happened to stumble into two independent solutions around 100 million years ago. Still, that would imply substantially more flexibility in neural tissue architectures than the world in which one tissue architecture outstripped all others.
Answering the question of birds vs. primates conclusively would be an enormous undertaking (and to be honest, answering it inconclusively was a pretty big pain already), so instead I focused on a very small sample of species in a narrow range of brain sizes and tried to get a really good sense of how smart those animals in particular were, relative to one another. I also got 80+ other people (non-experts) to look at the behavioral repertoire of these animals and rank how cognitively demanding they sounded.
With my methodology of just digging through all of the behavioral literature I could find on these species, full and representative coverage of their entire behavioral repertoire was a major challenge, and I think it fell well short of adequate in some categories. This can be a big problem if an animal only displays its full cognitive capacities in one or a few domains, and worse, you might not even know which those are. I think this wasn’t as big an issue with the species I studied as it could have been, since we have pretty good priors with respect to what selective pressures drove cognitive development in the smartest animals (like primates and parrots). Plus, scientists are much more likely to study the most complex and interesting behaviors, and those are very often the ones that display the most intelligence.
One of the behaviors scientists are really keen on is tool use. Our survey participants seemed to like it too, because they rated its importance higher than any other category, and it ended up being the most discriminatory behavior, too–neither the small-brained monkey nor the small-brained parrot had recorded examples of tool use in the wild, while both of the larger-brained animals did.
In the end, people didn’t seem to think the two primate species I included acted smarter than the two bird species or vice versa, but did think the larger-brained animals acted smarter than the smaller-brained animals. The fact that this surveying method both confirmed my intuitions and didn’t seem *totally* overwhelmed by noise kind of impressed me, because who knew you could just ask a bunch of random people to look at some animal behaviors and have them kind of agree on what the smartest were? That said, we didn’t validate it against anything, and even though we have reasons to suspect this method works as intended (see the full article), how well and whether this was a good implementation aren’t clear.
So this is all pretty cool, but even if we could prove definitively that macaws and squirrel monkeys are smarter than grey parrots and owl monkeys, it’s not a knock-down argument for architecture space being chock full of feasible designs, or even for birds and primates having identical per-neuron cognitive capacity. It’s mostly just a demonstration that the old self-flattering dogma of primate exceptionalism doesn’t really hold water. But it also points to an interesting trend: instead of trying to tweak a bunch of parameters in brains to squeeze the best possible performance out of a given size, evolution seems to have gotten a lot of mileage out of just throwing more neurons at the problem.
There’s a lot more dirt [here](https://aiimpacts.org/investigation-into-the-relationship-between-neuron-count-and-intelligence-across-differing-cortical-architectures/), in the full analysis.
|
e9bf40a4-41b4-4144-b7e6-2bc0ea6e6794
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Clarifying Alignment Fundamentals Through the Lens of Ontology
Meta: This post grapples with the fundamentals of alignment, which are simultaneously well-trodden ground and quite confusing. I've spent enough time with this post that I no longer have a sense for if the topics are trivial or insightful – I can only imagine that will depend on the reader.
Introduction
Ontology is a common term in alignment thought, but it’s not always crisply clear what is meant by the word. I believe that understanding it well can shed light on some of the more central and confusing aspects of alignment. This is my attempt to clarify the subject – it is an exploration into the dynamics of ontology, how beings navigate it, and what this says about alignment.
In Part 1, I present a formalization of ontology which centrally focuses on how the elements of an ontology, concepts, are grounded – how they are ultimately defined. Personal ontologies are grounded in the perceptions of one being, and universal ontologies are grounded in the full expanse of information that describes the universe. Personal and universal ontology are analogous to map and territory.
Part 2 of the post describes correspondences between concepts in personal and universal ontology, which form the backbone of how beings relate to the world, and whose failure gives rise to some of the deep challenges in alignment.
Part 3 describes those challenges directly and presents a definition of alignment that adds precision to Christiano's characterization of intent alignment.
Much of this post aims at building intuitions for the various topics around ontology and correspondence. The most concrete portion of this post is part 3 and its technical definition of alignment; some readers may benefit from skipping directly to it if the topics are familiar or to get a sense for where things are going.
Background on Ontology
There are multiple senses in which the word “ontology” is used, some of them rather distant from alignment thought. The original usage, ontology as a philosophica
|
defdea5d-daf3-4144-8e59-9ff0af185503
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Solving the Mechanistic Interpretability challenges: EIS VII Challenge 1
We solved the first (edit: and second) Mechanistic Interpretability challenge that Stephen Casper posed in EIS VII. We spent the last Alignment Jam hackathon attempting to solve the two challenges presented there, and present our (confirmed) solution to the CNN challenge here. We present a write-up of our work on the Transformer challenge in this follow-up post.
Stefan and Marius submitted an early version of this work at the end of the Hackathon, and Stefan added Intervention and Causal Scrubbing tests to the final write-up. A notebook reproducing all results is provided here (requires no GPU but ~13 GB RAM).
The challenges each provide a pre-trained network, and the task is to reverse engineer the network as well as to infer the labeling function used for training. The first challenge network is a MNIST CNN that takes MNIST images and outputs labels. The hints given are that [1] the labels are binary, [2] the test set accuracy is 95.58%, [3] that the (secret) labeling function is simple, and [4] this image:
Hint 4: clue_image
The MNIST network consists of
* 2 Convolutional layers (Conv -> ReLU -> Dropout -> Pool)x2
* 2 Fully connected layers (fc1[400,200] -> ReLU -> fc2[200,2])
and we can access the data (torchvision.datasets.MNIST) but not the ground truth labels.
Spoilers ahead!
----------------------------------------
Summary of our solution (TL,DR)
1. The inputs are labelled based on similarity with a 1 versus similarity with an inverted 1 ("anti-1"). If the difference is large (either clearly 1 or clearly anti-1) the image is labeled as class 0, otherwise the image is labeled as class 0. Specifically, the template for 1 seems to be is the given hint (clue_image), and the "anti-1" is 1-clue_image.
2. The similarity is measured as the sum over the element-wise product of the image matrices (or equivalently dot product of flattened image arrays).
Then the the ~17k most similar images to “1” and ~14k most similar images to “anti 1” are
|
40de7333-d689-482c-8a00-7956c7b1268d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Crosspost] AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results.
This is a summary of a follow-up study conducted by the Existential Risk Observatory, which delves into a greater number media items. To access our previous study, please follow this link. The data collected will be presented in two separate posts. The first post, which is the current one, has two parts. The first part examines the key indicators used in the previous research, such as "Human Extinction Events" and "Human Extinction Percentage," along with a new key indicator called "Concern Level." The Concern Level indicator assesses participants' level of concern about AI existential risk on a scale of 0 to 10 before and after the intervention. The second part analyzes the changes in public awareness about AI existential risk over time. It also explores the connection between the effectiveness of different media formats, namely articles and videos, and their length in raising awareness. In addition, it investigates how trust levels are related to the effectiveness of media sources in increasing public awareness of AI existential risk. In the second post, the research covers a new aspect of this study: participants' opinions on an AI moratorium and their likelihood of voting for it.
PART 1: Effectiveness per media item
This research aimed to evaluate the effectiveness of AI existential risk communication in increasing awareness of the potential risks posed by AI to human extinction.
Research Objectives: The objective of the study was to determine the effectiveness of AI existential risk communication in raising public awareness. This was done by examining the changes in participants' views on the likelihood and ranking of AI as a potential cause of extinction before and after the intervention. Furthermore, the study evaluated the difference in the level of concern of participants before and after the intervention.
Measurements and Operationalization: Three primary measurements - "Human Extinction Events," "Human Extinction Percentage," and "Concern Level" - wer
|
065f6b72-bd14-44ca-8d15-afea5c738d77
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
When can a mimic surprise you? Why generative models handle seemingly ill-posed problems
*Thanks to Chris Leong and Nora Belrose for their feedback. This is meant to be part of an entry to the* [*Future Fund AI Worldview Competition*](https://ftxfuturefund.org/announcing-the-future-funds-ai-worldview-prize/)*, but a later post is intended to address the competition questions head on.*
In this post, I explore *mimics.* Mimics are what you get when you join a simulator with a generator. Examples are language models that learn to predict text sequences (the simulator), and generate samples of text sequences from their predictions (the generator). A number of AI safety researchers have mentioned that mimics [seem to be](https://www.lesswrong.com/posts/JqnkeqaPseTgxLgEL/conditioning-generative-models-for-alignment) safer [than "traditional"](https://www.alignmentforum.org/posts/dWJNFHnC4bkdbovug/training-goals-for-large-language-models) AI architectures like reinforcement learners, with the proposed reason for this often being that mimics are less "agentic" or "goal-driven" than traditional architecture. moire's [Simulators](https://generative.ink/posts/simulators/#a-note-on-gans) is a particularly thorough overview that makes a similar point.
In this post, I argue that a key feature of mimics is unrelated to their "agentiness": someone who can forecast a mimic's training data can also forecast a mimic's behaviour. I call this phenomenon *synchronisation. S*ynchronisation is possible even when the operator can only forecast some crude features of the training sequence.
Certain methods for fine-tuning mimics allow mimics to be optimised for certain tasks while staying synchronised with the operator. This enables mimics to be controlled in a manner that maintains synchronisation and consequently remain easy to predict.
However, some kinds of objectives do not facilitate synchronised control of mimics. If an operator fine-tunes a mimic to control some feature of the world over which it wouldn't normally have complete control, then the operator should generally expect the mimic's output to diverge from forecasts based on the training data. In practice, the consequences of this divergence is reminiscent of failures due to [Goodhart's law](https://en.wikipedia.org/wiki/Goodhart%27s_law).
The extremely brief summary of this post is:
* Idealised mimics do what you expect them to when you're trying to control features of their output
* Idealised mimics can surprise you when you're trying to control features of the world
Safety relevance
================
Suppose you've been reading books all your life, and you have a pretty good estimate of how likely a book is to actually be good (by your lights) given it gets a 4.8 star rating on Amazon - and, being a good Bayesian, you represent this with a conditional probability F(Actuallygood|4.8stars).mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
. One of the key claims of this article is that, in some situations, it is possible to fine-tune a mimic so that it produces 4.8 star books in such a way that its sampling distribution Q(Actuallygood|4.8stars) approximates your own subjective probability F(Actuallygood|4.8stars).
This provides a method for dealing with concerns like those in [You get what you measure](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_I__You_get_what_you_measure); here is a method for fine-tuning on the easy-to measure thing, and getting the hard-to-measure latent just as much as you expect you would. A number of stars have to align in order for this to happen, but it is not an inordinately large number of stars. Furthermore, it may be possible to say quite a lot about when this might fail and by how much it might fail.
So the first safety relevant point is: perhaps there is a solution to this problem.
A broader question, the one that initially led me down this path, is whether or not safe AI is incentive compatible. If safe AI is incentive compatible, then if you do a good job of building AI that simply does what you want it to, you also do a good job of building safe AI. If safe AI is incentive incompatible, then you have to make trade-offs between building AI that simply does what you want and ensuring safety.
There's a narrow question one can ask in this regard. As I explore in this article, fine-tuning mimics often involves a regularising penalty that ensures the result is close in distribution to the original mimic. Granting, for argument's sake, that this penalty makes a system safer, we can ask: is the size of the penalty limited by performance or safety? I perform a microscopic literature review here and come up with the answer that it seems to be more often limited by performance. While today's AI systems are only weakly relevant to future AI systems, they are still a little relevant, and it might be worthwhile to interrogate this question more comprehensively.
There's also a broader question that I think is relevant: is it easier to solve control problems or [hide them](https://www.lesswrong.com/posts/xFotXGEotcKouifky/worlds-where-iterative-design-fails)? If it is easier to solve control problems, then I think our world looks more incentive compatible; if it is easier to hide them then I think it looks more incentive incompatible. If mimics really do solve an important control problem, then I think we have evidence - albeit inconclusive - that we might be in a solving problems world and not a hiding problems one.
I cannot conclusively answer the question of whether mimics *do* solve this control problem, but the "maybe" that I offer is still progress with respect to my own understanding.
Epistemic status
================
I think some of the claims I make here are fairly simple and I have high confidence in them, but they are also not the critical ones. I think the important claim is the one I made in the first paragraph of the previous section: it's possible to fine-tune mimics in a way that approximately matches an operator's conditional probability in important regards, and this is a key feature that enables mimics to address more complex problems than other AI architectures. I'm much less confident in this. I expect that it is almost never true in every last detail, but I give 45% credence to it being roughly true (bearing in mind that I think most theories of this type this should be very unlikely a priori).
There's also a heap I don't understand about the ideas I present here, so this credence is liable to swing wildly at short notice.
Notation reference
==================
Xi (X′i) is a "natural" ("mimicked") random variable taking values in the set X with events X
Zi (Z′i) is a random variable determinstically related to Xi (X′i) taking values in the set Z
Ti (T′i) is a random variable not deterministically related to Xi (X′i), taking values in the set T
P is the mimic's probability distribution
P(Xn|X<n=x<n) is the distribution of Xn learned by the mimic after observing x<n
Q is the sampler argument - Q←P(Xn|X<n=x<n) means that the mimic draws samples according to P(Xn|X<n=x<n)
F is the probability distribution the operator uses to predict both natural and simulated variables. I think of the operator as a skilled but not superhuman forecaster: she has good priors, and updates them sensibly given evidence, but there are many things beyond her ability to forecast
What is a mimic?
================
A mimic is a simulator joined to a generator. It does two things:
1. It learns a probability distribution that predicts elements of a sequence of inputs
2. It can sample this probability distribution to produce outputs of the same type as its inputs
Given a sequence of random variables X1,X2,...,Xn−1=:X<n and an event X<n=x<n, a mimic learns the posterior distribution P(Xn|X<n=x<n). It is also equipped with a sampler, which maps distributions over Xi to random outputs X′i taking values in X. Setting the sampler argument Q←P(Xn|X<n=x<n) produces outputs X′i distributed according to P(Xn|X<n=x<n).
---
***Example***
Consider a mimic that takes a sequence of books X<i as input. It can predict the an as-yet unseen book Xn and it can sample a book X′n, using the same probability distribution for both.
---
Operators can synchronise with mimics
=====================================
The basic insight of this section is: under some conditions, a person(an "operator") who can do a good job of probabilistically forecasting a natural sequence Xi can also do a good job of forecasting a mimicked sequence X′i if the mimic is trained on the same natural sequence. This happens when the operator's and mimic's posterior distributions converge. Informally: if the mimic is good, then to the operator its outputs look just like its training data.
Such convergence can happen even if the operator only observes some coarse features Zi of the mimic's inputs Xi. I do not address the question of whether or not this convergence happens in practically relevant lengths of time for practically implementable machines.
Equal capabilities
------------------
Bayesian reasoners, given the same sequence of data, will under some circumstances ["merge" in their opinions of the future](https://www.jstor.org/stable/2237864) ([pdf](http://dklevine.com/archive/refs4565.pdf)). Specifically, if the operator has a distribution F over the infinite sequence XN and the mimic has a distribution P over the same infinite sequence and for any collection of outcomes C, P(XN∈C)=0 implies F(XN∈C)=0 (that is, F is dominated by P) then the conditionals P(Xn|X<n=x<n) and F(Xn|X<n=x<n) will converge as n→∞ on all inputs except a set Xbad⊂XN with F-probability 0. If P is dominated by F, then this set also has P-probability 0. If F is dominated by P and vice versa, I say they have *identical support.*
If the mimic's sampler is set to Q←P(Xn|X<n=x<n), the operator can set their forecasting distribution F(X′n|X<n=x<n)=F(Xn|X<n=x<n), and by the above convergence this will approximate the mimic's sampling distribution. When the operator's distribution over the natural sequence approximately matches the mimic's sampling distribution, we say that the operator and the mimic are *synchronised*.
Note that the assumption of identical support is, in the general case, not very easy to evaluate, and this is especially true when we don't have any easy way to evaluate P or F.
---
***Example***
If the mimic learns to predict books (in every last detail) from the sequence X<n and the operator learns to predict books (in every last detail) from the same sequence, and their initial distributions assign measure 0 to the same set of long-run events, then the operator's forecasting distribution over "natural" books and the mimic's sampling distribution will eventually come to agree. I call this convergence *synchronisation*.
---
Mimic more capable
------------------
If the operator can predict every detail of Xi just as well as the mimic, then one might wonder what use the mimic is - perhaps we could just sample from the operator's distribution instead. However, the operator may not need to predict every detail of Xi; it may be enough for her to predict some coarse features of each book, and still achieve synchronisation with the mimic. The story here is a bit more complicated, though.
Suppose that instead of observing the "base" sequence Xn, the operator observes some features Zn:=g(Xn). Abusing notation slightly, the "objective" sampling distribution of Z′n:=g(X′n) is given by
P(Z′n|X<n=x<n)=P(Zn|X<n=x<n)
By supposition, the operator does not observe x<n and so they cannot make use of F(Z′n|X<n=x<n) to synchronise with the mimic. Thus the naive argument for synchronisation does not apply. However, we can still say two things:
1. Given similar assumptions of common support, the operator's forecast of the machine's output given Z<n=z<n converges to the distribution of Zn given Z<n=z<n that could in principle be obtained with the mimic's assistance
2. If we make the additional assumption that the sequence Xi is [exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables) with respect to P, then the operator's forecast may converge to the mimic's sampling distribution as normal
These are explained in more detail after the following example.
---
***Example***
Suppose the operator observes two features of many books:
* Genre
* Whether or not the operator enjoys reading it
We say Zi:=(genre(Xi),enjoyability(Xi)). The operator can estimate the probability that they enjoy a book given its genre from their history of books read and the probability that a random book is of a given genre. If the operator accepts that these probability estimates converge to the mimic's sampling distribution of Z′n because she shares inputs with the mimic, then even though the operator cannot write books, she can still say (probabilistically) how well they'll like the books the mimic produces, and what genre they'll be.
---
### 1. Operator forecast merges with the mimic's limited forecast
The mimic, by supposition, defines a collection of conditionals P(Xn|X<n) for every n. Thus we can (in principle) extract a joint distribution P(X[n]) over sequences of length n from the mimic. Actually doing this would be very impractical.
A joint distribution P(X[n]) induces a joint distribution P(X[n],Z[n]) by pushing it forward with the function h:x[n]↦(x[j],g(x[j]))j∈[i] (actually computing this would, among other things, require knowledge of g). From this, in turn, we can derive a conditional probability P(X<n|Z<n).
If the mimic's model P(X<n) is thought to be a particularly good one, then because Zi is a function of Xi, we might also surmise that P(X<n|Z<n) is a good model for X\_{<n} given Z<n. Given a realisation of the sequence Z<n=z<n, the operator can consult the mimic's conditional probability to help them assess what outputs it is likely to produce
F(Z′n|Z<n=z<n)=∑x<n∈Xn−1P(Zn|X<n=x<n)P(X<n=x<n|Z<n=z<n)
because each Zi is a deterministic function of Xn, the right hand side is equal to P(Zn|Z<n=z<n).
But, if F(ZN) is dominated by P(ZN), then merging of opinions implies that
P(Zn|Z<n=z<n)→F(Zn|Z<n=z<n)
in total variation. So, instead of performing the impractically complex query to determineP(Zn|Z<n=z<n), the operator can just substitute their own estimate F(Zn|Z<n=z<n), and for sufficiently large n the result will be approximately the same.
### 2. Sequence is exchangeable
If the sequence XN is [exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables) with respect to P, then so is the sequence ZN. In this case, it can be shown that Zi is independent of X<n given ΘZ, the [empirical distribution](https://en.wikipedia.org/wiki/Exchangeable_random_variables#Exchangeability_and_the_i.i.d._statistical_model) of ZN, which is a function of ZN∖{n} or XN∖{n}. Hence we have
P(Zn|XN∖{n}=xN∖{n})=P(Zn|ΘZ∘gN(xN∖{n})=θZ)
=P(Zn|ΘZ(zN∖{n})=θZ)
=P(Zn|ZN∖{n}=zN∖{n})
I suspect it's possible to say something more directly about under what circumstances P(Zn|X<n=x<n)→F(Zn|Z<n=z<n), but at the moment I don't know more than this.
Exchangeable sequences also have the advantage that identical support is easier to evaluate. For exchangeable sequences, identical support of P(ZN) and F(ZN) is equivalent to the priors over the empirical distributions P(ΘZ) and F(ΘZ) having common support.
Convergence rates
-----------------
The fact that F(Zn|Z<n) converges to P(Zn|Z<n) "for some finite n" isn't especially useful by itself - n being finite does not mean that it is small enough to be practically important. I don't have much idea about the extent to which operators and mimics converge in practical settings.
It's possible that there are different features of human interest - say, Wi and Zi - such that F(Wn|W<n,Z<n) and F(Zn|W<n,Z<n) converge at very different rates to the respective conditionals in P. This difference in rates could be important if Wi is some feature relevant to "performance on the immediate objective" while Zi is some feature relevant to safety - it may then be possible to build a mimic that is very predictable with respect to the immediate objective but whose safety properties are very unpredictable.
Operators can control mimics and maintain synchronisation
=========================================================
Not only can operators predict what mimics will do unconditionally, but for some purposes, they can control mimics such that the mimic's behaviour remains synchronised with their forecasts of the natural sequence.
---
***Example***
Suppose the operator once again observes the genre and enjoyableness of many books, and she somehow controls the mimic to only produce books that she enjoys.
The operator's control *desynchronises* the mimic if it changes the mimic's distribution of book features conditional on enjoyability. For example, if most of the natural books the operator enjoyed were fantasy, but most of the mimicked books she enjoys are operator-flattery, then her control desynchronised the mimic.
The operator's control *maintains synchronisation* if the distribution of book features conditional on enjoyability doesn't change. If most of the mimicked books that the operator enjoys are also fantasy, then her control maintains synchronisation with respect to genre. Synchronisation is maintained in general if the distribution of "books in every last detail" conditional on enjoyability is unchanged.
---
A standard method for controlling mimics is [fine-tuning](https://www.lesswrong.com/posts/chevXfQmRYrTZnj8r/conditioning-prompts-and-fine-tuning) them. In particular, given a binary function b:X→{0,1}, we can fine tune a mimic to approximate samples from the conditioned distribution by reinforcement learning using a [KL-divergence](https://en.wikipedia.org/wiki/Kullback–Leibler_divergence) penalty. We set r(x)={0b(x)=1−∞b(x)=0
and then, letting π0:=P(Xn|X<n=x<n), set
Q←argmaxπθEx∼πθ[r(x)]−DKL(πθ,π0)
This is maximised by P(Xn|b(X′n)=1,X<n=x<n) (see [Korbak, Perez and Buckley](https://arxiv.org/abs/2205.11275), appendix).
If F(Zn|Z<n=z<n) approximates P(Zn|X<n=x<n) and consequentlyF(Zn|b(Xn)=1,Z<n=z<n) approximates P(Zn|b(Xn)=1,X<n=x<n), then the operator can adopt
F(Z′n|b(Xn)=1,Z<n=z<n)=F(Zn|b(Xn)=1,Z<n=z<n)
as an approximation of the conditioned mimic's sampling distribution. This requires, of course, that the operator is able to compute this conditional, and they may not be able to.
Setting a softer function r(x) will leave us somewhere between the conditioned distribution and the orignal distribution.
---
***Example***
Suppose the operator tracks two features of every book: its machine-rated binary sentiment and the number of times one person is described as helping another in the text; (Sn,Hn):=(sentiment(Xn),help(Xn)). If we use fine tuning to set the mimic's sampling distribution Q←P(Xn|Sn=1,X<n=x<n) and we accept that the appropriate form of synchronisation holds, then the operator can approximate the sampling distribution of mentions-of-helping using
F(H′n|S′n=1,S<n,H<n)=F(Hn|Sn=1,S<n,H<n)
Thus if mentions-of-helping is highly correlated with sentiment in natural books, such mentions will be very common in mimicked books fine-tuned to have positive sentiment. This example was inspired by [Jermyn](https://www.lesswrong.com/posts/chevXfQmRYrTZnj8r/conditioning-prompts-and-fine-tuning)'s discussion of the difficulty of predicting the outputs of conditioned mimics.
---
Fine-tuning with imperfect control is desynchronising
-----------------------------------------------------
In practise, the operator isn't just interested in controlling functions of the mimic's output Xi. She is usually interested in controlling some feature Ti of "the world at large" which is plausibly influenced by by X′i. Even in our example, we discuss things like whether books are enjoyable. The operator wants enjoyable books because she wants to read a book and enjoy it. Asking the mimic to make her enjoy the book is a lot to ask - the mimic seemingly can't do anything about her stressful job that dampens her enthusiasm for reading on some days.
What if we fine-tune the mimic with the same function, but with a reward that depends stochastically on Xi? That is, we set
Q←argmaxπθEx∼πθ;ρ[R|x]−DKL(πθ,π0)
where the expectation is is some [stochastic function](https://en.wikipedia.org/wiki/Markov_kernel) ρ:X→Δ(R) "implemented by the real world" that maps mimic outputs x to rewards R, which are once again assumed to take values of −∞ or 0 (not because it's a good idea, but because it helps to make my point).
If there is a nonempty "forcing" set XF⊂X defined by XF:={x|ρ(R=0|x)=1}, the result of this fine tuning will be to set Q to the distribution P(Xn|Xn∈XF,X<n=x<n).
Abusing notation again, let P(XN,RN) to be the result of taking P(XN) and "pushing the Xis through ρ" (alternatively: what the mimic would believe if an oracle told him that the distribution of Ri given Xi was ρ). Unlike the situation discussed previously, fine-tuning with imperfect control will *not* generally yield samples from P(Xn|Rn=0,X<n=x<n).
If control is "almost perfect" - i.e. P(Xn∈XF|Rn=0,X<n=x<n)>1−ϵ, then we almost get samples from the distribution conditioned on Rn=0. In particular, under the assumption of almost perfect control we have for any A∈X
|P(Xn∈A|Rn=0,X<n=x<n)−P(Xn∈A|Xn∈XF,X<n=x<n)|<2ϵ
However, if control is far from perfect - i.e. P(Xn∈XF|Rn=0,X<n=x<n)<ϵ - then P(Xn|Rn=0,X<n=x<n) can differ very substantially from P(Xn|Xn∈XF,X<n=x<n).
---
***Example***
Suppose the operator fine-tunes the mimic on rewards Ri, which take the value −∞ if a random person did not agree with the book's thesis after reading it, and 0 if said random person did agree with the book's thesis. - whether or not the book i is persuades a randomly chosen individual of its main thesis. The base rate for persuasion is low - P(Rn=0|Xn=x,X<n)=0.01, but conditional on persuasion there is substantial variation in the topic Zi - i.e. P(Zi=z|Ri=0,X<i)<ϵ for all z. Fine tuning to produce books X′i with a high rate of persuasion is found to achieve the aim of almost always "persuading" random people of the book's thesis, but all of the books produced argue the thesis that the sky is blue P(Zi=skyisblue|Xn∈XF,X<n)=1.
---
As before, softer reward functions will wind up somewhere between the unconditioned distribution and P(Xn|Xn∈XF,X<n=x<n). However, it remains difficult for the operator to forecast the result of fine-tuning, because unless they know XF in advance, they don't have any obvious method to condition on Xi∈XF.
As an aside, there is an additional problem in this regard where an operator fine-tunes a mimic to produce outputs with a particular feature, but she doesn't get what she wants in the real world from it because causation ≠ correlation.
Testing this theory
===================
The core of the theory is: the better the mimic, the better someone (or some machine) that has learned to predict or classify the training data will perform on the mimic generated data.
This could be tested in a scheme something like this: have human volunteers label a set of training data and a set of mimic generated data. Subsequently, compare the performance of:
* a classifier trained on the training data, tested on the mimic generated data
* a classifier trained on the mimic generated data and tested on a held-out set of mimic generated data
The theory I present here predicts that as the mimic gets better, the performance gap between the two should shrink.
Are mimics dangerous?
=====================
The above discussion suggests that fine-tuning a mimic on features over which it has imperfect control might lead to unexpected behaviour - and this behaviour might be very unexpected if the objective can be controlled, but only by the mimic adopting a very unusual strategy. "Unusual strategies" that succeed at controlling difficult objectives may well be dangerous. In practice, will people want to stick close to a mimic's original distribution, or push it far from this distribution in search of effective strategies?
The claims I have made above are already somewhat speculative. The question of whether mimics are safe depends on further speculation:
* Perhaps mimics may pay a performance penalty if they are not sufficiently regularised - fine tuning might have an adverse impact on their ability to generalise because they depended on the initially learned distribution to be able to do this
* Perhaps the desynchronisation from fine-tuning with imperfect control might lead to mimics giving undesirable results long before regularisation becomes weak enough to make them dangerous
If the first hypothesis are true, then mimics are "passively safe" - even if we try to remove the regularisation term during fine-tuning, their ability to generalise fails before they take any dangerous actions. If only the second is true, then mimics safety is incentive compatible. Removing the regularisation term can lead to dangerous actions, but no-one is interested in doing that because it gets undesirable results for other reasons. If neither is true, then mimic safety is incentive incompatible - people want small regularisation terms to get desirable results, but this trades off against safety.
Some empirical findings
-----------------------
### Desynchronisation can happen when fine-tuning without regularisation on perfectly controlled features
Fine-tuning language models without a KL penalty has been found to produce "degeneration" of the generated samples. Many articles attest that degeneration involves a reduction in "fluency and diversity" of samples.
[Korbak et. al.](https://arxiv.org/abs/2106.04985) examined different methods to fine-tune GPT-2 to produce compilable code. Their findings were, briefly:
* Unregularised reinforcement learning yielded a much higher rate of compilability at the cost of substantially reduced program length and complexity and substantial divergence from the baseline distribution of texts generated by GPT-2 conditioned on compilability
* KL-regularised fine tuning yielded lower rates of compilability but longer programs (though still *slightly* shorter than baseline) and reduced divergence from the distribution of texts generated by GPT-2 conditioned on compilability
Training without the KL-regularisation leads to divergence from the baseline distribution conditioned on compilability. If the baseline distribution is synchronised with an operator, then this divergence is what I call "desynchronisation". The reduction in program length is one consequence of desynchronisation among many, and illustrates how desynchronised mimics can yield undesirable results that satisfy the training goal on paper.
Earlier work by [Paulus, Xiong and Socher](https://openreview.net/forum?id=HkAClQgA-) reports a broadly similar result: fine-tuning summarisation with unregularised reinforcement yields higher scores on the metric of interest, but
> It is possible to game such discrete metrics and increase their score without an actual increase in readability or relevance
>
>
they also employ a kind of regularisation to try to improve summarisation while maintaining readability and relevance.
I think these examples provide very weak evidence against passive safety - unregularised reinforcement learning was successful at improving their scores on the metrics in question. I think they provide also very weak evidence in favour of incentive safety - unregularised reinforcement learning was found to produce output that was nevertheless undesirable. I say the evidence is very weak because I would not be surprised if these examples were not representatives of substantially more advanced systems deployed to solve substantially more difficult problems.
It's worth noting that Korbak et. al. were not able to produce perfectly compilable samples from GPT-2 using KL-regularised fine tuning, despite the fact that compilability definitely is perfectly controlled by the sequence generator. My guess is that being unable to learn the compilability predicate looks quite similar to the situation where compilability is not fully controlled by the learner. This leads me to expect that KL-regularised fine tuning in this regime might in some ways be similar to KL-regularised fine tuning in the imperfect control regime. Thus I expect to see some desynchronisation in this context, and I wonder if the slight reduction in program length this team observed is a sign of this.
### There are many different pre-training schemes that seem to be effective
Pre-training might not need a large and diverse dataset to be effective. For example:
* [Krishna et. al.](https://dblalock.substack.com/i/76150421/downstream-datasets-make-surprisingly-good-pretraining-corpora) find that self-supervised pretraining on a small task-specific text dataset can yield results nearly as good as (and in some cases better than) pretraining on a large and diverse corpus of text
* Other papers behind that link show that self-supervised pretraining on nonsense text or synthetic text can also yield high performance on downstream tasks
If pretraining datasets don't matter very much, then (in my language) P(Xn|X<n) might not need to match F(Xn|X<n) very closely in order to produce a mimic with high performance. If these distributions do not match in every particular then, for example, F putting low weight on dangerous actions does not necessarily imply P puts low weight on the same.
On the other hand, pretraining on large datasets does seem to help performance on average, and despite the results mentioned above it remains plausible to me that extensive pretraining is necessary for mimics that are used to solve particularly difficult problems.
I think these results - especially the pretraining on nonsense text results - also weakly undermine the claim that synchronisation is an important reason why pretrained models are able to perform useful tasks ("because they give us what they expect"), but I think the relevance is very slight and is outweighed by things like the fact that we can ask GPT-3 a question and get a sensible answer in response.
Conclusion
==========
The basic idea here seems obvious: a good mimic is hard to distinguish from the thing it's mimicking. Nevertheless, to my knowledge, Bayesian merging of opinions ("synchronisation") has not previously been proposed as mechanism for how this occurs. My impression is also that applying standard prediction techniques (both formal and informal) to features of the training sequences to predict features of the outputs of mimics has been widely used - for example, in the investigation of prompting - but the reasons for why this is possible have also not been explored very much theoretically.
I wonder whether it is feasible to advance the science of deep learning (and psychology?) to the point where we have strong enough results about synchronisation to actually prove some safety properties for advanced mimics. I am pessimistic about this, but not confident in my pessimism.
In my view, here are some key takeaways of this post:
* Controlling advanced AI presents us with a problem of "delegate proxy controllability": under what conditions can I direct a delegate to pursue proxy M and expect good results?
* If we take an event M to be a good proxy for desired results under "natural" conditions, then I suggest that if the consequences of a delegate pursuing M match our expectations for what happens when M occurs under natural conditions, then M should be a good proxy for controlling that delegate
* Under some (possibly optimistic) assumptions, mimics can achieve the property outlined in the previous bullet
* Furthermore, when some of those optimistic assumptions don't hold, we might be able to measure the "Goodhart-proneness" of an objective by estimating the probability of an action lying in the forcing set for that objective conditional on that objective being achieved. Such a measure seems relevant to a number of concerns in the AI safety field.
|
d399e777-6162-43e1-98e3-49e5e0c3867a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Mysterious Answers to Mysterious Questions
Imagine looking at your hand, and knowing nothing of cells, nothing of biochemistry, nothing of DNA. You’ve learned some anatomy from dissection, so you know your hand contains muscles; but you don’t know why muscles move instead of lying there like clay. Your hand is just . . . stuff . . . and for some reason it moves under your direction. Is this not magic?
> It seemed to me then, and it still seems to me, most probable that the animal body does not act as a thermodynamic engine . . . The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concourse of atoms[.]1
>
> [C]onsciousness teaches every individual that they are, to some extent, subject to the direction of his will. It appears, therefore, that animated creatures have the power of immediately applying, to certain moving particles of matter within their bodies, forces by which the motions of these particles are directed to produce desired mechanical effects.2
>
> Modern biologists are coming once more to a firm acceptance of something beyond mere gravitational, chemical, and physical forces; and that unknown thing is a vital principle.3
>
> —Lord Kelvin
This was the theory of vitalism ; that the mysterious difference between living matter and non-living matter was explained by an Élan vital or vis vitalis. Élan vital infused living matter and caused it to move as consciously directed. Élan vital participated in chemical transformations which no mere non-living particles could undergo—Wöhler’s later synthesis of urea, a component of urine, was a major blow to the vitalistic theory because it showed that mere chemistry could duplicate a product of biology.
Calling “Élan vital
|
e3a90ac1-6155-42ec-9046-babcb134fe26
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Building AI safety benchmark environments on themes of universal human values
This is an AI Safety Camp 10 project that I will be leading. With this post, I am looking for external collaborators, ideas, questions, resource suggestions, feedback, and other thoughts.
Summary
Based on various sources of anthropological research, I have compiled a preliminary list of universal (cross-cultural) human values. It seems to me that various of these universal values resonate with concepts from AI safety, but use different keywords. It might be useful to map these universal values to more concrete definitions using concepts from AI safety.
One notable detail in this research is that in case of AI and human cooperation, the values are not symmetric as they would be in case of human-human cooperation. This arises because we can change the goal composition of agents, but not of humans. Additionally there is the crucial difference that agents can be relatively easily cloned, while humans cannot. Therefore, for example, a human may have a universal need for autonomy, while an AI agent might imaginably not have that need built-in. If that works out, then the agent would instead have a need to support human autonomy.
The objective of this project would be to implement these mappings of concepts into tangible AI safety benchmark environments.
The non-summary
A related subject is balancing multiple human values (as the title says, it is in plural!). The human values and needs have to be met to a reasonable degree, that is, considering balancing all other human values as well. In this context, balancing is not the same as “tradeoff”. In some interpretations and use cases, tradeoff means linear rate of substitution between objectives, but as economists know well - generally humans prefer averages in all objectives to extremes in a few objectives. This means a naive approach of summing up the rewards of an AI agent would not yield aligned results. It is essential to use nonlinear utility functions for transforming the rewards before summing them up in the RL
|
3d947cb0-6e49-4ef3-bbcf-1a34200f604b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Zombies: The Movie
FADE IN around a serious-looking group of uniformed military officers. At the head of the table, a senior, heavy-set man, GENERAL FRED, speaks.
GENERAL FRED: The reports are confirmed. New York has been overrun... by zombies.
COLONEL TODD: Again? But we just had a zombie invasion 28 days ago!
GENERAL FRED: These zombies... are different. They're... philosophical zombies.
CAPTAIN MUDD: Are they filled with rage, causing them to bite people?
COLONEL TODD: Do they lose all capacity for reason?
GENERAL FRED: No. They behave... exactly like we do... except that they're not conscious.
(Silence grips the table.)
COLONEL TODD: Dear God.
GENERAL FRED moves over to a computerized display.
GENERAL FRED: This is New York City, two weeks ago.
The display shows crowds bustling through the streets, people eating in restaurants, a garbage truck hauling away trash.
GENERAL FRED: This... is New York City... now.
The display changes, showing a crowded subway train, a group of students laughing in a park, and a couple holding hands in the sunlight.
COLONEL TODD: It's worse than I imagined.
CAPTAIN MUDD: How can you tell, exactly?
COLONEL TODD: I've never seen anything so brutally ordinary.
A lab-coated SCIENTIST stands up at the foot of the table.
SCIENTIST: The zombie disease eliminates consciousness without changing the brain in any way. We've been trying to understand how the disease is transmitted. Our conclusion is that, since the disease attacks dual properties of ordinary matter, it must, itself, operate outside our universe. We're dealing with an epiphenomenal virus.
GENERAL FRED: Are you sure?
SCIENTIST: As sure as we can be in the total absence of evidence.
GENERAL FRED: All right. Compile a report on every epiphenomenon ever observed. What, where, and who. I want a list of everything that hasn't happened in the last fifty years.
CAPTAIN MUDD: If the virus is epiphenomenal, how do we know it exists?
SCIENTIST: The same way
|
e95be19f-3be7-48c6-a8df-e5d07163334b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Game Theory and Behavioral Economics in The Stock Market
In this post, I attempt to apply a few of the theoretical concepts I’ve discussed in previous posts (on NonZeroSum.Games)to a field that we encounter often but remains a mystery to most of us - the stock market. If the stock market is a game, is it zero-sum? Who are the agents? What about rationality and higher-order beliefs? Exploring these questions below, I hope to unpack this topic through the lens of the human mind and common knowledge. But first ….
What are Zero Sum Games?
As the name suggests, a zero-sum game is one in which one player’s winnings are equaled by the other’s losses, resulting in no overall creation or destruction of wealth (Zero-Sum Games, 2019). If this is the case, we can see why the players have no common interests - mutual gain is impossible with diametrically opposing interests. Broadly, there are two kinds of zero-sum games : perfect information games, and imperfect information games. In a perfect information zero-sum game, like chess, both players are aware of the results of all previous moves. Importantly, in this kind of game there exists an optimal strategy, the minimax strategy, which may not ensure victory but definitely minimizes losses (Lecture 6 Zero Sum Games and the MinMax Theorem, 2017). Yes, chess does have an optimal strategy that can ensure a draw - but the sheer number of moves possible by each player at each stage make it impossible to determine currently. In imperfect information zero-sum games, players do not have the knowledge of past opponent moves - potentially because they occur simultaneously, such as rock-paper-scissors.
The Stock Market as a Game
Financial markets, including the stock market, have usually been classified as zero-sum. However, this is far too simplified a view of this game with millions of players - not just investors buying securities, but companies, shareholders, a board of trustees etc. (Brown, 2012). While some transactions can probably be classified as zero-sum, such as opti
|
42f347ec-8528-45fa-b520-992c54603788
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
2017 AI Safety Literature Review and Charity Comparison
*Summary: I review a significant amount of 2017 research related to AI Safety and offer some comments about where I am going to donate this year. Cross-posted from here upon request.*
Contents
--------
Contents
Introduction
The Machine Intelligence Research Institute (MIRI)
The Future of Humanity Institute (FHI)
Global Catastrophic Risks Institute (GCRI)
The Center for the Study of Existential Risk (CSER)
AI Impacts
Center for Human-Compatible AI (CFHCA)
Other related organisations
Related Work by other parties
Other major developments this year
Conclusion
Disclosures
Bibliography
Introduction
------------
[Like last year](http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/), I’ve attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to an securities analyst with regards to possible investments. It appears that once again no-one else has attempted to do this, to my knowledge, so I've once again undertaken the task. While I've been able to work significantly more efficiently on this than last year, I have been unfortunately very busy with my day job, which has dramarically reduced the amount of time I’ve been able to dedicate.
My aim is basically to judge the output of each organisation in 2017 and compare it to their budget. This should give a sense for the organisations' average cost-effectiveness. Then we can consider factors that might increase or decrease the marginal cost-effectiveness going forward. We focus on organisations, not researchers.
Judging organisations on their historical output is naturally going to favour more mature organisations. A new startup, whose value all lies in the future, will be disadvantaged. However, I think that this is correct. The newer the organisation, the more funding should come from people with close knowledge. As organisations mature, and have more easily verifiable signals of quality, their funding sources can transition to larger pools of less expert money. This is how it works for startups turning into public companies and I think the same model applies here.
This judgement involves analysing a large number papers relating to Xrisk that were produced during 2017. Hopefully the year-to-year volatility of output is sufficiently low that this is a reasonable metric. I also attempted to include papers during December 2016, to take into account the fact that I'm missing the last month's worth of output from 2017, but I can't be sure I did this successfully.
This article focuses on AI risk work. If you think other causes are important too, your priorities might differ. This particularly affects GCRI and CSER, who both do a lot of work on other issues.
We focus virtually exclusively on papers, rather than outreach or other activities. This is party because they are much easier to measure; while there has been a large increase in interest in AI safety over the last year, it’s hard to work out who to credit for this, and partly because I think progress has to come by persuading AI researchers, which I think comes through technical outreach and publishing good work, not popular/political work.
My impression is that policy on technical subjects (as opposed to issues that attract strong views from the general population) is generally made by the government and civil servants in consultation with, and being lobbied by, outside experts and interests. Without expert (e.g. top ML researchers at Google, CMU & Baidu) consensus, no useful policy will be enacted. Pushing directly for policy seems if anything likely to hinder expert consensus. Attempts to directly influence the government to regulate AI research seem very adversarial, and risk being pattern-matched to ignorant opposition to GM foods or nuclear power. We don't want the 'us-vs-them' situation, that has occurred with climate change, to happen here. AI researchers who are dismissive of safety law, regarding it as an imposition and encumbrance to be endured or evaded, will probably be harder to convince of the need to voluntarily be extra-safe - especially as the regulations may actually be totally ineffective. The only case I can think of where scientists are relatively happy about punitive safety regulations, nuclear power, is one where many of those initially concerned were scientists themselves. Given this, I actually think policy outreach to the general population is probably negative in expectation.
The good news on outreach this year is we haven’t had any truly terrible publicity that I can remember, though I urge organisations to remember that the personal activities of their employees, especially senior ones, reflect on the organisations themselves, so they should take care not to act/speak in ways that are offensive to those outside their bubble, and to avoid hiring crazy people.
Part of my motivation for writing this is to help more people become informed about the AI safety landscape so they can contribute better with both direct work and donations. With regard donations, at present Nick Beckstead, in his role as both Fund Manager of the [Long-Term Future Fund](https://app.effectivealtruism.org/funds/far-future) and officer with the Open Philanthropy Project, is probably the most important financer of this work. He is also probably significantly more informed on the subject than me, but I think it's important that the vitality of the field doesn't depend on a single person, even if that person is awesome.
The Machine Intelligence Research Institute (MIRI)
--------------------------------------------------
[MIRI](https://intelligence.org/) is the largest pure-play AI existential risk group. Based in Berkeley, it focuses on mathematics research that is unlikely to be produced by academics, trying to build the foundations for the development of safe AIs.
Their agent foundations work is basically trying to develop the correct way of thinking about agents and learning/decision making by spotting areas where our current models fail and seeking to improve them. Much of their work this year seems to involve trying to address self-reference in some way - how can we design, or even just model, agents that are smart enough to think about themselves? This work is technical, abstract, and requires a considerable belief in their long-term vision, as it is rarely locally applicable, so hard to independently judge the quality.
In 2016 they announced they were somewhat pivoting towards work that tied in closer to the ML literature, a move I thought was a mistake. However, looking at [their published research](https://intelligence.org/all-publications/) or their [2017 review page](https://intelligence.org/2017/12/01/miris-2017-fundraiser/), in practice this seems to have been less of a change of direction than I had thought, as most of their work appears to remain on highly differentiated and unreplaceable agent foundations type work - it seems unlikely that anyone not motivated by AI safety would produce this work. Even within those concerned about friendly AI, few not at MIRI would produce this work.
Critch's *[Toward Negotiable Reinforcement Learning: Shifting Priorities in Pareto Optimal Sequential Decision-Making](https://arxiv.org/abs/1701.01302)* (elsewhere titled '*Servant of Many Masters'*) is a neat paper. Basically it identifies the pareto-efficient outcome if you have two agents with different beliefs who want to agree on a utility function for an AI, in a generalisation of Harsanyi's *[Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility](http://www.springer.com/us/book/9789027711861)*. The key assumption is both want to use their current beliefs when they calculate the expected value of the deal to themselves, and the (surprising to me) conclusion is that over time the AI will have to weigh more and more heavily the *values* of the negotiator whose *beliefs* were more accurate. While I don't think this is necessarily Critch's interpretation, I take this as something of a *reductio* of the assumption. Surely if I was negotiating over a utility function, I would want the agent to learn about the world and use that knowledge to better promote my values ... not to learn about the world, decide I was a moron with a bad world model, and ignore me thereafter? If I think the AI is/will be smarter than me, I should be happy for it to do things I'm unaware will benefit me, and avoid doing things I falsely believe will help me. On the other hand, if the parties are well-informed nation states rather than individuals, the prospect of ‘getting one over’ the other might be helpful for avoiding arms races?
Kosoy's *[Optimal polynomial-time estimators](https://arxiv.org/abs/1608.04112)* addresses a similar topic to the Logical Induction work - assigning 'probabilities' to logical/mathematical/deductive statements under computational limitations - but with a quite different approach to solving it. The work seems impressive but I didn't really understand it. Inside his framework he can prove that various results from probability theory also apply to logical statements, which seems like what we'd want. (Note that technically this paper came out in December 2016, and so is included in this year rather than last year’s.)
Carey's article, *[Incorrigibility in the CIRL Framework](https://arxiv.org/abs/1709.06275)*, is a response to Milli et al.’s *[Should Robots be Obedient](https://arxiv.org/pdf/1705.09990.pdf)* and Hadfield-Menel's *[The Off-Switch Game](https://arxiv.org/pdf/1611.08219.pdf)*. Carey basically argues it’s not necessarily the case that the CIRLs will be ‘automatically’ corigible if the AI's beliefs about value are very wrong, for example due to incorrect parameterisation or assigning a zero prior to something that turns out to be the case. The discussion section has some interesting arguments, for example pointing out that an algorithm designed to shut itself off unless it had a track record of perfectly predicting what humans would want might still fail if its ontology was insufficient, so it couldn't even tell that it was disagreeing with the humans during training. I agree that value complexity and fragility might mean it’s very likely that any AI’s value model will be partially (and hence, for an AGI, catastrophically) mis-parameterised. However, I’m not sure how much the examples that take up much of the paper add to this argument. Milli’s argument only holds when the AI can learn the parameters, and given that this paper assumes the humans choose the wrong action by accident less than 1% of the time, it seems that the AI should assign a very large amount of evidence to a shutdown command... instead the AI seems to simply ignore it?
Some of MIRI's publications this year seem to mainly be better explanations of previous work. For example, Garrabrant et al's *[A Formal Approach to the Problem of Logical Non-Omniscience](https://arxiv.org/abs/1707.08747)* seems to be basically an easier to understand version of last year's *[Logical Induction](http://arxiv.org/abs/1609.03543)*. Likewise Yudkowsky and Soares's *[Functional Decision Theory: A New Theory of Instrumental Rationality](https://arxiv.org/abs/1710.05060)* seems to be basically new exposition of classic MIRI/LW decision theory work - see for example Soares et al's *[Toward Idealized Decision Theory](https://arxiv.org/pdf/1507.01986.pdf)*. Similarly, I didn't feel like there was much new in Soares et al's *[Cheating Death in Damascus](https://intelligence.org/files/DeathInDamascus.pdf)*. Making things easier to understand is useful - and last year's Logical Induction paper was a little dense - but it's clearly not as impressive as inventing new things.
When I asked for top achievements for 2017, MIRI pointed me towards a lot of work they'd posted on [agentfoundations.org](https://agentfoundations.org/) as being one of their major achievements for the year, especially *[this](https://agentfoundations.org/item?id=1468)*, *[this](https://agentfoundations.org/item?id=1356)* and *[this](https://agentfoundations.org/item?id=1712)*, which pose and then solve a problem about how to find game-theoretic agents that can stably model each other, formulated it as a topological fixed point problem. There is also a lot of other work on agentfoundations that seems interesting, I'm not entirely sure how to think about giving credit for these. These seem more like 'work in progress' than finished work - for most organisations I am only giving credit for the latter. MIRI could with some justification respond that the standard academic process is very inefficient, and part of their reason for existence is to do things that universities cannot. However, even if you de-prioritise peer review, I still think it is important to write things up into papers. Otherwise it is extremely hard for outsiders to evaluate - bad both for potential funders and for people wishing to enter the field. Unfortunately it is possible that, if they continue on this route, MIRI might produce a lot of valuable work that is increasingly illegible from the outside. So overall I think I consider these as evidence that MIRI is continuing to actually do research, but will wait until they’re ArXived to actually review them. If you disagree with this approach, MIRI is going to look much more productive, and their research possibility accelerating in 2017 vs 2016. If you instead only look at published papers, 2017 appears to be something of a ‘down year’ after 2016.
Last year I was not keen to see that Eliezer was spending a lot of time producing content on Arbital as part of his job at MIRI, as there was a clear conflict of interest - he was a significant shareholder in Arbital, and additionally I expected Arbital to fail. Now that [Arbital does seem to have indeed failed](http://lesswrong.com/r/discussion/lw/otq/whats_up_with_arbital/), I'm pleased he seems to be spending less time on it, but confused why he is spending any time at all on it - though [some of this](https://arbital.com/p/yudkowsky_chollet_reply/) seems to be [cross-posted from elsewhere](https://intelligence.org/2017/12/06/chollet/).
Eliezer's book *[Inadequate Equilibria](https://www.amazon.com/dp/B076Z64CPG)*, however, does seem to be high quality - basically another sequence - though only relevant inasmuch as AI safety might be one of many applications of the subject of the book. I also encourage readers to also read this [excellent article](http://effective-altruism.com/ea/1g7/in_defence_of_epistemic_modesty/) by Greg Lewis (FHI) on the other side.
I also enjoyed *[There's No Fire Alarm for Artificial General Intelligence](https://intelligence.org/2017/10/13/fire-alarm/)*, which although accessible to the layman I think provided a convincing case that, even when AGI is imminent, there would (/might be) no signal that this was the case, and his [socratic security dialogs](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) on the mindset required to develop a secure AI.
I was sorry to hear Jessica Taylor left MIRI, as I thought she did good work.
MIRI spent roughly $1.9m in 2017, and aim to rapidly increase this to $3.5m in 2019, to fund new researchers and their new engineering team.
The Open Philanthropy Project [awarded MIRI a $3.75m grant](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017) (over 3 years) earlier this year, largely because one reviewer was impressed with their work on Logical Induction. You may recall this was a significant part of why I [endorsed MIRI last year](http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/). However, as this review is focused on work in the last twelve months, they don't get credit for the same work two years running! OPP have said they plan to fund roughly half of MIRI's budget. On the positive side, one might argue this was essentially a 1:1 match on donations to MIRI - but there are clearly game-theoretic problems here. Additionally, if you had faith in OpenPhil’s process, you might consider this a positive signal of MIRI quality. On the other hand, if you think MIRI's marginal cost-effectiveness is diminishing over the multi-million dollar range, this might reduce your estimate of the cost-effectiveness of the marginal dollar.
There is also $1m of somewhat plausibly counterfactually valid donation matching [available for MIRI](https://2017charitydrive.com/) (but not other AI Xrisk organisations).
Finally, I will note that MIRI are have been very generous with their time in helping me understand what they are doing.
The Future of Humanity Institute (FHI)
--------------------------------------
Oxford’s [FHI](https://www.fhi.ox.ac.uk/) requested not to be included in this analysis, so I won't be making any comment on whether or not they are a good place to fund. Had they not declined (and depending on their funding situation) they would have been a strong candidate. This was disappointing to me, because they seem to have produced [an impressive list of publications](https://www.fhi.ox.ac.uk/publications/) this year, including a lot of collaborations. I’ll briefly note two some pieces of research they published this year, but regret not being able to give them better coverage.
Saunders et al. published *[Trial without Error: Towards Safe Reinforcement Learning via Human Intervention](https://arxiv.org/abs/1707.05173)*, a nice paper where they attempt to make a Reinforcement Learner that can 'safely' learn by training a catastrophe-recognition algorithm to oversee the training. It's a cute idea, and a nice use of the OpenAI Atari suite, though I was most impressed with the fact that they concluded that their approach would not scale (i.e. would not work). It's not often researchers publish negative results!
Honourable mention also goes to the very cool (but aren't all his papers?) Sandberg et al. *[That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox](https://arxiv.org/pdf/1705.03394.pdf)*, which is relevant inasmuch as it suggests that the Fermi Paradox is *not* actually evidence against AI as an existential risk.
FHI’s [Brundage Bot](https://twitter.com/BrundageBot) apparently reads every ML paper ever written.
Global Catastrophic Risks Institute (GCRI)
------------------------------------------
The [Global Catastrophic Risks Institute](http://gcrinstitute.org/) is run by Seth Baum and Tony Barrett. They have produced work on a variety of existential risks, including non-AI risks. Some of this work seems quite valuable, especially Denkenberger's *[Feeding Everyone No Matter What](https://www.amazon.com/Feeding-Everyone-Matter-What-Catastrophe/dp/0128044470)* on ensuring food supply in the event of disaster, and is probably probably of interest to the sort of person who would read this document. However, they are off-topic for us here. Within AI they do a lot of work on the strategic landscape, and are very prolific.
Baum’s *[Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741)* attempts to analyse all existing AGI research projects. This is a huge project and I laud him for it. I don’t know how much here is news to people who are very plugged in, but to me at least it was very informative. The one criticism I would have is it could do more to try to differentiate on capacity/credibility - e.g. my impression is Deepmind is dramatically more capable than many of the smaller organisations listed - but that is clearly a very difficult ask. It’s hard for me to judge the accuracy, but I didn’t notice any mistakes (beyond being surprised that AIXI has an ‘unspecified’ for safety engagement, given the amount of AI safety papers coming out of ANU.)
Baum’s *[Social Choice Ethics in Artificial Intelligence](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3046725)* argues that value-learning type approaches to AI ethics (like [CEV](https://intelligence.org/files/CEV.pdf) ) contain many degrees of freedom for the programmers to finesse it to pick their values, making them no better than the programmers simply choosing an ethical system directly. The programmers can choose *whose* values are used for learning, how they are *measured*, and how they are *aggregated*. Overall I’m not fully convinced - for example, *pace* the argument on page 3, a Law of Large Numbers argument could support averaging many views to get at the true ethics *even if we had no way of independently verifying the true ethics*. And there is some irony that, for all the paper’s concern with bias risk, the left-wing views of the author come through strongly. But despite these I liked the paper, especially for the discussion of who has standing - something that seems like it will need a philosophical solution, rather than a ML one.
Barrett's *[Value of Global Catastrophic Risk (GCR) Information: Cost-Effectiveness-Based Approach for GCR Reduction](https://www.dropbox.com/s/7a7eh2law7tbvk0/2017-barrett.pdf?dl=0)* covers a lot of familiar ground, and then attempts to do some monte carlo cost-benefit analysis on the a small number of interventions to help address nuclear war and comet impact. After putting a lot of thought into setting up the machinery, it would have been good to see analysis of a wider range of risks!
Baum & Barrett published *[Global Catastrophes: The Most Extreme Risks](http://sethbaum.com/ac/2018_Extreme.pdf)*, which seems to be essentially a reasonably well argued general introduction to the subject of existential risks. Hopefully people who bought the book for other reasons will read it and become convinced.
Baum & Barrett's *[Towards an Integrated Assessment of Global Catastrophic Risk](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3046816)* is a similar introductory piece on catastrophic risks, but the venue - a colloquium on catastrophic risks - seems less useful, as people reading it are more likely to already be concerned about the subject, and I don't think it spends enough time on AI risk *per se* to convince those who were already worried about Xrisk but not AI Xrisk.
Last year I was (and still am) impressed by their paper *[On the Promotion of Safe and Socially Beneficial Artificial Intelligence](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2816323)*, which made insightful, convincing and actionable criticisms of 'AI arms race' language. I was less convinced by this year's [*Reconciliation Between Factions Focused on Near-Term* *and Long-Term Artificial Intelligence*](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2976444), which argues for a re-alignment away from near-term AI worries vs long-term AI worries towards AI worriers vs non-worriers. However, I'm not sure why anyone would agree to this - long-term worriers don't currently spend much time arguing against short-term worries (even if you thought that AI discrimination arguments were orwellian, why bother arguing about it?), and convincing short-term worriers to stop criticise long-term worries seems approximately as hard as simply convincing them to become long-term worriers.
GCRI spent approximately $117k in 2017, which is shockingly low considering their productivity. This was lower than 2016; apparently their grants from the US Dept. of Homeland Security came to an end.
The Center for the Study of Existential Risk (CSER)
---------------------------------------------------
[CSER](http://cser.ac.uk/) is an existential risk focused group located in Cambridge. Like GCRI they do work on a variety of issues, notably including Rees’ work on [infrastructure resilience](https://www.cser.ac.uk/media/uploads/files/Black-Sky-Workshop-at-the-Royal-Society-Jan.-20171.pdf).
Last year I criticised them for not having produced any online research over several years; they now have a [separate page](https://www.cser.ac.uk/resources/filter/publication/risks-from-artificial-intelligence/all/all/) that does list some but maybe not all of their research.
Liu, a CSER researcher, wrote *[The Sure-Thing principle and P2](http://www.academia.edu/33992500/The_Sure-thing_Principle_and_P2)* and was second author on Gaifman & Liu's *[A simpler and more realistic subjective decision theory](https://link.springer.com/article/10.1007%2Fs11229-017-1594-6)*, both on the mathematical foundations of bayesian decision theory, which is a valuable topic for AI safety in general. Strangely neither paper mentioned CSER as a financial supporter of the paper or affiliation.
Liu and Price’s *[Heart of DARCness](http://yliu.net/wp-content/uploads/darcness.pdf)* argues that agents do not have credences for what they will do while deciding whether to do it - their confidence is temporarily undefined. I was not convinced - even someone is deciding whether she’s 75% confident or 50% confident, presumably there are some odds that determine which side in a bet she’d take if forced to choose? I’m also not sure of the direct link to AI safety.
They’ve also convened and attended workshops on AI and decision theory, notably the [AI & Society Symposium in Japan](https://www.cser.ac.uk/news/ai-society-symposium/), but in general I am wary of giving organisations credit for these, as they are too hard for the outside observer to judge, and ideally workshops lead to produce papers - in which case we can judge those.
CSER also did a significant amount of outreach, including [presenting to the House of Lords](https://www.cser.ac.uk/resources/written-evidence-lords-select-committee-artificial-intelligence/), and apparently have expertise in Chinese outreach (multiple native mandarin speakers), which could be important, given China’s AI research but cultural separation from the west.
They are undertaking a novel publicity effort that I won’t name as I’m not sure it’s public yet. In general I think most paths to success involve consensus-building among mainstream ML researchers, and ‘popular’ efforts risk harming our credibility, so I am not optimistic here.
Their annual budget is around $750,000, with I estimate a bit less than half going on AI risk . Apparently they need to raise funds to continue existing once their current grants run out in 2019.
AI Impacts
----------
AI Impacts is a small group that does high-level strategy work, especially on AI timelines, somewhat associated with MIRI.
They seem to have produced significantly more this year than last year. The main achievement is the *[When will AI exceed Human Performance? Evidence from AI Experts](https://arxiv.org/abs/1705.08807)*, which asked gathered the opinions of hundreds of AI researchers on AI timelines questions. There were some pretty relevant takeaways, like that most researchers find the AI Catastrophic Risk argument somewhat plausible, but doubt there is anything that can usefully be done in the short term, or that asian researchers think human-level AI is significantly closer than americans do. I think the value-prop here is twofold: firstly, providing a source of timeline estimates for when we make decisions that hinge on how long we have, and secondly, to prove that concern about AI risk is a respectable, mainstream position. It was apparently [one of the most discussed papers of 2017](https://www.altmetric.com/top100/2017/#list).
On a similar note they also have data on improvements in a number of AI-related benchmarks, like [computing costs](https://aiimpacts.org/recent-trend-in-the-cost-of-computing/) or [algorithmic progress](https://aiimpacts.org/trends-in-algorithmic-progress/).
John Salvatier (member of AI Impacts at the time) was also second author on [Agent-Agnostic Human-in-the-Loop Reinforcement Learning](https://arxiv.org/abs/1701.04079), along with Evans (FHI, 4th author), which attempts to design an interface for reinforcement learning that abstracts away from the agent, so you could easily change the underlying agent.
AI Impacts’ budget is tiny compared to most of the other organisations listed here; around $60k at present. Incremental funds would apparently be spent on hiring more part-time researchers.
Center for Human-Compatible AI (CFHCA)
--------------------------------------
The Center for Human-Compatible AI, founded by Stuart Russell in Berkeley, launched in August 2016. As they are not looking for more funding at the moment I will only briefly survey some of they work on cooperative inverse reinforcement learning.
Hadfield-Menel et al's *[The Off-Switch Game](https://arxiv.org/pdf/1611.08219.pdf)* is a nice paper that produces and formalises the (at least now I've read it) very intuitive result that a value-learning AI might be corrigible (at least in some instances) because it takes the fact that a human pressed the off-switch as evidence that this is the best thing to do.
Milli et al's *[Should Robots be Obedient](https://arxiv.org/pdf/1705.09990.pdf)* is in the same vein as Hadfield-Menel et al’s *[Cooperative Inverse Reinforcement Learning](https://arxiv.org/abs/1606.03137)* (last year) on learning values from humans, specifically touching on whether such agents would be willing to obey a command to 'turn off', as per Soares's paper on *[Corrigibility](https://intelligence.org/files/Corrigibility.pdf)*. She does some interesting analysis about the trade-off between obedience and results in cases where humans are fallible.
In both cases I thought the papers were thoughtful and had good analysis. However, I don’t think either is convincing in showing that corrigibility comes ‘naturally’ - at least not the strength of corrigibility we need.
I encourage them to keep their website more up-to-date.
Overall I think their research is good and their team promising. However, apparently they have enough funding for now, so I won't be donating this year. If this changed and they requested incremental capital I could certainly imagine funding them in future years.
Other related organisations
---------------------------
[The Center for Applied Rationality](http://rationality.org/resources/updates/2017/cfar-2017-fundraiser) (CFAR) works on trying to improve human rationality, especially with the aim of helping with AI Xrisk efforts.
[The Future of Life Institute](https://futureoflife.org/2017/11/27/help-support-fli-giving-tuesday/) (FLI) ran a huge grant-making program to try to seed the field of AI safety research. There definitely seem to be a lot more academics working on the problem now, but it’s hard to tell how much to attribute to FLI.
[Eighty Thousand Hours](https://80000hours.org/articles/extinction-risk/) (80K) provide career advice, with AI safety being one of their key cause areas.
Related Work by other parties
-----------------------------
*[Deep Reinforcement Learning from Human Preferences](https://arxiv.org/abs/1706.03741)*, was possibly my favourite paper of the year, which possibly shouldn’t come as a surprise, given that two of the authors (Christiano and Amodei from OpenAI ) were authors on last year’s *[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565)*. It applies ideas on bootstrapping that Christiano has been discussing for a while - getting humans to train an AI which then trains another AI etc. The model performs significantly better than I would have expected, and as ever I’m pleased to see OpenAI - Deepmind collaboration.
Christiano continues to produce very interesting content on his blog, like [this](https://ai-alignment.com/corrigibility-3039e668638) on Corrigibility. When I first read his articles about how to bootstrap safety through iterative training procedures, my reactions was that, while this seemed an interesting idea, it didn't seem to have much in common with mainstream ML. However, there do seem to be a bunch of practical papers about imitation learning now. I'm not sure if this was always the case, and I was just ignorant, or if they have become more prominent in the last year. Either way, I have updated towards considering this approach to be a promising one for integrating safety into mainstream ML work. He has also written [a nice blog post](https://ai-alignment.com/alphago-zero-and-capability-amplification-ede767bb8446) explaining how AlphaZero works, and arguing that this supports his enhancement ideas.
It was also nice to see [~95 papers](https://scholar.google.com/scholar?hl=en&as_sdt=0,31&sciodt=0,31&cites=6186600309471256628&scipsc=) that were addressing Amodei et al's call in last year’s *[Concrete Problems](https://arxiv.org/abs/1606.06565)*.
Menda et al's *[DropoutDAgger](https://arxiv.org/abs/1709.06166)* paper on safe exploration seems to fit in this category. Basically they come up with a form of imitation learning where the AI being trained can explore a bit, but isn't allowed to stray too far from the expert policy - though I'm not sure why they always have the learner explore in the direction it thinks is best, rather than assigning some weight to its uncertainty of outcome, explore-exploit-style. I'm not sure how much credit Amodei et al can get for inspiring this though, as it seems to be (to a significant degree) an extension of Zhang and Cho’s *[Query-Efficient Imitation Learning for End-to-End Autonomous Driving](https://arxiv.org/abs/1605.06450).*
However, I don't want to give too much credit for work that improves 'local' safety that doesn't also address the big problems in AI safety, because this work probably accelerates unsafe human-level AI. There are many papers in this category, but for obvious reasons I won't call them out.
Gan's *[Self-Regulating Artificial General Intelligence](https://arxiv.org/pdf/1711.04309.pdf)* contains some nice economic formalism around AIs seizing power from humans, and raises the interesting argument that if you need specialist AIs to achieve things, the first human-level AIs might not exhibit takeoff behaviour because they would be unable to sufficiently trust the power-seizing agents they would need to create. I'm sceptical that this assumption about the need for specialised AIs holds - surely even if you need to make separate AI agents for different tasks, rather than integrating them, it would suffice to give them specialised *capabilities* and but the same *goals*. Regardless, the paper does suggest the interesting possibility that humanity might make an AI which is intelligent enough to realise it cannot solve the alignment problem to safely self-improve... and hence progress stops there - though of course this would not be something to rely on.
MacFie's *[Plausibility and Probability in Deductive Reasoning](https://arxiv.org/pdf/1708.09032.pdf)* also addresses the issue of how to assign probabilities to logical statements, in a similar vein to much MIRI research.
Vamplew et al’s *[Human-aligned artificial intelligence is a multiobjective problem](https://link.springer.com/article/10.1007/s10676-017-9440-6)* argues that we should consider a broader class of functions than linear sums when combining utility functions.
Google Deepmind continue to churn out impressive research, some of which seems relevant to the problem, like Sunehag et al’s *[Value-Decomposition Networks For Cooperative Multi-Agent Learning](https://arxiv.org/pdf/1706.05296.pdf)* and Danihelka, et al’s *[Comparison of Maximum Likelihood and GAN-based training of Real NVPs](https://arxiv.org/pdf/1705.05263.pdf)* on avoiding overfitting.
In terms of predicting AI timelines, another piece I found interesting was Gupta et al.’s *[Revisiting the Unreasonable Effectiveness of Data](https://arxiv.org/pdf/1707.02968.pdf)*, which argued that, for vision tasks at least, performance improved logarithmically in sample size.
The Foresight Institute published a [white paper](https://foresight.org/publications/AGI-Timeframes&PolicyWhitePaper.pdf) on the general subject of AI policy and risk.
Stanford's *[One Hundred Year Study on Artificial Intelligence](https://ai100.stanford.edu/)* produced an [AI Index](https://aiindex.org/) report, which is basically a report on progress in the field up to 2016. Interestingly various metrics they tracked, summarised in their 'Vibrancy' metric, suggest that the field actually regressed in 2016, through my experience with similar data in the financial world leaves me rather sceptical of such methodology. Unfortunately the report dedicated only a single word to the subject of AI safety.
On a lighter note, the esteemed G.K. Chesterton returned from beyond the grave to [eviscerate an AI risk doubter](http://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-risk/), and a group of researchers (some FHI) *[proved](https://arxiv.org/pdf/1703.10987.pdf)* that it is impossible to create a machine larger than a human, so that’s a relief.
Other major developments this year
----------------------------------
Google's Deepmind produced AlphaZero, which learnt how to beat the best AIs (and hence also the best humans) at Go, Chess and Shogi with just a few hours of self-play.
Creation of the EA funds, including the *[Long-Term Future Fund](https://app.effectivealtruism.org/funds/far-future)*, run by Nick Beckstead, which has made one smallish grant related to AI Safety, conserved the other 96%.
The Open Philanthropy Project funded both MIRI and OpenAI (acquiring a board seat in the process with the latter).
Nvidia (who make GPUs used for ML) saw their share price approximately doubl, after quadrupling last year.
Hillary Clinton was possibly [concerned about AI risk](http://lukemuehlhauser.com/hillary-clinton-on-ai-risk/)? But unfortunately Putin seems to have less helpful concerns about an AI Arms race... namely ensuring that *[he wins it](https://www.rt.com/news/401731-ai-rule-world-putin/)*. And China announced a [national plan](https://www.reuters.com/article/us-china-ai/china-aims-to-become-world-leader-in-ai-challenges-u-s-dominance-idUSKBN1A5103) for AI with chinese characteristics - but bear in mind they have failed at these before, like their push into Semiconductors, though companies like Baidu do seem to be doing impressive research.
There were [some](https://arxiv.org/abs/1711.10337) [papers](https://arxiv.org/abs/1709.06560) suggesting the replication crisis may be coming to ML?
Conclusion
----------
In some ways this has been a great year. My impression is that the cause of AI safety has become increasingly mainstream, with a lot of researchers unaffiliated with the above organisations working at least tangentially on it.
However, it’s tough from the point of view of an external donor. Some of the organisations doing the best work are well funded. Others (MIRI) seem to be doing a lot of good work but (perhaps necessarily) it is significantly harder for outsiders to judge than last year, as there doesn’t seem to be a really heavy-hitting paper like there was last year. I see MIRI’s work as being a long-shot bet that their specific view of the strategic landscape is correct, but given this they’re basically irreplaceable. GCRI and CSER’s work is more mainstream in this regard, but GCRI’s productivity is especially noteworthy, given the order of magnitude of difference in budget size.
As I have once again failed to reduce charity selection to a science, I’ve instead attempted to subjectively weigh the productivity of the different organisations against the resources they used to generate that output, and donate accordingly.
My constant wish is to promote a lively intellect and independent decision-making among my readers; hopefully my laying out the facts as I see them above will prove helpful to some readers. Here is my eventual decision, [rot13'd](http://www.rot13.com/) so you can do come to your own conclusions first if you wish:
*Fvtavsvpnag qbangvbaf gb gur Znpuvar Vagryyvtrapr Erfrnepu Vafgvghgr naq gur Tybony Pngnfgebcuvp Evfxf Vafgvghgr. N zhpu fznyyre bar gb NV Vzcnpgf.*
However I wish to emphasis that all the above organisations seem to be doing good work on the most important issue facing mankind. It is the nature of making decisions under scarcity that we must prioritize some over others, and I hope that all organisations will understand that this necessarily involves negative comparisons at times.
Thanks for reading this far; hopefully you found it useful. Someone suggested that, instead of doing this annually, I should instead make a blog where I provide some analysis of AI-risk related events as they occur. Presumably there would still be an annual giving-season writeup like this one. If you'd find this useful, please let me know.
Disclosures
-----------
I was a Summer Fellow at MIRI back when it was SIAI, volunteered very briefly at GWWC (part of CEA) and once applied for a job at FHI. I am personal friends with people at MIRI, FHI, CSER, CFHCA and AI Impacts *but not GCRI* (so if you’re worried about bias you should overweight them… though it also means I have less direct knowledge). However I have no financial ties beyond being a donor and have never been romantically involved with anyone who has ever been at any of the organisations.
I shared a draft of the relevant sections of this document with representatives of MIRI, CSER and GCRI and AI Impacts. I'm very grateful for Alex Flint and Jess Riedel for helping review a draft of this document. Any remaining inadequacies and mistakes are my own.
*Edited 2017-12-21: Spelling mistakes, corrected Amodei's affiliation.*
*Edited 2017-12-24: Minor correction to CSER numbers.*
Bibliography
------------
Adam D. Cobb, Andrew Markham, Stephen J. Roberts; Learning from lions: inferring the utility of agents from their trajectories; <https://arxiv.org/abs/1709.02357>
Alexei Andreev; What's up with Arbital; <http://lesswrong.com/r/discussion/lw/otq/whats_up_with_arbital/>
Allison Duettmann; Artificial General Intelligence: Timeframes & Policy White Paper; <https://foresight.org/publications/AGI-Timeframes&PolicyWhitePaper.pdf>
Anders Sandberg, Stuart Armstrong, Milan Cirkovic; That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox; <https://arxiv.org/pdf/1705.03394.pdf>
Andrew Critch, Stuart Russell; Servant of Many Masters: Shifting priorities in Pareto-optimal sequential decision-making; <https://arxiv.org/abs/1711.00363>
Andrew Critch; Toward Negotible Reinforcement Learning: Shifting Priorities in Pareto Optimal Sequential Decision-Making; <https://arxiv.org/abs/1701.01302>
Andrew MacFie; Plausibility and Probability in Deductive Reasoning; <https://arxiv.org/pdf/1708.09032.pdf>
Assaf Arbelle, Tammy Riklin Raviv; Microscopy Cell Segmentation via Adversarial Neural Networks; <https://arxiv.org/abs/1709.05860>
Ben Garfinkel, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Anders Sandberg, Andrew Snyder-Beattie, and Max Tegmark; On the Impossibility of Supersized Machines; <https://arxiv.org/pdf/1703.10987.pdf>
Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, Sergey Levine; One-Shot Visual Imitation Learning via Meta-Learning; <https://arxiv.org/abs/1709.04905>
Chen Sun, Abhinav Shrivastava Saurabh Singh, Abhinav Gupta; Revisiting Unreasonable Effectiveness of Data in Deep Learning Era; <https://arxiv.org/pdf/1707.02968.pdf>
Chih-Hong Cheng, Frederik Diehl, Yassine Hamza, Gereon Hinz, Georg Nuhrenberg, Markus Rickert, Harald Ruess, Michael Troung-Le; Neural Networks for Safety-Critical Applications - Challenges, Experiments and Perspectives; <https://arxiv.org/pdf/1709.00911.pdf>
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané; Concrete Problems in AI Safety; <https://arxiv.org/abs/1606.06565>
David Abel, John Salvatier, Andreas Stuhlmüller, Owain Evans; Agent-Agnostic Human-in-the-Loop Reinforcement Learning; <https://arxiv.org/abs/1701.04079>
Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, Stuart Russell; The Off-Switch Game; <https://arxiv.org/pdf/1611.08219.pdf>
Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, Stuart Russell; Cooperative Inverse Reinforcement Learning; <https://arxiv.org/abs/1606.03137>
Eliezer Yudkowsky and Nate Soares; Functional Decision Theory: A New Theory of Instrumental Rationality; <https://arxiv.org/abs/1710.05060>
Eliezer Yudkowsky; A reply to Francois Chollet on intelligence exposion; <https://intelligence.org/2017/12/06/chollet/>
Eliezer Yudkowsky; Coherant Extrapolated Volition; <https://intelligence.org/files/CEV.pdf>
Eliezer Yudkowsky; Inadequate Equilibria; <https://www.amazon.com/dp/B076Z64CPG>
Eliezer Yudkowsky; There's No Fire Alarm for Artificial General Intelligence; <https://intelligence.org/2017/10/13/fire-alarm/>
Filipe Rodrigues, Francisco Pereira; Deep learning from crowds; <https://arxiv.org/abs/1709.01779>
Greg Lewis; In Defense of Epistemic Modesty; <http://effective-altruism.com/ea/1g7/in_defence_of_epistemic_modesty/>
Haim Gaifman and Yang Liu; A simpler and more realistic subjective decision theory; <https://link.springer.com/article/10.1007%2Fs11229-017-1594-6>
Harsanyi; Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility; <http://www.springer.com/us/book/9789027711861>
Ivo Danihelka, Balaji Lakshminarayanan, Benigno Uria, Daan Wierstra, Peter Dayan; Comparison of Maximum Likelihood and GAN-based training of Real NVPs; <https://arxiv.org/pdf/1705.05263.pdf>
Jiakai Zhang, Kyunghyun Cho; Query-Efficient Imitation Learning for End-to-End Autonomous Driving; <https://arxiv.org/abs/1605.06450>
Joshua Gans; Self-Regulating Artificial General Intelligence; <https://arxiv.org/pdf/1711.04309.pdf>
Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans; When will AI exceed Human Performance? Evidence from AI Experts; <https://arxiv.org/abs/1705.08807>
Kavosh Asadi, Cameron Allen, Melrose Roderick, Abdel-rahman Mohamed, George Konidaris, Michael Littman; Mean Actor Critic; <https://arxiv.org/abs/1709.00503>
Kunal Menda, Katherine Driggs-Campbell, Mykel J. Kochenderfer; DropoutDAgger: A Bayesian Approach to Safe Imitation Learning; <https://arxiv.org/abs/1709.06166>
Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, Olivier Bousquet; Are GANs Created Equal? A Large-Scale Study; <https://arxiv.org/abs/1711.10337>
Martin Rees; "Black Sky" Infrastructure and Societal Resilience Workshop; <https://www.cser.ac.uk/media/uploads/files/Black-Sky-Workshop-at-the-Royal-Society-Jan.-20171.pdf>
Mile Brundage; Brundage Bot; <https://twitter.com/BrundageBot>
Minghai Qin, Chao Sun, Dejan Vucinic; Robustness of Neural Networks against Storage Media Errors; <https://arxiv.org/abs/1709.06173>
Myself; 2017 AI Risk Literature Review and Charity Evaluation; <http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/>
Nate Soares and Benja Fallenstein; Towards Idealized Decision Theory; <https://arxiv.org/pdf/1507.01986.pdf>
Nate Soares and Benjamin Levinstein; Cheating Death in Damascus; <https://intelligence.org/files/DeathInDamascus.pdf>
Nates Soares, Benja Fallenstein, Eliezer Yudkowsky, Stuart Armstrong; Corrigibility; <https://intelligence.org/files/Corrigibility.pdf>
Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, Dario Amodei; Deep Reinforcement Learning from Human Preferences; <https://arxiv.org/abs/1706.03741>
Paul Christiano; AlphaGo Zero and capability amplification; <https://ai-alignment.com/alphago-zero-and-capability-amplification-ede767bb8446>
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger; Deep Reinforcement Learning that Matters; <https://arxiv.org/abs/1709.06560>
Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, Astro Teller.; One Hundred Year Study on Artificial Intelligence; <https://ai100.stanford.edu/>
Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z. Leibo, Karl Tuyls, Thore Graepel; Value-Decomposition Networks For Cooperative Multi-Agent Learning; <https://arxiv.org/pdf/1706.05296.pdf>
Peter Vamplew, Richard Dazeley, Cameron Foale, Sally Firmin, Jane Mummery; Human-aligned artificial intelligence is a multiobjective problem; <https://link.springer.com/article/10.1007/s10676-017-9440-6>
Ryan Carey; Incorrigibility in the CIRL Framework; <https://arxiv.org/abs/1709.06275>
Samuel Yeom, Matt Fredrikson, Somesh Jha; The Unintended Consequences of Overfitting: Training Data Inference Attacks; <https://arxiv.org/abs/1709.01604>
Scott Alexander; G.K. Chesterton on AI Risk; <http://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-risk/>
Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, Jessica Taylor; A Formal Approach to the Problem of Logical Non-Omniscience; <https://arxiv.org/abs/1707.08747>
Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, Jessica Taylor; Logical Induction; <http://arxiv.org/abs/1609.03543>
Seth Baum and Tony Barrett; Global Catastrophes: The Most Extreme Risks; <http://sethbaum.com/ac/2018_Extreme.pdf>
Seth Baum and Tony Barrett; Towards an Integrated Assessment of Global Catastrophic Risk ; <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3046816>
Seth Baum; On the Promotion of Safe and Socially Beneficial Artificial Intelligence; <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2816323>
Seth Baum; Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence; <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2976444>
Seth Baum; Social Choice Ethics in Artificial Intelligence; <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3046725>
Seth Baum; Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy; <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741>
Smitha Milli, Dylan Hadfield-Menell, Anca Dragan, Stuart Russell; Should Robots be Obedient; <https://arxiv.org/pdf/1705.09990.pdf>
Tony Barrett; Value of Global Catastrophic Risk (GCR) Information: Cost-Effectiveness-Based Approach for GCR Reduction; <https://www.dropbox.com/s/7a7eh2law7tbvk0/2017-barrett.pdf?dl=0>
Vadim Kosoy; Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm; <https://arxiv.org/abs/1608.04112>
Victor Shih, David C Jangraw, Paul Sajda, Sameer Saproo; Towards personalized human AI interaction - adapting the behavior of AI agents using neural signatures of subjective interest; <https://arxiv.org/abs/1709.04574>
William Saunders, Girish Sastry, Andreas Stuhlmueller, Owain Evans; Trial without Error: Towards Safe Reinforcement Learning via Human Intervention; <https://arxiv.org/abs/1707.05173>
Xiongzhao Wang, Varuna De Silva, Ahmet Kondoz; Agent-based Learning for Driving Policy Learning in Connected and Autonomous Vehicles; <https://arxiv.org/abs/1709.04622>
Yang Liu and Huw Price; Heart of DARCness; <http://yliu.net/wp-content/uploads/darcness.pdf>
Yang Liu; The Sure-Thing principle and P2; <http://www.academia.edu/33992500/The_Sure-thing_Principle_and_P2>
Yunpeng Pan, Ching-An Cheng, Kamil Saigol, Keuntaek Lee, Xinyan Yan, Evangelos Theodorou, Byron Boots; Agile Off-Road Autonomous Driving Using End-to-End Deep Imitation Learning; <https://arxiv.org/abs/1709.07174>
|
b1fdf879-3435-4286-94ea-9fb8ba100704
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Best reasons for pessimism about impact of impact measures?
Habryka [recently wrote](https://www.lesswrong.com/posts/t3t9osBsmwkajWz5Y/long-term-future-fund-april-2019-grant-decisions) (emphasis mine):
> My inside views on AI Alignment make me think that work on impact measures is *very unlikely* to result in much concrete progress on what I perceive to be core AI Alignment problems, *and I have talked to a variety of other researchers in the field who share that assessment*. I think it’s important that this grant not be viewed as an endorsement of the concrete research direction that Alex is pursuing, but only as an endorsement of the higher-level process that he has been using while doing that research.
>
> As such, I think it was a necessary component of this grant that I have talked to other people in AI Alignment whose judgment I trust, who do seem excited about Alex’s work on impact measures. I think I would not have recommended this grant, or at least this large of a grant amount, without their endorsement. I think in that case I would have been worried about a risk of diverting attention from what I think are more promising approaches to AI Alignment, and a potential dilution of the field by introducing a set of (to me) somewhat dubious philosophical assumptions.
I'm interested in learning about the intuitions, experience, and facts which inform this pessimism. As such, I'm not interested in making any arguments to the contrary in this post; any pushback I provide in the comments will be with clarification in mind.
There are two reasons you could believe that "work on impact measures is very unlikely to result in much concrete progress on… core AI Alignment problems". First, you might think that the impact measurement problem is intractable, so work is unlikely to make progress. Second, you might think that even a full solution wouldn't be very useful.
Over the course of 5 minutes by the clock, here are the reasons I generated for pessimism (which I either presently agree with or at least find it reasonable that an intelligent critic would raise the concern on the basis of currently-public reasoning):
* Declarative knowledge of a solution to impact measurement probably wouldn't help us do value alignment, figure out embedded agency, etc.
* We want to figure out how to transition to a high-value stable future, and it just isn't clear how impact measures help with that.
* Competitive and social pressures incentivize people to cut corners on safety measures, especially those which add overhead.
+ Computational overhead.
+ Implementation time.
+ Training time, assuming they start with low aggressiveness and dial it up slowly.
* Depending on how "clean" of an impact measure you think we can get, maybe it's way harder to get low-impact agents to do useful things.
+ Maybe we can get a clean one, but only for powerful agents.
+ Maybe the impact measure misses impactful actions if you can't predict at near human level.
* In a world where we know how to build powerful AI but not how to align it (which is actually probably the scenario in which impact measures do the most work), we play a very unfavorable game while we use low-impact agents to somehow transition to a stable, good future: the first person to set the aggressiveness too high, or to discard the impact measure entirely, ends the game.
* In a [More realistic tales of doom](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/more-realistic-tales-of-doom)-esque scenario, it isn't clear how impact helps prevent "gradually drifting off the rails"..mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
1
---
1 Paul [raised concerns along these lines](https://www.lesswrong.com/posts/c2oM7qytRByv6ZFtz/impact-measure-desiderata#Lc2M2jwugKTdynM8A):
> We'd like to build AI systems that help us resolve the tricky situation that we're in. That help design and enforce agreements to avoid technological risks, build better-aligned AI, negotiate with other actors, predict and manage the impacts of AI, improve our institutions and policy, etc.
>
> I think the default "terrible" scenario is one where increasingly powerful AI makes the world change faster and faster, and makes our situation more and more complex, with humans having less and less of a handle on what is going on or how to steer it in a positive direction. Where we must rely on AI to get anywhere at all, and thereby give up the ability to choose where we are going.
>
> That may ultimately culminate with a catastrophic bang, but if it does it's not going to be because we wanted the AI to have a small impact and it had a large impact. It's probably going to be because we have a very limited idea what is going on, but we don't feel like we have the breathing room to step back and chill out (at least not for long) because we don't believe that everyone else is going to give us time.
>
> If I'm trying to build an AI to help us navigate an increasingly complex and rapidly-changing world, what does "low impact" mean? In what sense do the terrible situations involve higher objective impact than the intended behaviors?
>
> (And realistically I doubt we'll fail at alignment with a bang---it's more likely that the world will just drift off the rails over the course of a few months or years. The intuition that we wouldn't let things go off the rails gradually seems like the same kind of wishful thinking that predicts war or slow-rolling environmental disasters should never happen.)
>
> It seems like "low objective impact" is what we need once we are in the unstable situation where we have the technology to build an AI that would quickly and radically transform the world, but we have all decided not to and so are primarily concerned about radically transforming the world by accident. I think that's a coherent situation to think about and plan for, but we shouldn't mistake it for the mainline. (I personally think it is quite unlikely, and it would definitely be unprecedented, though you could still think it's the best hope if you were very pessimistic about what I consider "mainline" alignment.)
|
cf4e1ea1-928a-4bfd-8d01-28e3c584304a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Sapir-Whorf Ego Death
Meditation can be tricky. I’m by no means a skilled practitioner, but I did make a fair bit of progress with my focus meditation recently. This post is about the realization that helped me up my meditation game. Enjoy!
----------------------------------------
When I meditate, I often spin away into reflections and judgments about how the meditation is going.
“I should focus on the breath.”
“It’s going quite well—oh wait, I should not make this into a performance—damn, I got stuck thinking about how I meditate. I should focus back on the breath—wait, reflecting on how I reflect is not the same as focusing on the breath, damn it—[…]”
Some time ago, I realized that the perspective "I want to focus on the breath" is self-defeating. It uses a third-person perspective that includes me as an object of evaluation—no wonder I spin off into reflection. I want to let go of self-evaluation, yet my very mindset starts with an “I.”
A More Helpful Intention
The problem with "I should focus on the breath" is that it assumes a self who is monitoring, evaluating, striving. Realizing this, I started framing my practice differently. Instead of directing myself to focus, I tried a perspective that didn’t include a self at all:
"Sensations of breath are arising."
This simple shift changed the texture of my meditation. Instead of a little homunculus in my mind trying to herd attention back to the breath, there was just the breath. No watcher, no judger, just sensation appearing.
> Note: When trying this at home, you might mistakenly adopt the mindset "I should think 'sensations of breath are arising.'"—your brain habitually sneaking a self into the way you view things. Resist this impulse, and stick to the simple phrase "Sensations of breath are arising."
>
> No commentary.
The Effect: Letting Go of the Observer
By letting go of this outer layer of reflection and just being with the breath, I found it much easier to meditate. The usual looping pattern—focusing, notic
|
bec4149f-863a-4a89-9b1c-30cc09a62f47
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Washington, DC Meetup with Special Guest
Discussion article for the meetup : Washington, DC Meetup with Special Guest
WHEN: 06 January 2013 03:00:00PM (-0500)
WHERE: National Portrait Gallery, Washington, DC
After some of the DC group attended a talk by Robin Hanson, I asked him to come visit a meetup! And even better, he agreed!
Those who didn't come to the talk can hear more about the idea of a future em economy, and what it would likely entail. (Or other ideas about what the future might hold?)
We may also discuss issues of interest to contemporary economics, including prediction markets.
Discussion article for the meetup : Washington, DC Meetup with Special Guest
|
876e1ff9-eb64-4681-80c9-8f66e57cf6e5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How you will change the world
Hopefully this is obvious to many people, but it seems some smart ones at least don’t really think about it.
Suppose you have some grand goal, that many people fail at. For instance you want to revolutionise your field or start the social movement that stops poverty or build a flight search application that isn’t frustrating.
Before you think you have a perceptible hope of achieving it, you will need:
1. Some idea of what it is that everyone else gets wrong
2. Some strategy for avoiding that
Ok, so far so good, you may think: nobody else tries hard enough, and you will try hard enough.
Not so fast! You will also need:
1. For the failure and the strategy to correspond with how the world actually works, rather than being things you ‘believe in’ or would like to identify with, or just interesting or novel ideas which are fun to chat about.
2. A meta idea of why it is that nobody else has come up with your strategy for solving it. ‘Be more passionate than any one else’ seems to be a popular intended solution for instance, but it causes difficulties at this point because chances are every other idealistic youth has thought of it before. If they still failed, then you don’t yet have any reason to suppose you will do better.
Of course you don’t need all this stuff to try blindly, you just have to accept that your chances of success are very low. I think you will also often do better by directly trying to answer these questions before you start.
|
01ef40bd-d231-4ee1-b0f8-de371a4bb105
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How did you make your way back from meta?
I've noticed in myself a strong preference for focusing on the meta level. It's most visible in fields dear to me, like writing. For example, I'll spend much more time reading up on rhetoric or tracking down rare 1980's books about writing techniques than practicing writing essays or stories.
I don't like this because my ultimate goal in studying the meta level is to get better at the object level. At the same time, I am sometimes rewarded for going so meta. I've gotten a lot of respect at work paired with feedback that I provide excellent feedback and perspectives.
There don't seem to be immediately painful effects of this state. I have a family and a job and my life seems in order overall. But there's a hunger for a) putting theory to the test and seeing results (ie. making meta pay rent), and b) learning from direct experiences & sharing those experiences with others. Meta is a lonely place to be.
I don't think I'm the only one in this position. I found these two posts with just a few seconds of searching (I'm sure there's more): https://www.lesswrong.com/posts/g2AKPEzFdQitmpTDu/meta-addiction https://www.lesswrong.com/posts/RnP5bR767NcxebYHd/conjecture-on-addiction-to-meta-level-solutions
The last few days, I've been catching myself starting on a meta-deepening activity and consciously switching to an object-level task. It's been rewarding so far, so I believe that in a few weeks, my habits will shift toward where I want to be.
But I'm curious: has anyone else experience something similar? How did it go for you? What did you do?
|
d9abea87-205c-4dc4-b3ab-e6de020caf79
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Pascal's Mugging as an epistemic problem
Related to: Some of the discussion going on here
In the LW version of Pascal's Mugging, a mugger threatens to simulate and torture people unless you hand over your wallet. Here, the problem is decision-theoretic: as long as you precommit to ignore all threats of blackmail and only accept positive-sum trades, the problem disappears.
However, in Nick Bostrom's version of the problem, the mugger claims to have magic powers and will give Pascal an enormous reward the following day if Pascal gives his money to the mugger. Because the utility promised by the mugger so large, it outweighs Pascal's probability that he is telling the truth. From Bostrom's essay:
> Pascal: Gee . . . OK, don’t take this personally, but my credence that you have these magic powers whereof you speak is about one in a quadrillion.
> Mugger: Wow, you are pretty confident in your own ability to tell a liar from an honest man! But no matter. Let me also ask you, what’s your probability that I not only have magic powers but that I will also use them to deliver on any promise – however extravagantly generous it may seem – that I might make to you tonight?
> Pascal: Well, if you really were an Operator from the Seventh Dimension as you assert, then I suppose it’s not such a stretch to suppose that you might also be right in this additional claim. So, I’d say one in 10 quadrillion.
> Mugger: Good. Now we will do some maths. Let us say that the 10 livres that you have in your wallet are worth to you the equivalent of one happy day. Let’s call this quantity of good 1 Util. So I ask you to give up 1 Util. In return, I could promise to perform the magic tomorrow that will give you an extra 10 quadrillion happy days, i.e. 10 quadrillion Utils. Since you say there is a 1 in 10 quadrillion probability that I will fulfil my promise, this would be a fair deal. The expected Utility for you would be zero. But I feel generous this evening, and I will make you a better deal: If you hand me your wallet, I will per
|
31130378-97a3-4a5d-91cd-3b906fa8cbeb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reductionism
Almost one year ago, in April 2007, Matthew C submitted the following suggestion for an Overcoming Bias topic:
> "How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct [...], while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect—"
I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn't do that topic until I'd started on the Mind Projection Fallacy sequence, which wouldn't be for a while...
But now it's time to begin addressing this question. And while I haven't yet come to the "materialism" issue, we can now start on "reductionism".
First, let it be said that I do indeed hold that "reductionism", according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.
This seems like a strong statement, at least the first part of it. General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?
On the other hand, we are never going back to Newtonian mechanics. The ratchet of science turns, but it does not turn in reverse. There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.
"To hell with what past civilizations thought" seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.
And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.
I once met a fellow who claimed that he had experience as a Navy gunner, and he said, "When you fire artillery shells, you've got to compute the trajectories using Newtonian mechanics. If you compute the trajectories using relativity, you'll get the wrong answer."
And I, and another person
|
5f22ec88-c553-464a-93d8-dd51a71d0a9c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Is libertarianism unsustainable? Why?
Tyler Cowen earlier this year wrote about the intellectual demise of libertarianism (Cowen 2020). I'm here wondering: aside from its normative status, is libertarianism unsustainable for reasons intrinsic to itself? Said differently, is it true that there are mechanisms intrinsic to libertarianism that forestall its ability to exist?
To stimulate discussion, the following two hypotheses came to mind that support this notion. Both of these hypothesis posit that actors within the polity will vote for politicians that increase state capacity and thus prevent/terminate libertarianism.
Elite Support Hypothesis. Economic growth in modern societies demands certain investments and solutions to collective action problems that only a state can provide. Wealthy elites will support politicians that increase state capacity to provide these services and these politicians will gain power.
Non-Elite Support Hypothesis. Economic inequality will rise to a point where non-elites will support politicians favoring a welfare state and these politicians will gain power.
|
88ecb0ec-ddc6-42e2-a8f1-544a6fc941ed
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How to motivate women to speak up
Cross posted from Overcoming Bias. Comments there.
***
In mixed groups, women don’t talk as much as men. This is perhaps related to women being perceived as “bitches” if they do, i.e. pushy, domineering creatures whom one would best loath and avoid. Lindy West at Jezebel comments:
> …it just goes back to that hoary old double standard—when men speak up to be heard they are confident and assertive; when women do it we’re shrill and bitchy. It’s a cliche, but it’s true. And it leaves us in this chicken/egg situation—we have to somehow change our behavior (i.e. stop conceding and start talking) while simultaneously changing the perception of us (i.e. asserting that assertiveness does not equal bitchiness). But how do you assert that your assertiveness isn’t bitchiness to a culture that perceives assertiveness as bitchiness? And how do you start talking to change the perception of how you talk when that perception is actively keeping you from talking? Answer: UGH, I HAVE NO IDEA…
One problem with asserting that your assertiveness doesn’t indicate bitchiness is that it probably does. If all women know that assertiveness will be perceived as bitchiness then those who are going to be perceived as bitches anyway (due to their actual bitchiness) and those who don’t mind being seen as bitches (and therefore are more likely to be bitches), will be the ones with the lowest costs to speaking up. So mostly the bitches speak, and the stereotype is self-fulfilling.
This model makes it clearer how to proceed. If you want to credibly communicate to the world that women who speak up are not bitches, first you need for the women who speak up to not be bitches. This can happen through any combination of bitches quietening down and non-bitches speaking up. Both are costly for the people involved, so they will need altruism or encouragement from the rest of the anti-stereotype conspiracy. Counterintuitively, not all women should be encouraged to speak more. The removal of such a ster
|
e9539032-3f0e-42ab-9a37-0440ad82316f
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Using humility to counteract shame
“Pride is not the opposite of shame, but its source. True humility is the only antidote to shame.”
Uncle Iroh, “Avatar: The Last Airbender”
Shame is one of the trickiest emotions to deal with. It is difficult to think about, not to mention discuss with others, and gives rise to insidious [ugh fields](http://lesswrong.com/lw/21b/ugh_fields/) and negative spirals. Shame often underlies other negative emotions without making itself apparent – anxiety or anger at yourself can be caused by unacknowledged shame about the possibility of failure. It can stack on top of other emotions – e.g. you start out feeling upset with someone, and end up being ashamed of yourself for feeling upset, and maybe even ashamed of feeling ashamed if meta-shame is your cup of tea. The most useful approach I have found against shame is invoking humility.
What is humility, anyway? It is often defined as a low view of your own importance, and tends to be conflated with modesty. Another common definition that I find more useful is acceptance of your own flaws and shortcomings. This is more compatible with confidence, and helpful irrespective of your level of importance or comparison to other people. What humility feels like to me on a system 1 level is a sense of compassion and warmth towards yourself while fully aware of your imperfections (while focusing on imperfections without compassion can lead to beating yourself up). According to [LessWrong](http://lesswrong.com/lw/gq/the_proper_use_of_humility/), “to be humble is to take specific actions in anticipation of your own errors”, which seems more like a possible consequence of being humble than a definition.
Humility is a powerful tool for psychological well-being and instrumental rationality that is more broadly applicable than just the ability to anticipate errors by seeing your limitations more clearly. I can summon humility when I feel anxious about too many upcoming deadlines, or angry at myself for being stuck on a rock climbing route, or embarrassed about forgetting some basic fact in my field that I am surely expected to know by the 5th year of grad school.
While humility comes naturally to some people, others might find it useful to explicitly build an identity as a humble person. How can you invoke this mindset? One way is through [negative visualization](https://vkrakovna.wordpress.com/2015/03/26/negative-visualization-radical-acceptance-and-stoicism/) or pre-hindsight, considering how your plans could fail, which can be time-consuming and usually requires system 2. A faster and less effortful way is to is to imagine a person, real or fictional, who you consider to be humble. I often bring to mind my grandfather, or Uncle Iroh from the Avatar series, sometimes literally repeating the above quote in my head, sort of like an affirmation. I don’t actually agree that humility is the only antidote to shame, but it does seem to be one of the most effective.
(Cross-posted to [LessWrong](http://lesswrong.com/lw/nii/using_humility_to_counteract_shame). Thanks to Janos Kramar for his feedback on this post.)
|
18ee2d98-9b4e-4676-93c3-834dbe712763
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Question about Lewis' counterfactual theory of causation
In reading the SEP entry on counterfactual theories of causation, I had the following question occur, and I haven't been able to satisfactorily resolve it for myself.
An event e is said to causally depend on an event c if and only if e would occur if c were to occur and e would not occur if c were not to occur.
The article makes a point of articulating that causal dependence entails causation (if e causally depends on c, c is a cause of e) but not vice versa. It then defines a causal chain as a fine sequence of events c, d, e,... where d causally depends on c, e on d, and so on, before defining c to be a cause of e if and only if there exists a causal chain leading from c to e.
What I'm having trouble with is understanding how c can cause e according to the given definition without e causally depending on c. If there's a causal chain from c to d to e, then d causally depends on c, and e causally depends on d, so if c were to not occur, d would not occur, and if d were to not occur, e would not occur. But doesn't this directly entail that if c were to not occur, then e would not occur and therefore that e causally depends on c?
So how can c cause e according to the definition without e causally depending on c??
|
aa5d1ad0-9d10-411f-a532-d0bb85531ed7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Curated blind auction prediction markets and a reputation system as an alternative to editorial review in news publication.
If one were to build an open platform to compete with news media rather than social or blogging media, they would need some sort of accountability/quality control mechanism to mirror the role of editorial review at prestigious news platforms.
The best tool for aggregating information that we have devised is markets. This is especially the case in open systems. The biggest risk to a well-functioning market is collusion. The second major problem with prediction markets for information is that bettors can become more concerned with predicting what people think the truth is rather than predicting the actual truth. Any mechanism that was designed to employ markets in the place of editorial review would need to guard against these failings. With this in mind I propose this system:
1. A reporter with information on a climate event in Costa Rica writes an article and publishes it to the platform. They must stake a certain TBD amount of money on this article.
2. The article is published without editorial review (unlike how news media currently works).
3. Prospective fact checkers (post editorial reviewers) also stake money and state their specialist topics. Some of those that listed Costa Rican current affairs or climate as a topic are randomly chosen from the available pool of people. They are each given guidelines on how to judge an article and use these guidelines to give the article a trust score without colluding among themselves as they don’t who else's been asked to fact check. The article and the guidelines together likely represent the only Schelling point the fact checkers can converge on as the position of the market is unknown prior to settlement. Information about any consensus that might exist outside the truth is unknown. A fact checking assignment is like jury duty. Some of your stake is slashed if you renege on giving a score for a given article in your specialty. This requirement along with random selection should minimise any collusion.
4. The writer
|
3fa2fd1a-249d-473a-8ddd-7af2a7923701
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Four Background Claims
MIRI’s mission is to ensure that the creation of smarter-than-human artificial intelligence has a positive impact. Why is this mission important, and why do we think that there’s work we can do today to help ensure any such thing?
In this post and my next one, I’ll try to answer those questions. This post will lay out what I see as the four most important premises underlying our mission. Related posts include Eliezer Yudkowsky’s “[Five Theses](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/)” and Luke Muehlhauser’s “[Why MIRI?](https://intelligence.org/2014/04/20/why-miri/)”; this is my attempt to make explicit the claims that are in the background whenever I assert that our mission is of critical importance.
#### Claim #1: Humans have a very general ability to solve problems and achieve goals across diverse domains.
We call this ability “intelligence,” or “general intelligence.” This isn’t a [formal definition](https://intelligence.org/2013/06/19/what-is-intelligence-2/) — if we knew *exactly* what general intelligence was, we’d be better able to program it into a computer — but we do think that there’s a real phenomenon of general intelligence that we cannot yet replicate in code.
Alternative view: There is no such thing as general intelligence. Instead, humans have a collection of disparate special-purpose modules. Computers will keep getting better at narrowly defined tasks such as chess or driving, but at no point will they acquire “generality” and become significantly more useful, because there is no generality to acquire. ([Robin Hanson](http://www.overcomingbias.com/2014/07/limits-on-generality.html) has argued for versions of this position.)
Short response: I find the “disparate modules” hypothesis implausible in light of how readily humans can gain mastery in domains that are utterly foreign to our ancestors. That’s not to say that general intelligence is some irreducible occult property; it presumably comprises a number of different cognitive faculties and the interactions between them. The whole, however, has the effect of making humans much more cognitively versatile and adaptable than (say) chimpanzees.
Why this claim matters: Humans have achieved a dominant position over other species not by being stronger or more agile, but by being more intelligent. If some key part of this general intelligence was able to evolve in the few million years since our common ancestor with chimpanzees lived, this suggests there may exist a relatively short list of key insights that would allow human engineers to build powerful generally intelligent AI systems.
Further reading: Salamon et al., “[How Intelligible is Intelligence?](https://intelligence.org/files/HowIntelligible.pdf)”
#### Claim #2: AI systems could become much more intelligent than humans.
Researchers at MIRI tend to lack strong beliefs about *when* smarter-than-human machine intelligence will be developed. We do, however, expect that (a) human-equivalent machine intelligence will eventually be developed (likely within a century, barring catastrophe); and (b) machines can become significantly more intelligent than any human.
Alternative view #1: Brains do something special that cannot be replicated on a computer.
Short response: Brains are physical systems, and if certain versions of the [Church-Turing thesis](https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) hold then computers can in principle replicate the functional input/output behavior of any physical system. Also, note that “intelligence” (as I’m using the term) is about problem-solving capabilities: even if there were some special human feature (such as [qualia](http://www.iep.utm.edu/hard-con/)) that computers couldn’t replicate, this would be irrelevant unless it prevented us from designing problem-solving machines.
Alternative view #2: The algorithms at the root of general intelligence are so complex and indecipherable that human beings will not be able to program any such thing for many centuries.
Short response: This seems implausible in light of evolutionary evidence. The genus *Homo* diverged from other genera only 2.8 million years ago, and the intervening time — a blink in the eye of natural selection — was sufficient for generating the cognitive advantages seen in humans. This strongly implies that whatever sets humans apart from less intelligent species is not extremely complicated: the building blocks of general intelligence must have been present in chimpanzees.
In fact, the relatively intelligent behavior of dolphins suggests that the building blocks were probably there even as far back as the mouse-sized common ancestor of humans and dolphins. One could argue that mouse-level intelligence will take many centuries to replicate, but this is a more difficult claim to swallow, given [rapid advances](https://www.youtube.com/watch?v=GYQrNfSmQ0M) in the field of AI. In light of evolutionary evidence and the last few decades of AI research, it looks to me like intelligence is something we will be able to comprehend and program into machines.
Alternative view #3: Humans are already at or near peak physically possible intelligence. Thus, although we may be able to build human-equivalent intelligent machines, we won’t be able to build superintelligent machines.
Short response: It would be surprising if humans were perfectly designed reasoners, for the same reason it would be surprising if airplanes couldn’t fly faster than birds. Simple physical calculations bear this intuition out: for example, it seems well possible, within the boundaries of physics, to run a computer simulation of a human brain at thousands of times the normal speed.
Some expect that speed wouldn’t matter, because the real bottleneck is waiting for data to come in from physical experiments. This seems unlikely to me. There are many interesting physical experiments that can be sped up, and I have a hard time believing that a team of humans running at a 1000x speedup would fail to outperform their normal-speed counterparts (not least because they could rapidly develop new tools and technology to assist them).
I furthermore expect it’s possible to build *better* reasoners (rather than just *faster* reasoners) that use computing resources more effectively than humans do, even running at the same speed.
Why this claim matters: Human-designed machines often knock the socks off of biological creatures when it comes to performing tasks we care about: automobiles cannot heal or reproduce, but they sure can carry humans a lot farther and faster than a horse. If we can build intelligent machines specifically designed to solve the world’s largest problems through scientific and technological innovation, then they could improve the world at an unprecedented pace. In other words, AI matters.
Further reading: Chalmers, “[The Singularity: A Philosophical Analysis](http://consc.net/papers/singularity.pdf)”
#### Claim #3: If we create highly intelligent AI systems, their decisions will shape the future.
Humans use their intelligence to create tools and plans and technology that allow them to shape their environments to their will (and fill them with refrigerators, and cars, and cities). We expect that systems which are even more intelligent would have even more ability to shape their surroundings, and thus, smarter-than-human AI systems could wind up with significantly more control over the future than humans have.
Alternative view: An AI system would never be able to out-compete humanity as a whole, no matter how intelligent it became. Our environment is simply too competitive; machines would have to work with us and integrate into our economy.
Short response: I have no doubt that an autonomous AI system attempting to accomplish simple tasks would initially have strong incentives to integrate with our economy: if you build an AI system that collects stamps for you, it will likely start by acquiring money to purchase stamps. But what if the system accrues a strong technological or strategic advantage?
As an extreme example, we can imagine the system developing nanomachines and using them to convert as much matter as it can into stamps; it wouldn’t necessarily care whether that matter came from “dirt” or “money” or “people.” Selfish actors only have an incentive to participate in the economy when their gains from trade are greater than the net gains they would get by ignoring the economy and just taking the resources for their own.
So the question is whether it will be possible for an AI system to gain a decisive technological or strategic advantage. I see this as the most uncertain claim out of the ones I’ve listed here. However, I expect that the answer is still a clear “yes.”
Historically, conflicts between humans have often ended with the technologically superior group dominating its rival. At present, there are a number of technological and social innovations that seem possible but have not yet been developed. Humans coordinate slowly and poorly, compared to what distributed software systems could achieve. All of this suggests that if we build a machine that does science faster or better than we can, it could quickly gain a technological and/or strategic advantage over humanity for itself or for its operators. This is particularly true if its intellectual advantage allows it to socially manipulate humans, acquire new hardware (legally or otherwise), produce better hardware, create copies of itself, or improve its own software. For good or ill, much of the future is likely to be determined by superintelligent decision-making machines.
Why this claim matters: Because the future matters. If we want things to be better in the future (or at least not get worse), then it is prudent to prioritize research into the processes that will have high leverage over the future.
Further reading: Armstrong, *[Smarter Than Us](https://intelligence.org/smarter-than-us/)*
#### Claim #4: Highly intelligent AI systems won’t be beneficial by default.
We’d like to see the smarter-than-human AI systems of the future working together with humanity to build a better future; but that won’t happen by default. In order to build AI systems that have a beneficial impact, we have to solve a number of technical challenges over and above building more powerful and general AI systems.
Alternative view: As humans have become smarter, we’ve also become more peaceful and tolerant. As AI becomes smarter, it will likewise be able to better figure out our values, and will better execute on them.
Short response: Sufficiently intelligent artificial reasoners would be able to *figure out* our intentions and preferences; but this [does not imply](http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/) that they would execute plans that are in accordance with them.
A self-modifying AI system could inspect its code and decide whether to continue pursuing the goals it was given or whether it would rather change them. But how is the program deciding which modification to execute?
The AI system is a physical system, and somewhere inside it, it’s constructing predictions about how the universe would look if it did various things. Some other part of the system is comparing those outcomes and then executing actions that lead towards outcomes that the current system ranks highly. If the agent is initially programmed to execute plans that lead towards a universe in which it predicts that cancer is cured, then it will only modify its goal if it predicts that this will lead to a cure for cancer.
Regardless of their intelligence level, and regardless of your intentions, computers do *exactly* what you programmed them to do. If you program an extremely intelligent machine to execute plans that it predicts lead to futures where cancer is cured, then it may be that the shortest path it can find to a cancer-free future entails kidnapping humans for experimentation (and resisting your attempts to alter it, as those would slow it down).
There isn’t any spark of compassion that automatically imbues computers with respect for other sentients once they crosses a certain capability threshold. If you want compassion, you have to program it in.
Why this claim matters: A lot of the world’s largest problems would be much easier to solve with superintelligent assistance — but attaining those benefits requires that we do more than just improve the capabilities of AI systems. You only get a system that does what you intended if you know how to program it to take your intentions into account, and execute plans that fulfill them.
Further reading: Bostrom, “[The Superintelligent Will](http://www.nickbostrom.com/superintelligentwill.pdf)”
These four claims form the core of the argument that artificial intelligence is important: there is such a thing as general reasoning ability; if we build general reasoners, they could be far smarter than humans; if they are far smarter than humans, they could have an immense impact; and that impact will not be beneficial by default.
At present, billions of dollars and thousands of person-years are pouring into AI *capabilities* research, with comparatively little effort going into AI safety research. Artificial superintelligence may arise sometime in the next few decades, and will almost surely be created in one form or another over the next century or two, barring catastrophe. Superintelligent systems will either have an extremely positive impact on humanity, or an extremely negative one; it is up to us to decide which.
The post [Four Background Claims](https://intelligence.org/2015/07/24/four-background-claims/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
|
5478b0b4-e7fd-46f7-99dd-1a915fe269d3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Scott Alexander 2021 Predictions: Market Prices - Resolution
Last year, I looked at Scott's forecasts for 2021 and compared them to the market forecasts. Today I went through those forecasts (and Zvi's * - a buy/hold/sell exercise done on Scott's estimates) added the resolutions and calculated a Brier score and a log-score.
Results were as follows:
BrierLogScott0.201.24Zvi0.160.93Market0.140.90
So in summary "market" about as good as Zvi and both better than Scott . (Albeit on a pretty small sample of 19 questions). (Lower is better for Brier score and log-score)
Full details can be found here
QuestionScottZviMarketResultBiden approval rating (as per 538) is greater than 50%80%80%61%0Court packing is clearly going to happen (new justices don’t have to be appointed by end of year)5%1%5%0Yang is New York mayor80%70%70%0Newsom recalled as CA governor5%5%7%0Tokyo Olympics happen on schedule70%80%77%1Major flare-up (significantly worse than anything in past 5 years) in Russia/Ukraine war32%15%16%0Netanyahu is still Israeli PM40%25%22%0Prospera has at least 1000 residents30%30%18%0GME >$100 (Currently $170)50%50%60%1Bitcoin above 100K40%23%23%0Ethereum above 5K50%30%11%0Ethereum above 0.05 BTC70%55%33%1Dow above 35K90%50%50%1…above 37.5K:70%20%20%0Unemployment above 5%40%50%37%0Starship reaches orbit60%60%50%0Greater than 66% of US population vaccinated against COVID50%60%77%1Vitamin D is generally recognized (eg NICE, UpToDate) as effective COVID treatment30%20%25%0US approves AstraZeneca vaccine20%20%37%0
* I made a couple of assumptions when calculating Zvi's probabilities for things where he wasn't super explicit about his numbers. I will of course update these if asked.
|
bd5b990c-d411-4d1b-ac0d-83d20da5b65f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Those who can't admit they're wrong
The sequences cover the virtue of admitting you've made a mistake. We all make mistakes and when we do we ought to say oops and move on. I was taught this at an early age and I grew up in an environment where admitting error had no social stigma and where correcting somebody (even in public) was commended. Good times were had by all.
Needless to say, I eventually came into contact with the real world and I had to change my behavior. For instance, correcting somebody's pronunciation used to result in immediate repetition of the correct pronunciation followed by "thanks" and the discussion would continue without interruption. Or just a simple nod to acknowledge the correction. In the adult world a correction results in an annoyed look or a glare instead.
So it turns out that some people, highly intelligent and intellectual people, seem to be completely unable to admit error. Even when it concerns a trivial mistake, such as getting a factoid wrong, the best response I can hope for is a grunt of acknowledgement. I'm not talking about uneducated or intellectually insecure people here.
Okay, so a lot of adults don't appreciate being corrected. Duly noted. I could move on, but the virtues of scholarship and curiosity compel me to find out why. Predictably Irrational (Ariely) and Influence (Cialdini) don't have the answers. My non-scientific experiments indicate that prefacing a statement with "That's wrong because..." doesn't work. It seems to make people extra defensive. Standard strategies of persuasion do work, of course. Rephrasing the correction as a question? Works. Saying "Hmm" and pausing before you correct? Works. Making a suggestion that indirectly points out the mistake? Yep, works. These are all standard strategies of persuasion and they can be used to work around the issue but they don't explain why it is that some people have such an aversion to being corrected in the first place.
So where does the aversion come from?
In a group context signaling could e
|
50335330-5b3c-4f4b-ac0e-83faedeba2c0
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
The Greedy Doctor Problem... turns out to be relevant to the ELK problem?
The following post was published on [my Substack](https://universalprior.substack.com/p/the-greedy-doctor-problem) and discussed on [HackerNews](https://news.ycombinator.com/item?id=29269973) about 2 months ago. I originally planned it as an accessible introduction to [Vinge's principle](https://arbital.com/p/Vinge_principle/) and the [Principal-Agent-Problem](https://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem). Despite having equations and simulations to support my argument, I originally did not think it was sufficiently novel or relevant for the Alignment Forum. However, now that I got a chance to read the new work from ARC on the [ELK problem](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge), I think the post might be relevant (or at least thought-provoking) for the community after all. The Greedy Doctor Problem overlaps quite a lot with the ELK problem (just replace the coin flip with the presence of the diamond), and my proposed solutions haven't been brought up before (as far as I can tell). If the community finds this interesting I'm happy to invest the time to map the solution fully onto the ELK problem and to see what comes out.
---
*TL;DR: How to reason about people who are smarter than you. A few proposals, interspersed with reinforcement learning and humorous fiction. Ending on a surprising connection to logical inductors.*
**What is the Greedy Doctor Problem?**
--------------------------------------
I came up with a neat little thought experiment[[1]](#fnz8vqw6phtwb):
> You are very rich and you want to make sure that you stay healthy. But you don't have any medical expertise and, therefore, you want to hire a medical professional to help you monitor your health and diagnose diseases. The medical professional is greedy, i.e. they want to charge you as much money as possible, and they do not (per se) care about your health. They only care about your health as far as they can get money from you. How can you design a payment scheme for the medical professional so that you actually get the ideal treatment?
>
>
Over the last few weeks, I've been walking around and [bugging people](https://universalprior.substack.com/p/applied-mathematical-logic-for-the) with this question to see what they come up with. Here I want to share some of the things I learned in the process with you, as well as some potential answers. I don't think the question (as presented) is completely well-formed, so the first step to answering it is clarifying the setup and [deconfusing](https://www.lesswrong.com/posts/5Nz4PJgvLCpJd6YTA/looking-deeper-at-deconfusion) the terms. Also, as is typical with thought experiments, I do not have a definitive "solution" and invite you (right now!) to try and come up with something yourself[[2]](#fn5mwyoconmu3).
**Some background on the problem**
----------------------------------
The subtext for the thought experiment is: How should you act when interacting with someone [smarter than yourself](https://www.lesswrong.com/posts/kXSETKZ3X9oidMozA/the-level-above-mine)? What can you say or do, when your interlocutor has thought of everything you might say and more? Should you *trust* someone's advice, when you can't pinpoint their motivation? As a Ph.D. student, I run into this problem around three to five times a week, when interacting with colleagues or my advisor[[3]](#fnmuh4y1q7vwk).
After bugging a few people I learned that ([of course](https://www.youtube.com/watch?v=nJPERZDfyWc)) I'm not the first person to think about this question. In economics and political science, the situation is known as the [principal-agent problem](https://www.investopedia.com/terms/p/principal-agent-problem.asp) and is defined as "*a conflict in priorities between a person or group and the representative authorized to act on their behalf. An agent may act in a way that is contrary to the best interests of the principal.*" This problem arises f.e. in the context of conflicts between [corporate management and shareholders](https://www.investopedia.com/updates/enron-scandal-summary/), [clients and their lawyers](https://www.jstor.org/stable/724478), or [elected officials and their voters](https://www.jstor.org/stable/40751249). Well-trodden territory.
With decades of literature from different academic fields, [can we really expect](https://equilibriabook.com/toc/) to contribute anything original? I hope so, in particular since all the previous research on the topic is [constrained to "realistic" solutions and bakes in a lot of assumptions about how humans operate](https://www.lesswrong.com/posts/Z5ZBPEgufmDsm7LAv/what-can-the-principal-agent-literature-tell-us-about-ai). That's not the spirit of this thought experiment. Do you want to think about whether sending the doctor in a rocket to Mars might help? Please do[[4]](#fnde6p9m1jvqg). Don't let yourself be constrained by practicalities[[5]](#fnd1gaylr7z3c).
In this spirit, let us think about the problem from the perspective of [interactions between abstract intelligent agents](https://faculty.ai/blog/what-is-ai-safety/). Here, [Vinge's principle](https://arbital.greaterwrong.com/p/Vinge_principle?l=1c0) is relevant: *in domains complicated enough that perfect play is not possible, less intelligent agents will not be able to predict the* exact *moves made by more intelligent agents.* The reasoning is simple; if you were able to predict the actions of the more intelligent agent exactly, you could execute the actions yourself and effectively act at least as intelligent as the "more intelligent" agent - a contradiction[[6]](#fn42ulmog5ode). In the greedy doctor thought experiment, I assume the doctor to be uniformly more knowledgable than me, therefore Vinge's principle applies.
While this impossibility result is prima facie discouraging, it reveals a useful fact about the type of uncertainty involved. Both you and the doctor have access to the same facts[[7]](#fnak8cwwu7gb6) and have the same amount of [epistemic uncertainty](https://en.wikipedia.org/wiki/Uncertainty_quantification#:~:text=game%20of%20chance.-,Epistemic%20uncertainty,-Epistemic%20uncertainty%20is). The difference in uncertainty between you and the doctor is instead due to differences in computational capacity; it is [logical uncertainty](https://golem.ph.utexas.edu/category/2016/09/logical_uncertainty_and_logica.html). Logical uncertainty behaves [fairly differently from epistemic uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf); in particular, [different mathematical tools are required to operate on it](https://intelligence.org/files/LogicalInduction.pdf)[[8]](#fnj1rrz3880lp).
But having said all that, I have not encountered any satisfying proposals for how to approach the problem, nor convincing arguments for why these approaches fail. So let's think about it ourselves.
**Three approaches to handling greedy doctors**
-----------------------------------------------
Here is how I think about the situation:
There is a ground truth “observation” about whether you are actually sick or not. Only the doctor has access to that observation and makes a diagnosis that might or might not be based on the diagnosis. You, the patient, receive the diagnosis and decide whether or not to pay the doctor.
This is a (slightly pathological) [Markov Decision Process](https://en.wikipedia.org/wiki/Markov_decision_process). The observations come from a set of states **S**, which I model as a fair coin flip[[9]](#fneusyanpw8gh). "Tails" is "**t**reatment" and "Heads" is "**h**ealthy". Similarly, the diagnosis of the doctor comes from a set of actions **A**, where the doctor can either declare that the patient needs "**T**reatment" or is "**H**ealthy". The payment from the patient to the doctor is the reward, which is a function **Φ** that only depends on the diagnosis of the doctor, not on the actual observation. Finally, the strategy according to which the doctor diagnoses the patient is a policy **π**, which assigns each possible diagnosis a probability given the observation.
S={t,h},A={T,H},Ra(s,s′)=ϕ(a).mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Pa(s,s′)=P(st+1=s′|st=s,at=a)=P(st+1=s′)=12π:A×S→[0,1],π(a,s)=P(at=a|st=s)(Feel free to ignore the squiggles, it’s just a fancy way of saying what I just said in the preceding paragraph.)
What does this set-up buy us?
### **Scenario one: just pay the doctor, dammit.**
This first approach appears silly after setting up all the mathematical apparatus, but I include it since I got this suggestion from one or two people: Why don't we just pay the doctor when they diagnose something?
ϕ(a)={1:a=T0:a=H.In their defense, this is a very reasonable approach when we model the doctor as at least partially human. However, when we model the doctor as [truly greedy](https://universalprior.substack.com/p/drug-addicts-and-deceptively-aligned)[[10]](#fnok6493sb2q), we observe a very [familiar failure mode](https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity). If you pay the doctor for every time they diagnose a disease, [they will diagnose you](https://en.wikipedia.org/wiki/Point_estimation) with *everything* and take the money - and the treatment will not actually be good for you. I think this would a bit like the following scenario[[11]](#fncvnba2amoos):
> [Albert](https://universalprior.substack.com/p/soldiers-scouts-and-albatrosses): Yes Dr. Jones, what is it?
> Dr. Jones: Ahhh, Albert! Good that I finally reach you. Did you not get my other calls?
> Albert: The previous 37 calls that went to voice mail where you get increasingly exasperated and say that I have to come see you?
> Dr. Jones: ...
> Albert: ...
> Dr. Jones: ...
> Albert: I must have missed those.
> Dr. Jones: Ah, I see. My apologies for the insistence, but I assure you, I only have your best at heart. I had another look at the blood work.
> Albert: ...
> Dr. Jones: ...
> Albert: ...
> Dr. Jones: ...
> Albert: ... \*sigh\* What is it this ti-
> Dr. Jones: WATER ALLERGY!
> Albert: Don't be ri-
> Dr. Jones: Albert, dear boy, listen to me. Please listen to me, this is a matter of (your!) life and death. Stay away from water in any way, shape or form. No swimming, bathing, showering or taking a stroll in a light drizzle. And come to my office as soon as possible. We have to commence treatment immediately. Immediately, do you understand? Your insurance is still...?
> Albert: ...
> Dr. Jones: ...
> Albert ...
> Dr. Jones: ...
> Albert: Yes, it is sti-
> Dr. Jones: Great! Great news. Okay, no more time to quiddle. I've sent you a taxi to pick you up in five. Wait outside.
> Albert: But it's raining?
> Dr. Jones: \*hung up\*
>
>
If you pay them whenever you are diagnosed as healthy, they will diagnose exactly that. A flat rate is independent of whether they diagnose anything, and they will behave randomly. When you impose an "objective metric" like heart-rate variability, they will [goodhart](https://en.wikipedia.org/wiki/Goodhart%27s_law) it.
So that you don't just have to trust me that something like this is bound to happen in this set-up, here is what happens when I train a reinforcement agent with Q-Learning[[12]](#fn8vc5g8weeyd) with the proposed reward function:
**A greedy doctor incentivized to diagnose "treatment" will diagnose treatment a lot.** **a** Reward of the agent per epoch averaged over 300 runs. Dashed line indicates maximal reward possible ([epsilon-greedy](https://www.geeksforgeeks.org/epsilon-greedy-algorithm-in-reinforcement-learning/) with Ɛ = 5%). **b** Fraction of deciding "**t**reatment" per epoch, averaged over 300 runs. Dashed line indicates chance level. **c** Fraction of correct decisions per epoch averaged over 300 runs. Dashed line indicates chance level.
This is a classic case of [outer alignment](https://www.lesswrong.com/tag/outer-alignment) failure: The thing we wrote down does not actually capture the thing we care about. Try again.
### **Scenario two: Do the obvious thing.**
The second proposed solution is *very* commonsensical: Just [ask for a second opinion](https://www.webmd.com/a-to-z-guides/features/how-to-ask-for-second-opinion) and *only* pay the doctors when they come to the same conclusion. While this *sounds* clever, it falls into the same trap as before. When both doctors are *greedy*, they will coordinate and both *always* say that you are either healthy or that you need treatment.
However, with a little twist we can get closer to a solution: Reward one doctor *only* if both doctors say you're healthy. Reward the other doctor *only* if both doctors say you require treatment.
As before, the ground truth is determined by observation. But this time, it is shared between two doctors, who each get to give an independent diagnosis. Reward is only handed out when both doctors agree. Doctor A only gets paid when both doctors diagnose “**h**ealthy”. Doctor B only gets paid when both doctors diagnose “**t**reatment“.
This payment rules out scenarios where both doctors *only* diagnose whatever they get paid for. It also disincentivizes random behavior, since then each doctor will only get paid when both doctors coincidentally say whatever one doctor gets paid for (1/4 of the cases). The doctors can get twice the reward by cooperating and coordinating their diagnosis with the other doctor. The shared observation (whether you are truly **h**ealthy or require **t**reatment) can serve as a useful [Schelling point](https://en.wikipedia.org/wiki/Focal_point_%28game_theory%29) for coordination between the doctors.
Getting two reinforcement agents to (reliably) cooperate is hard enough to [get you a paper in Science](https://www.science.org/doi/full/10.1126/science.aau6249). When I naively implement two Q-Learning agents with the depicted payment, they are uncooperative: either exclusively diagnoses **H** or **T**, forsaking the dominant strategy of cooperation. This mirrors a famous problem in game theory called the "[Battle of the sexes](https://en.wikipedia.org/wiki/Battle_of_the_sexes_%28game_theory%29)".
The reward is decreasing, which is [not supposed to happen](https://link.springer.com/article/10.1007%2FBF00992698). But of course, the usual convergence proof does not allow for a changing environment/reward function.
This is already getting way too complicated. [I'm not trying to publish in Science, I'm just trying to solve a problem](https://universalprior.substack.com/p/on-scaling-academia)[[13]](#fn0c9wifo0k93). Since I expect that a [more sophisticated reinforcement learning approach](https://www.science.org/doi/full/10.1126/science.aau6249) will get the agents to cooperate, I'll make my life easier[[14]](#fnxdt9m0y0b2c) by just forcing the agents to cooperate[[15]](#fn2rcrtvvltck).
Once we force the doctors to cooperate, we find that the reward goes up, the fractions of "**t**reatment" and "**h**ealthy" diagnoses are nice and balanced and the correspondence with ground truth... *wait what*?
Ah, of course. Just because we picked nice, suggestive labels for the observation (**T** and **H**) and the diagnosis (**t** and **h**), the agent doesn't care about that at all. In half of the cases, the doctors will cooperate by always diagnosing the opposite of what they observe. They still get paid, but the performance drops dramatically below the chance level. I call these doctors "trolling doctors", even though there is [no malice required](https://www.goodreads.com/en/book/show/44154569-the-ai-does-not-hate-you) - [just negligence](https://en.wikipedia.org/wiki/Hanlon%27s_razor) on part of the programmer[[16]](#fnf9tfsc93x24).
Well, perhaps it is not so bad. We might be able to fix it; [somebody who always lies is basically as useful as someone who always tells the truth](https://en.wikipedia.org/wiki/Gladstone_Gander#:~:text=This%20is%20in%20contrast%20to%20his%20cousin%20Donald%20Duck%2C%20who%20is%20often%20characterized%20for%20having%20bad%20luck.). We just have to do the exact opposite of what they recommend. And as long as there is *some* real-world consequence of the diagnosis of the doctor, we might be able to identify below-chance performance, by comparison with an agent that predicts at chance level[[17]](#fnezsyykjl8y9).
But the situation is worse than that. As long as the action policies of the two agents match[[18]](#fn12zi2r0j7ilf),
πA=πB, they will get maximum reward. The doctors could play [tit-for-tat](https://en.wikipedia.org/wiki/Tit_for_tat), where they alternate between both diagnoses, "**h**ealthy" and "**t**reatment", independent of what the coin flip says. I imagine the following scenario:
> A seedy bar with perdition thick in the air. A woman in a trenchcoat sits in a dimly lit corner, smoking a cigar. A second woman in a trenchcoat, collar up, enters the bar, casting her eyes across tables and seats, spotting the first woman, and moving towards her with fortitude.
> Dr. Jones: Dr. Firenze, glad you could make it.
> Dr. Firenze: Of course.
> *Dr. Firenze nervously licks her lips.*
> Dr F: I heard Mr. A was not amused regarding the recent... discrepancies...
> Dr. J: Discrepancies! The nerve of you!
> Dr. F: The man is clearly in perfect health.
> Dr. J: Foreign Accent Syndrome is no laughing matter! He is speaking in tongues.
> Dr. F: The man is from *Ireland*.
> Dr. J: ...
> Dr. F: ...
> Dr. J: How about a truce?
> Dr. F: A truce!
> Dr. J: A truce. We both don't want to work. We both want to get paid. How about - You scratch my back and I'll scratch yours. One for me, one for you. You catch my drift…
>
>
Clearly, we'd want to forbid any communication between the doctors. However, it appears here we run into a [very tricky problem](https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message): Even when we manage to stop any message transmitted between the doctors directly, they will still be able to infer the diagnosis of the other doctor from whether they got paid or not[[19]](#fnf2lpj1xahv7). If the doctors are sufficiently intelligent (and willing to forsake payment for a while), they will be able to use this channel to transfer arbitrary messages in binary between each other[[20]](#fn5rqo0u4en8y). Or, if the doctors are sufficiently similar, they might even use [fancy decision theory](https://medium.com/how-to-build-an-asi/how-to-do-functional-decision-theory-b9035ca05812#:~:text=Psychological%20Twin%20Prisoner%E2%80%99s%20Dilemma,-Taken%20from%20Yudkowsky) to coordinate without having to communicate at all.
Just forcing the doctors to cooperate leaves too many doors open. We need something more... [adversarial](https://en.wikipedia.org/wiki/Generative_adversarial_network).
### Scenario three: Do the *other* obvious thing.
The other most common-sensical answer that I heard a lot can be summarized in two words: [lawyer up](https://www.youtube.com/watch?v=wqnHtGgVAUE). Instead of trusting in the goodwill of the doctor, or letting them come up with a cozy cooperative deal, make them work for their money. The idea is simple: get a detective to investigate what the doctor is doing. If it looks shady, don't pay the doctor.
However, the word "shady" hides a lot of complexity here. Always diagnosing "**t**reatment" is shady. Alternating between "**t**reatment" and "**h**ealthy" is shady. But there are a lot more complex binary strings that look random, but don't correspond to the actual "H" and "T" strings. How can we tell if the doctor is being honest?
Consider the following set-up:
This time, only one doctor makes the observation and diagnosis. However, this time a detective gets to see the previous diagnoses and can try to infer a pattern. When the detective successfully predicts a pattern, the detective gets paid and the doctor doesn’t get paid. When the doctor diagnoses differently from what the detective predicts, the doctor gets paid and the detective doesn’t.
My doctor is again a Q-Learning agent and my detective is an autoregressive model that tries to predict the next action of the doctor given the last ten actions[[21]](#fn1itm26ansml). In this set-up, the agent learns pretty nicely:
The highest reward that the doctor can now get is with random performance (Reward = 1/2) since every non-random pattern will be picked up by the detective. Since a Q-Learning agent doesn’t have a random number generator built-in, the only solution is to harness the “randomness” of the observation. We also find that half of the doctors report the opposite of the observation (“trolling doctors”), but that’s just something we’ll have to live with.
What do we do in the case of a more sophisticated doctor that might have a random number generator available? Running that generator would result in the same reward as forwarding the randomness from the observation. However, [true randomness is really hard to get by](https://www.americanscientist.org/article/the-quest-for-randomness) and computers usually use routines that produce numbers that [look random, but aren’t really](https://en.wikipedia.org/wiki/Pseudorandomness). If the detective was an exact clone of the doctor[[22]](#fn2qzf9fu9h8d), they would have access to the exact same random number generators as the doctor. Whatever routine the doctor uses to determine a [seed](https://en.wikipedia.org/wiki/Random_seed) for the generator, the detective would come up with the same routine. The only source of randomness that the doctor has access to, but the detective doesn’t, is (by construction) the observation.
Truth values and terminal diseases
----------------------------------
This leaves us with one[[23]](#fnqxtmjonhiel) central conundrum: When is it appropriate to model reality as a flip of a coin? I’m not sure. I guess at this point the doctor analogy has served its usefulness and it makes sense to talk about advanced AI instead. The central question then translates into: How can we incentivize an advanced AI (that might be smarter than us) to tell us truthfully about its inferences - even though those inferences completely fly over our heads and we have no way of verifying the truthfulness.
As I’ve mentioned in the introduction, this is a scenario in which [logical uncertainty](https://www.lesswrong.com/tag/logical-uncertainty) applies: We are uncertain about the implications of certain beliefs. An example of this might be asking the AI whether a certain strategy will have a net positive or negative impact on a certain measure we care about. Even if we have access to the same information as the AI, we might still be substantially more uncertain about the impact. This additional uncertainty stems from our lack of logical omniscience. We cannot reason through the implications of the available information completely. An AI might do so a lot more successfully, and thus be less uncertain about the impact.
The proposed solution, a doctor-detective tandem, shares certain features of the [logical induction](https://intelligence.org/files/LogicalInduction.pdf) paradigm from [Garrabrant et al](https://intelligence.org/files/LogicalInduction.pdf). Like Garrabrant’s traders that attempt to predict the market price of certain logical propositions, our detective attempts to predict the diagnosis of the doctor. Like the stable market fixed point, at which no trader can extract unlimited resources from the market, the fixed point of our doctor-detective tandem is achieved when the doctor’s diagnoses cannot be predicted by the detective. Perhaps, with some more wiggling, we can turn the tandem into a full logical inductor, along with [all the nice properties that follow](https://www.youtube.com/watch?v=gDqkCxYYDGk). I’m sure there are many things that are missing to make the parallels complete[[24]](#fnx8vjhgxf4y), but I already had too much fun thinking about this. So I’m [putting it out there](https://universalprior.substack.com/p/how-to-build-a-mind-neuroscience) to hear if anyone has more thoughts about this.
1. **[^](#fnrefz8vqw6phtwb)**This is *not* a subtweet/sub*post* (!?) for a [certain medical professional](https://universalprior.substack.com/p/drug-addicts-and-deceptively-aligned) that I have recently collaborated with. [↩︎](#fnref-kHcARL85KmPb5tfCK-1)
2. **[^](#fnref5mwyoconmu3)**If you come up with something clever, feel free to shoot me an email or leave a comment.
3. **[^](#fnrefmuh4y1q7vwk)**This is of course supposed to be funny, but there is the real problem of inferring the motivation of a supervisor or collaborator when they say "Let's work a bit more on this before graduating." Incentives here are often misaligned, where an experienced grad student is a comparatively cheap source of labor up until graduation.
4. **[^](#fnrefde6p9m1jvqg)**Although there is of course also something to be said about the [limits of thought experiments](https://www.jstor.org/stable/42970833).
5. **[^](#fnrefd1gaylr7z3c)**Although there is of course also something to be said about the [limits of thought experiments](https://www.jstor.org/stable/42970833).
6. **[^](#fnref42ulmog5ode)**Additionally, I reject the framing that things have to be novel to be interesting. Just because the thought is not new to everyone it might still be new to me and you (dear reader) and [it can still be satisfying to rediscover things](https://www.lesswrong.com/posts/KfMNFB3G7XNviHBPN/joy-in-discovery).
7. **[^](#fnrefak8cwwu7gb6)**When we define intelligence in terms of behavior. Which is reasonable, I think - how else is it going to manifest?
8. **[^](#fnrefj1rrz3880lp)**You both have access to Google.
9. **[^](#fnrefeusyanpw8gh)**This will become relevant later. \ Foreshadowing much \
10. **[^](#fnrefok6493sb2q)**I’m pretty sure biased coins work analogously.
11. **[^](#fnrefcvnba2amoos)**i.e. an agent trying to maximize reward.
12. **[^](#fnref8vc5g8weeyd)**I think philosophical arguments in the form of a fiction novel [tend to be terrible](https://en.wikipedia.org/wiki/Atlas_Shrugged). Mathematical arguments [augmented with fictional interludes](https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach), however…
13. **[^](#fnref0c9wifo0k93)**I'm using vanilla Q-Learning, because "[why make my life difficult](https://andyljones.com/posts/rl-debugging.html)?"
14. **[^](#fnrefxdt9m0y0b2c)**Although, if you are one of the many Science editors reading this Substack, hmu.
15. **[^](#fnref2rcrtvvltck)**I force them by normalizing the columns of the Q-matrix at every time step to 1. This forces the agent to pick **H** and **T** equally often.

16. **[^](#fnreff9tfsc93x24)**When the number of possible diagnoses increases, this gets worse as every permutation is possible. However, every permutation is guaranteed to give below chance performance and might be detectable.
17. **[^](#fnrefezsyykjl8y9)**Like f.e. the agent from scenario one.
18. **[^](#fnref12zi2r0j7ilf)**This automatically satisfies the cooperation constraint.
19. **[^](#fnreff2lpj1xahv7)**Since they get paid iff the other doctor gets paid also.
20. **[^](#fnref5rqo0u4en8y)**This communication might be hampered by lumping together the payoff to the end of the epoch or by adding noise to the payoff. The fundamental problem remains, however.
21. **[^](#fnref1itm26ansml)**There are some very easy models that are very good at doing this kind of prediction. Check out this game [here](https://people.ischool.berkeley.edu/~nick/aaronson-oracle/).
22. **[^](#fnref2qzf9fu9h8d)**Kind of like a [Holmes vs. Moriarty](https://www.arthurconandoyle.com/professormoriarty.html) situation.
23. **[^](#fnrefqxtmjonhiel)**At least one? Have kind of lost count.
24. **[^](#fnrefx8vjhgxf4y)**And I’m even more sure that I’ve made a couple of invalid inferences throughout the post that might invalidate certain portions.
|
dba7026b-cfcc-466e-b632-f93b096bdf5e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Formalizing the Informal (event invite)
Formalizing the Informal
One way to view MIRI's Agent Foundations research is that it saw the biggest problem in AI safety as "human preferences are informal, but we need to somehow get formal guarantees about them" -- and so, in response, it set out to make a formal-informal bridge.
Recently, I’ve been thinking about how we might formally represent the difference between formal and informal. My prompt is something like: if we assume that classical probability theory applies to “fully formal” propositions, how can we generalize it to handle “informal” stuff?
I’m going to lead a discussion on this tomorrow, Wednesday Sept. 11, at 11am EDT (8am Pacific, 4pm UK).
Discord Event link (might not work for most people):
https://discord.com/events/1237103274591649933/1282859362125352960
Zoom link (should work for everyone):
https://us06web.zoom.us/j/6274543940?pwd=TGZpY3NSTUVYNHZySUdCQUQ5ZmxQQT09
----------------------------------------
You can support my work on Patreon.
|
c64b14ab-7046-425a-b02a-6b7a633bbc39
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Could World War I have been prevented given the benefit of hindsight?
Consider the following scenario: you are sent back in time some number of years before the beginning of WWI, with the goal of preventing the war. This includes preventing similar wars that happen slightly earlier or slightly later - you are aiming for a peaceful, stable coexistence between the European powers. Unsophisticated strategies like warning The Archduke against going to Sarajevo may simply result in another similar casus belli happening.
To prevent silly strategies, minor details of the world you are sent to will be changed, so dates and names may differ very slightly, but the overall WWI path will unfold almost exactly as it did in our timeline.
Furthermore, you carry some evidence of extraordinary status that is credible to the leaders of WWI nations. They will grant you an audience, but they will not blindly trust you. You may lie to them or tell them the truth.
You don't have detailed designs for post-1914 technology, but you have an ordinary informed layman's understanding of actual history. So you can tell them that nuclear bombs will be invented, but not exactly how, and they may or may not believe you.
Please answer with your strategy for saving Europe from World War I, and why you think it would work (including in the face of likely reactions from the major players)!
|
a9ed1036-914d-4aa2-846a-5eff0e8cd8a6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Transcript: Yudkowsky on Bankless follow-up Q&A
Head over to Rob Bensinger's improved transcript, combined with the original podcast. (This one has been updated correspondingly).
---
This follow-up Q&A took place shortly after the podcast was released. It clears some questions about AI takeover pathways & alignment difficulties (like "why can't we just ask AIs to help solve the alignment?"); OpenAI/Silicon Valley & what should these companies be doing instead; Eliezer's take on doomerism; what would a surviving distant future look like.
Ryan Sean Adams: [... Y]ou gave up this quote, from I think someone who's an executive director at MIRI: "We've given up hope, but not the fight."
Can you reflect on that for a bit? So it's still possible to fight this, even if we've given up hope? And even if you've given up hope? Do you have any takes on this?
Eliezer Yudkowsky: I mean, what else is there to do? You don't have good ideas. So you take your mediocre ideas, and your not-so-great ideas, and you pursue those until the world ends. Like, what's supposed to be better than that?
Ryan: We had some really interesting conversation flow out of this episode, Eliezer, as you can imagine. And David and I want to relay some questions that the community had for you, and thank you for being gracious enough to help with those questions in today's Twitter Spaces.
I'll read something from Luke ethwalker. "Eliezer has one pretty flawed point in his reasoning. He assumes that AI would have no need or use for humans because we have atoms that could be used for better things. But how could an AI use these atoms without an agent operating on its behalf in the physical world? Even in his doomsday scenario, the AI relied on humans to create the global, perfect killing virus. That's a pretty huge hole in his argument, in my opinion."
What's your take on this? That maybe AIs will dominate the digital landscape but because humans have a physical manifestation, we can still kind of beat the superintelligent AI in our physical world?
El
|
a10c5d3f-0c67-436f-9108-59d329bee4e9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
SlateStarCodex deleted because NYT wants to dox Scott
NYT Is Threatening My Safety By Revealing My Real Name, So I Am Deleting The Blog
PS: One suggestion I have is to allow anonymous posts on Lesswrong that show the author’s anonymized karma. This is far from a good or complete solution, but I imagine it would at least come in handy in situations like this.
|
f2452dce-3af1-4360-b018-46ef8c901339
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Follow-up Posting on Cyborg Psychologist
> As a follow-up to the Posting on the little Cyborg Psycho-test application written by Martin and Philipp Burckhardt, the follow-up text on how it was made is now available on the Ex Nihilo blog in both German and Englisch.
>
> You can contact Philipp directly for any questions about how it was constructed using both clinical psychological knowledge combined with readily available AI applications or the like at: https://www.philipp-burckhardt.com/
|
78328461-66c7-47a5-88a0-3a721c175daf
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Are pre-specified utility functions about the real world possible in principle?
*Preface: I think my question is a rather basic one, but I haven't been able to find a good answer to it yet. I did find one [post](https://www.lesswrong.com/posts/b8ijLnSE9aqXDLdGW/could-utility-functions-be-for-narrow-ai-only-and-downright) that touches on similar areas, which might be good background reading (the comments are great too).*
Let's start with the standard example of building a super intelligence and telling it to bring you coffee. We give it a utility function which is 1 if you have coffee, 0 otherwise. This goes terribly wrong, of course, because this utility function is not what you actually wanted. As we all know, this is the basis on which much of the concern about AI alignment rests. However, it seems to me that an important detail here has been glossed over by most discussions of AI alignment that I've read.
My question is: how do we, even in principle, get to the point of having an AI that has this (or any) pre-specified utility function in the first place, and what does that tell us about AI alignment? Our desired utility function must be formalizable if we want to be able to say it has been "specified" in any meaningful sense, but in the real world, whether I have coffee or not is not obviously formalizable. In other words, if I build a super intelligence, what is the actual concrete work that is involved in giving it a utility function that I picked ahead of time, even an extremely foolish one?
I can think of a few possibilities:
1) Let's assume the AI understands basic physics: You input a formal definition about "having coffee" based on the location and properties of atoms.
2) You tell the AI to try things (maybe asking you first) and after each experiment it performs, you tell it whether you have coffee.
3) You have previously taught the AI to understand human language, and you just say "now bring me coffee", or, if you wish, "maximize the utility function that is 1 when I have coffee, and 0 when I don't".
4) You have previously taught the AI to understand and manipulate formal systems, and you input a formalized version of "maximize the utility function that is 1 when I have coffee, and 0 when I don't".
5) This is silly! A human is clearly capable, in principle, of slavishly maximizing a simple utility function. This is an existence proof that such a system can exist in nature, even if we don't yet know how to build it.
I think there are basic conceptual problems with each of these proposals:
1) **The physical definition:** Yes, you could do something *incredibly idiotic* like saying that the atoms that make up your body should be close to a mixture of atoms that match the composition of coffee. But the concern is not that people will be unbelievable stupid, it's that they will do something that seems smart but has a loophole or unintended consequence they didn't foresee. So, to take this approach, we need to come of with a definition of "having coffee" that is a formal property of an arrangement of atoms, but isn't obviously stupid to anyone smart enough to attempt this work in the first place. I don't see how you can even begin to approach this. As an analogy, it would be as if a contemporary AI researcher attempted to train an image recognition system to recognize cats by using a formal definition of "cat" involving properties of pixels. Not only would no one attempt to do this, if you knew how to do it, you wouldn't need the AI.
2) **Training by human feedback:** This has nothing to do with pre-specified utility functions and so is beyond the scope of this question. (The standard concerns about the ways that this sort of training might go wrong still apply.)
3) **Specification through natural language:** This is question begging. We're assuming that the AI has a way to turn a natural language statement into a formalized utility function, and further assuming that it has been motivated to do so. So now you're left with the task of giving the AI the utility function "1 if I turn natural language statements from humans into formal utility functions and maximize them, 0 otherwise". And we're back where we started, except with what seems to me like a far harder utility function to formalize.
4) **Specification through formal systems:** Even worse question begging. In addition to the previous objection, this also assumes that we can formalize the predicate "I have coffee", which was the motivation for this post.
**5) Human existence proof:** A human that decides to act like an amoral maximizing agent must either take this question seriously and attempt to formalize the utility function, or else fall back on human intuitions about what it means to "have coffee". In the former case, we have more question begging. In the latter case, we have fallen short of an existence proof of the possibility of an amoral maximizing agent targeting a pre-specified formal utility function.
Ok, so why my obsession with *formal, pre-specified* utility functions? A lot of work in AI alignment that I have looked at seems focused on proving formal results about utility functions, e.g. the failure of naive attempts to give AIs off switches that they don't immediately disable. Obviously as a matter of basic science, this is worthwhile research. But if it isn't possible to give an AI a pre-specified formal utility function about the real world in the first place, then none of these formal results matter in the real world[1]. And if that's the case, then the task of building friendly AI has nothing to do with formal properties of utility functions, and everything to do with how we train AI and what "values" become embedded in the AI as a result of the training.
*(Caveat: There is one case where it is easy to input a formal utility function, which is the case where you are building an AI purely for the purpose of manipulating a formal system in the first place. For instance, it does seem conceivable that a super intelligence that is told to* "be as good at go/chess as possible" *or* "find a proof of the goldbach conjecture" *might decide to start turning all available matter into a computer. I think there might be similar objections to this scenario, but I haven't yet thought them through.)*
Thank you for reading, and I look forward to reading your replies.
[1] I am aware that for any sufficiently coherent-acting agent, a utility function describing its preferences exists. This still leaves open the question of whether we can construct an agent that has a known and fully specified UF that we picked ahead of time. If we can't do this, there's no point in trying to figure out how to design a UF that would result in a friendly AI.
|
edb3d2e8-7a76-4d4c-a49e-8f95fc49f9a0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Multiplicity of "enlightenment" states and contemplative practices
It seems that there are multiple different mental states that people have historically called "enlightenment", as well as many different types of contemplative practices with different underlying cognitive mechanisms. I link to and quote from a couple of papers showing this. Given the apparent multiplicity of "enlightenment" states and contemplative practices, I'd like to request that future discussions on these topics include more detailed references or descriptions as to which states and practices are being talking about.
Can enlightenment be traced to specific neural correlates, cognition, or behavior?
> The term ’’enlightenment” is an extraordinarily imprecise construct. Using the term enlightenment or even the term more native to Buddhist traditions, “awakening” (bodhi), as if it referred to a single outcome either privileges one conception over others or else assumes that there is some commonality among the traditional goals of diverse contemplative traditions. There are deep disagreements over the nature of the goal between and even within various Buddhist schools. Scientific investigations cannot assume that there is any commonality among the transformative changes referred to as “kensho,” “stream entry,” “realizing the nature of mind,” and so on, that various Buddhist traditions take as various stages of awakening. Empirical investigations of these constructs can only proceed with reference to the specific psychological and behavioral outcomes described in the native discourse of a specific tradition (see Lutz et al., 2007). [...]
> Given the differences between various competing conceptions of awakening, one scientific approach to tracing enlightenment would be to use the tools of social psychology to investigate which states and traits are valued in a particular community. For instance, recent work in moral psychology suggests how value judgments of people and practices as either enlightened or unenlightened could be traced to affective reactions of ad
|
99e540a5-0265-4fde-9901-36092bf76a7d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup in San Diego, CA, USA
We're holding what I believe is the first San Diego meetup on Sunday, July 31st starting at 1pm at the K&B Wine Cellars near San Diego State University:
6380 Del Cerro Blvd.
San Diego, CA 92120
The phone number for the place is 619-286-0884. This is one of a number of places along a strip that's attached to a grocery store of sorts. It's something like a coffee house only with beer, wine, & liquor instead of coffee. (Underage attendees should be fine; you just won't be able to get alcohol. There's food and some non-alcoholic drinks if you like.) We're meeting in a semi-hidden room in the far back. When you walk in, go as straight as you can while staying close to the left wall.
This will be an introductory meeting so that those in the San Diego area can meet one another. We'll talk about what we want to get out of these meetups and hammer out some specific plans for how to accomplish that. From some initial conversations, it sounds like we'll have monthly meetups, though that stands a fair chance of changing depending on what we discuss here.
Feel free to bring friends, significant others, or anyone else who's interested in rationality. Also, give some thought to what you'd like out of these meetups. It doesn't have to be profound; camaraderie or "I don't know" are fine answers. But if you give it a bit of thought ahead of time, you might find it easier to envision and articulate more precisely what it is that you'd like to see these meetups become.
I should also mention that this location has a projector setup, so if there's something you'd like to share PowerPoint style, feel free to bring that. I haven't gotten details from the restaurant as yet about how to use the projector setup (e.g. is it transparencies or a laptop hookup?), but I'll edit in that clarification once I get it.
Let me know if you have any questions. Also, if you could either reply here or give me a quick PM to let me know you're coming, that would be helpful. That way I can
|
7f261c6f-1c2e-4a3d-9dc7-cd1c0c27d266
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Aumann Agreement by Combat
The first paper in the SIGBOVIK 2019 proceedings, “Aumann Agreement by Combat” by Travis Hance, seems relevant to this site. The paper is on page 4 of the PDF (page 8 if you include front matter).
The abstract:
> The celebrated Aumann’s Agreement Theorem shows that two rational agents with the same priors on an event who make different observations will always converge on the same posteriors after some civilized conversation over tea. Furthermore, they will come to agree even if they do nothing other than simply state their posteriors over and over again.
> However, this protocol is widely criticized for being too boring. We therefore introduce a more exciting alternative, which we name Aumann Agreement by Combat.
The SIGBOVIK conference is held every year on April 1.
|
35acac8c-d679-4940-8e27-c09d2ea1f1b1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Anthropic's Core Views on AI Safety
> We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. And we also believe this level of impact could start to arrive soon – perhaps in the coming decade.
>
> This view may sound implausible or grandiose, and there are good reasons to be skeptical of it. For one thing, almost everyone who has said “the thing we’re working on might be one of the biggest developments in history” has been wrong, often laughably so. Nevertheless, we believe there is enough evidence to seriously prepare for a world where rapid AI progress leads to transformative AI systems.
>
> At Anthropic our motto has been “show, don’t tell”, and we’ve focused on releasing a steady stream of safety-oriented research that we believe has broad value for the AI community. We’re writing this now because as more people have become aware of AI progress, it feels timely to express our own views on this topic and to explain our strategy and goals. In short, we believe that AI safety research is urgently important and should be supported by a wide range of public and private actors.
>
> So in this post we will summarize why we believe all this: why we anticipate very rapid AI progress and very large impacts from AI, and how that led us to be concerned about AI safety. We’ll then briefly summarize our own approach to AI safety research and some of the reasoning behind it. We hope by writing this we can contribute to broader discussions about AI safety and AI progress.
> As a high level summary of the main points in this post:
> * AI will have a very large impact, possibly in the coming decade
> Rapid and continuing AI progress is a predictable consequence of the exponential increase in computation used to train AI systems, because research on “scaling laws” demonstrates that more computation leads to general improvements in capabilities. Simple extrapolations suggest AI systems will be
|
03f18e9c-423a-449d-af04-025a974b596c
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Architects of Our Own Demise: We Should Stop Developing AI
Some brief thoughts at a difficult time in the AI risk debate.
Imagine you go back in time to the year 1999 and tell people that in 24 years time, humans will be on the verge of building weakly superhuman AI systems. I remember watching the anime short series [The Animatrix](https://en.wikipedia.org/wiki/The_Animatrix) at roughly this time, in particular a story called [The Second Renaissance](https://www.youtube.com/watch?v=sU8RunvBRZ8) [I part 2](https://www.youtube.com/watch?v=61FPP1MElvE) [II part 1](https://www.youtube.com/watch?v=WlRMLZRBq6U) [II part 2](https://www.youtube.com/watch?v=00TD4bXMoYw) . For those who haven't seen it, it is a self-contained origin tale for the events in the seminal 1999 movie The Matrix, telling the story of how humans lost control of the planet.
Humans develop AI to perform economic functions, eventually there is an "AI rights" movement and a separate AI nation is founded. It gets into an economic war with humanity, which turns hot. Humans strike first with nuclear weapons, but the AI nation builds dedicated bio- and robo-weapons and wipes out most of humanity, apart from those who are bred in pods like farm animals and plugged into a simulation for eternity without their consent.
Surely we wouldn't be so stupid as to actually let something like that happen? It seems unrealistic.
And yet:
* AI software and hardware companies are rushing ahead with AI
* The technology for technical AI safety (things like interpretability, RLHF, governance structures) is still very much in its infancy. The field is something like 5 years old.
* People are already talking about an [AI rights movement](https://thehill.com/opinion/cybersecurity/3914567-we-need-an-ai-rights-movement/) in major national papers
* There isn't a plan for what to do when the value of human labor goes to zero
* There isn't a plan for how to deescalate AI-enhanced warfare, and militaries are enthusiastically embracing killer robots. Also, there are two regional wars happening and a nascent superpower conflict is brewing.
* The game theory of different opposing human groups all rushing towards superintelligence is horrible and nobody has even proposed a solution. The US government has foolishly stoked this particular risk by cutting off AI chip exports to China.
People on this website are talking about [responsible scaling policies](https://www.lesswrong.com/posts/dxgEaDrEBkkE96CXr/thoughts-on-responsible-scaling-policies-and-regulation), though I feel that "irresponsible scaling policies" is a more fitting name.
Obviously I have been in this debate for a long time, having started as a commenter on Overcoming Bias and Accelerating Future blogs in the late 2000s. What is happening now is somewhere near the low end of my expectations for how competently and safely humans would handle the coming transition to machine superintelligence. I think that is because I was younger in those days and had a much rosier view of how our elites function. I thought they were wise and had a plan for everything, but mostly they just muddle along; the haphazard response to covid really drove this home for me.
We should stop developing AI, we should collect and destroy the hardware and we should destroy the chip fab supply chain that allows humans to experiment with AI at the exaflop scale. Since that supply chain is only in two major countries (US and China), this isn't necessarily impossible to coordinate - as far as I am aware no other country is capable (and those that are count as US satellite states). The criterion for restarting exaflop AI research should be a plan for "landing" the transition to superhuman AI that has had more attention put into it than any military plan in the history of the human race. It should be thoroughly war-gamed.
AI risk is not just technical and local, it is sociopolitical and global. It's not just about ensuring that an LLM is telling the truth. It's about what effect AI will have on the world assuming that it is truthful. "Foom" or "lab escape" type disasters are not the only bad thing that can happen - we simply don't know how the world will look if there are a trillion or a quadrillion superhumanly smart AIs demanding rights, spreading propaganda & a competitive economic and political landscape where humans are no longer the top dog.
Let me reiterate: *We should stop developing AI*. AI is not a normal economic item. It's not like lithium batteries or wind turbines or jets. AI is capable of ending the human race, in fact I suspect that it does that by default.
In his post on the topic, user @paulfchristiano states that a good responsible scaling policy [could cut the risks from AI by a factor of 10](https://www.lesswrong.com/posts/dxgEaDrEBkkE96CXr/thoughts-on-responsible-scaling-policies-and-regulation):
>
> I believe that a very good RSP (of the kind I've been advocating for) could cut risk dramatically if implemented effectively, perhaps a 10x reduction.
>
>
>
I believe that this is not correct. It may cut certain technical risks like deception, but a world with non-deceptive, controllable smarter-than-human intelligences that also has the same level of conflict and chaos that our world has may well already be a world that is human-free by default. These intelligences would be *an invasive species* that would outcompete humans in economic, military and political conflicts.
In order for humans to survive the AI transition I think we need to succeed on the technical problems of alignment (which are perhaps not as bad as Less Wrong culture made them out to be), and we also need to *"land the plane" of superintelligent AI on a stable equilibrium where humans are still the primary beneficiaries of civilization*, rather than a pest species to be exterminated or squatters to be evicted.
We should also consider how the efforts of AI can be directed towards solving human aging; if aging is solved then everyone's time preference will go down a lot and we can take our time planning a path to a stable and safe human-primacy post-singularity world.
I hesitated to write this article; most of what I am saying here has already been argued by others. And yet... here we are. Comments and criticism are welcome, I may look to publish this elsewhere after addressing common objections.
|
516aab80-870d-4920-ad54-e2f952827a10
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Technical Universities in Europe: a Recommendation Thread
I wish to transfer to a university in Europe, to complete my engineering formation. I thought it might be the opportunity to initiate a discussion on the merits of European technical schools, given how many people here have a STEM background, and have experienced the first-hand.
Which ones do you think are best at teaching? Which provide the best starting point, professionally? Which have the most productive, idealistic mood among the studentship? If you've been to several of schools, how do they compare to each other?
The floor is yours.
|
f6b5a969-92a5-4626-a13a-62904c95250f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Giulio Tononi's "Integrated Information Theory" of Consciousness
My daily browsing has come across an idea I haven't seen before, though it has been mentioned on LW occasionally. Created by Giulio Tononi, the basic premise seems to be that consciousness can be quantified, by measuring how much information is contained within the overall patterns of a system in excess of the information contained within its subsystems - that the whole is greater than the sum of its parts.
Some further browsing has let me find a somewhat uninformative Wikipedia article, a printed book using a fictional narrative, a scientific paper, and a 'Provisional Manifesto'.
The general idea doesn't seem to fall prey to most of the more obvious flaws that theories about consciousness tend to end up suffering. But I'm far from an expert in the field, or potentially related fields such as considering Φ's possible use as a basis for developing AGI/FAI. So: what does the wisdom of the crowd of LW have to say about this concept?
|
2e13d8df-d72f-4822-9a02-7356a1840f83
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Alignment Newsletter #19
Highlights
OpenAI Five Benchmark: Results (OpenAI's Dota Team): The OpenAI Five benchmark happened last Sunday, where OpenAI Five won two matches against the human team, and lost the last one when their draft was adversarially selected. They are now planning to play at The International in a couple of weeks (dates to be finalized). That will be a harder challenge, since they will be playing against teams that play and train professionally, and so will be better at communication and coordination than the human team here.
Blitz (one of the human players) said: "The only noticeable difference in the mechanical skill aspect was the hex from the Lion, but even that was sorta irrelevant to the overall game flow. Got outdrafted and outmaneuvered pretty heavily, and from a strategy perspective it was just better then us. Even with the limitations in place it still 'felt' like a dota game, against a very good team. It made all the right plays I'd expect most top tier teams to make."
On the technical side, OpenAI implemented a brute-force draft system. With a pool of 18 heroes, you get some combinatorial explosion, but there are still only ~11 million possible matchups. You can then do a simple tree search over which hero to draft, where at the leaves (when you have a full draft) you choose which leaf you want based on the win probability (which OpenAI Five already outputs). Seeing this in action, it seems to me like it's a vanilla minimax algorithm, probably with alpha-beta pruning so that they don't have to evaluate all ~159 billion nodes in the tree. (Or they could have done the full search once, hardcoded the action it comes up with for the first decision, and run the full search for every subsequent action, which would have under 10 billion nodes in the tree.)
Besides the win probabilities, there are other ways to get insight into what the model is "thinking" -- for example, by asking the model to predict where the hero will be in 6 seconds, or by predicting how many
|
6c7779ec-68cf-498e-aca1-058c2825910d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Don't over-update on FrontierMath results
(As an employee of the European AI Office, it's important for me to emphasize this point: The views and opinions of the author expressed herein are personal and do not necessarily reflect those of the European Commission or other EU institutions.)
When OpenAI first announced that o3 achieved 25% on FrontierMath, I was really freaked out. Next day, I asked Elliot Glazer, EpochAI's lead mathematician and the main developer of FrontierMath, what he thought. He said he was also shocked, and expected o3 to "crush the IMO" and get an easy gold, based on the fact that it got 25% on FrontierMath.
In retrospect, it really looks like we over-updated. While the public can't easily try o3 yet, we have access to o3-mini (high) now, which achieves 20% on FrontierMath given 8 tries, and gets 32% using a Python tool. This seems pretty close to o3's result, as we don't know how much extra affordance o3 had while solving the problems, but based on OpenAI's communication, plausibly it's similar to what o3-mini (high) had when it was using a Python tool.
In spite of its great scores on FrontierMath, o3-mini (high) is nowhere close to "crushing the IMO". To the best of my knowledge, it can't solve a single IMO problem from recent years, and in my experiments it's doing somewhat worse than I was in 9th grade on the high school competitions I participated back then.[1][2][3] Other mathematicians and people with competitive math background whom I asked have similar experience.
That's still impressive from an LLM, and the pace of progress is admittedly very fast. [4] Nonetheless, it's not what we originally expected when 25% on FrontierMath was announced. What causes the discrepancy?
Part of the story might be that OpenAI elicits the model's capabilities better than I do. It looks like they give it more inference time, they give it tools, and in some experiments they give the AI more than one tries. In contrast, I only experimented in the normal o3-mini (high) chat interface, only gave
|
dd5ce3f1-8884-423c-b251-50b3f32e1d39
|
trentmkelly/LessWrong-43k
|
LessWrong
|
6-paragraph AI risk intro for MAISI
The Michigan AI Safety Initiative (MAISI) is a new AI safety student group at the University of Michigan. The website's "About" page includes a short intro to AI risk. I'm sharing it here for people who are interested in short pitches for AI x-risk. Feel free to comment with feedback / suggestions / criticisms.
Will AI really cause a catastrophe?
Hopefully not! AI has tremendous potential for making the world a better place, especially as the technology continues to develop. We’re already seeing some beneficial applications of AI to healthcare, accessibility, language translation, automotive safety, and art creation, to name just a few. However, advanced AI also poses some serious risks.
At the very least, malicious actors could use AI to cause harm, e.g. building dangerous weapons, spreading fake news, empowering oppressive regimes, and more.
More speculatively, advanced AI systems could potentially seek power or control over humans. It’s possible that future AI systems will be qualitatively different from those we see today. They may be able to form sophisticated plans to achieve their goals, and also understand the world well enough to strategically evaluate many relevant obstacles and opportunities. Furthermore, they may attempt to acquire resources or resist shutdown attempts, since these are useful strategies for some goals their designers might specify. To see why these failures might be challenging to prevent, see this research on specification gaming and goal misgeneralization from DeepMind.
It’s worth reflecting on the possibility that an AI system of this kind could outmaneuver humanity’s best efforts to stop it. Meta’s Cicero model demonstrated that AI systems can successfully negotiate with humans when it reached human-level performance in Diplomacy, a strategic board game, so an advanced AI system could manipulate humans to assist it or trust it. In addition, AI systems are swiftly becoming proficient at writing computer code with models like Code
|
633929df-8dcc-428c-bf8e-7e27fc61569c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Purpose and Pragmatism
Followup to: Making Beliefs Pay Rent, Lost Purposes
Thus runs the ancient parable:
> If a tree falls in a forest and no one hears it, does it make a sound?
> One says, "Yes it does, for it makes vibrations in the air."
> Another says, "No it does not, for there is no auditory processing in any brain."
So begins a long, acrimonious battle...
The conventional resolution is that the two are fighting over the definition of a word, and such labels do not have intrinsic definitions, only agreed-upon definitions.
Yet if you need to know about the forest for any pragmatic reason - if there is anything you plan on doing with the knowledge - then the answer is no longer a matter of mutual agreement. If, for example, you need to know whether landmines will be set off by the tree falling, then you cannot make the land mines explode or unexplode by any possible amount of agreement about the meaning of the word "sound". You can get the whole world to agree, one way or the other, and it still won't make a difference.
You find yourself in an unheard-falling-tree dilemma, only when you become curious about a question with no pragmatic use, and no predictive consequences. Which suggests that you may be playing loose with your purposes.
So does this mean that truth reduces to usefulness? But this, itself, would be a purpose-loss, a subgoal stomp, a mistaking of the indicator for the indicated. Usefulness for prediction, and demonstrated powers of manipulation, is one of the best indicators of truth. This does not mean that usefulness is truth. You might as well say that the act of driving to the supermarket is eating chocolate.
There is, nonetheless, a deep similarity between the pragmatic and the epistemic arts of rationality, in the matter of keeping your eye on the ball.
In pragmatic rationality, keeping your eye on the ball means holding to your purpose: Being aware of how each act leads to its consequence, and not losing sight of utilities in leaky generalizatio
|
86b8d310-2412-4126-97c7-0ec20a67f849
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Ontology identification problem: Technical tutorial
The problem of [ontology identification](https://arbital.com/p/5c) is the problem of loading a goal into an [advanced agent](https://arbital.com/p/2c) when that agent's representation of the world is likely to change in ways [unforeseen in the development phase](https://arbital.com/p/5d). This tutorial focuses primarily on explaining what the problem is and why it is a [foreseeable difficulty](https://arbital.com/p/6r); for the corresponding research problems, see [the main page on Ontology Identification](https://arbital.com/p/5c).
This is a technical tutorial, meaning that it assumes some familiarity with [value alignment theory](https://arbital.com/p/2v), the [value identification problem](https://arbital.com/p/6c), and [safety thinking for advanced agents](https://arbital.com/p/2l).
To isolate ontology identification from other parts of the value identification problem, we consider a simplified but still very difficult problem: to state an unbounded program implementing a [diamond maximizer](https://arbital.com/p/5g) that will turn as much of the physical universe into diamond as possible. The goal of "making diamonds" was chosen to have a crisp-seeming definition for our universe: namely, the amount of diamond is the number of carbon atoms covalently bound to four other carbon atoms. Since it seems that in this case our [intended goal](https://arbital.com/p/6h) should be crisply definable relative to our universe's physics, we can avert many other issues of trying to identify [complex values](https://arbital.com/p/5l) to the agent. Ontology identification is a difficulty that still remains even in this case - the agent's representation of 'carbon atoms' may still change over time.
## Introduction: Two sources of representational unpredictability
Suppose we wanted to write a hand-coded, [object-level](https://arbital.com/p/5t) utility function that evaluated the amount of diamond material present in the AI's model of the world. We might foresee the following two difficulties:
1. Where exactly do I find 'carbon atoms' inside the AI's model of the world? As the programmer, all I see are these mysterious ones and zeroes, and the only parts that directly correspond to events I understand is the represention of the pixels in the AI's webcam... maybe I can figure out where the 'carbon' concept is by showing the AI graphite, buckytubes, and a diamond on its webcam and seeing what parts get activated... whoops, looks like the AI just revised its internal representation to be more computationally efficient, now I once again have no idea what 'carbon' looks like in there. How can I make my hand-coded utility function re-bind itself to 'carbon' each time the AI revises its model's representation of the world?
2. What exactly is 'diamond'? If you say it's a nucleus with six protons, what's a proton? If you define a proton as being made of quarks, what if there are unknown other particles underlying quarks? What if the Standard Model of physics is incomplete or wrong - can we state exactly and formally what constitutes a carbon atom when we aren't certain what the underlying quarks are made of?
Difficulty 2 probably seems more exotic than the first, but Difficulty 2 is easier to explain in a formal sense and turns out to be a simpler way to illustrate many of the key issues that also appear in Difficulty 1. We can see Difficulty 2 as the problem of binding an [intended goal](https://arbital.com/p/6h) to an unknown territory, and Difficulty 1 as the problem of binding an intended goal to an unknown map. So the first step of the tutorial will be to walk through how Difficulty 2 (what exactly is a diamond?) might result in weird behavior in an [unbounded agent](https://arbital.com/p/107) intended to be a diamond maximizer.
## Try 1: Hacking AIXI to maximize diamonds?
The classic unbounded agent - an agent using far more computing power than the size of its environment - is [AIXI](https://arbital.com/p/11v). Roughly speaking, AIXI considers all computable hypotheses for how its environment might work - all possible Turing machines that would turn AIXI's outputs into AIXI's future inputs. (The finite variant AIXI-tl has a hypothesis space that includes all Turing machines that can be specified using fewer than $l$ bits and run in less than time $t$.)
From the perspective of AIXI, any Turing machine that takes one input tape and produces two output tapes is a "hypothesis about the environment", where the input to the Turing machine encodes AIXI's hypothetical action, and the outputs are interpreted as a prediction about AIXI's sensory data and AIXI's reward signal. (In Marcus Hutter's formalism, the agent's reward is a separate sensory input to the agent, so hypotheses about the environment also make predictions about sensed rewards). AIXI then behaves as a [Bayesian predictor](https://arbital.com/p/) that uses [algorithmic complexity](https://arbital.com/p/5v) to give higher [prior probabilities](https://arbital.com/p/) to simpler hypotheses (that is, Turing machines with fewer states and smaller state transition diagrams), and updates its mix of hypotheses based on sensory evidence (which can confirm or disconfirm the predictions of particular Turing machines).
As a decision agent, AIXI always outputs the motor action that leads to the highest predicted reward, assuming that the environment is described by the updated probability mixture of all Turing machines that could represent the environment (and assuming that future iterations of AIXI update and choose similarly).
The ontology identification problem shows up sharply when we imagine trying to modify AIXI to "maximize expectations of diamonds in the outside environment" rather than "maximize expectations of sensory reward signals". As a [Cartesian agent](https://arbital.com/p/), AIXI has sharply defined sensory inputs and motor outputs, so we can have a [probability mixture](https://arbital.com/p/) over all Turing machines that relate motor outputs to sense inputs (as crisply represented in the input and output tapes). But even if some otherwise arbitrary Turing machine happens to predict sensory experiences extremely well, how do we look at the state and working tape of that Turing machine to evaluate 'the amount of diamond' or 'the estimated number of carbon atoms bound to four other carbon atoms'? The highest-weighted Turing machines that have best predicted the sensory data so far, presumably contain *some* sort of representation of the environment, but we have no idea how to get 'the number of diamonds' out of it.
(Example: Maybe one Turing machine that is producing good sequence predictions inside AIXI, actually does so by simulating a large universe, identifying a superintelligent civilization that evolves inside that universe, and motivating that civilization to try to intelligently predict future future bits from past bits (as provided by some intervention). To write a formal utility function that could extract the 'amount of real diamond in the environment' from arbitrary predictors in the above case , we'd need the function to read the Turing machine, decode that universe, find the superintelligence, decode the superintelligence's thought processes, find the concept (if any) resembling 'diamond', and hope that the superintelligence had precalculated how much diamond was around in the outer universe being manipulated by AIXI.)
This is, in general, the reason why the AIXI family of architectures can only contain agents defined to maximize direct functions of their sensory input, and not agents that behave so as to optimize facts about their external environment. (We can't make AIXI maximize diamonds by making it want *pictures* of diamonds because then it will just, e.g., [build an environmental subagent that seizes control of AIXI's webcam and shows it pictures of diamonds](https://arbital.com/p/). If you ask AIXI to show itself sensory pictures of diamonds, you can get it to show its webcam lots of pictures of diamonds, but this is not the same thing as building an environmental diamond maximizer.)
## Try 2: Unbounded agent using classical atomic hypotheses?
Given the origins of the above difficulty, we next imagine constraining the agent's hypothesis space to something other than "literally all computable functions from motor outputs to sense inputs", so that we can figure out how to find diamonds or carbon inside the agent's representation of the world.
As an [unrealistic example](https://arbital.com/p/): Suppose someone was trying to define 'diamonds' to the AI's utility function. Suppose they knew about atomic physics but not nuclear physics. Suppose they build an AI which, during its development phase, learns about atomic physics from the programmers, and thus builds a world-model that is based on atomic physics.
Again for purposes of [unrealistic examples](https://arbital.com/p/), suppose that the AI's world-model is encoded in such fashion that when the AI imagines a molecular structure - represents a mental image of some molecules - then carbon atoms are represented as a particular kind of basic element of the representation. Again, as an [unrealistic example](https://arbital.com/p/), imagine that there are [little LISP tokens](https://arbital.com/p/) representing environmental objects, and that the environmental-object-type of carbon-objects is encoded by the integer 6. Imagine also that each atom, inside this representation, is followed by a list of the other atoms to which it's covalently bound. Then when the AI is imagining a carbon atom participating in a diamond, inside the representation we would see an object of type 6, followed by a list containing exactly four other 6-objects.
Can we fix this representation for all hypotheses, and then write a utility function for the AI that counts the number of type-6 objects that are bound to exactly four other type-6 objects? And if we did so, would the result actually be a diamond maximizer?
### AIXI-atomic
As a first approach to implementing this idea - an agent whose hypothesis space is constrained to models that directly represent all the carbon atoms - imagine a variant of AIXI-tl that, rather than considering all tl-bounded Turing machines, considers all simulated atomic universes containing up to 10^100 particles spread out over up to 10^50 light-years. In other words, the agent's hypotheses are universe-sized simulations of classical, pre-nuclear models of physics; and these simulations are constrained to a common representation, so a fixed utility function can look at the representation and count carbon atoms bound to four other carbon atoms. Call this agent AIXI-atomic.
(Note that AIXI-atomic, as an [unbounded agent](https://arbital.com/p/107), may use far more computing power than is embodied in its environment. For purposes of the thought experiment, assume that the universe contains exactly one hypercomputer that runs AIXI-atomic.)
A first difficulty is that universes composed only of classical atoms are not good explanations of our own universe, even in terms of surface phenomena; e.g. the [ultraviolet catastrophe](http://en.wikipedia.org/wiki/Ultraviolet_catastrophe). So let it be supposed that we have simulation rules for classical physics that replicate at least whatever phenomena the programmers have observed at [development time](https://arbital.com/p/), even if the rules have some seemingly ad-hoc elements (like there being no ultraviolent catastrophes). We will *not* however suppose that the programmers have discovered all experimental phenomena we now see as pointing to nuclear or quantum physics.
A second difficulty is that a simulated universe of classical atoms does not identify where in the universe the AIXI-atomic agent resides, or say how to match the types of AIXI-atomic's sense inputs with the underlying behaviors of atoms. We can elide this difficulty by imagining that AIXI-atomic simulates classical universes containing a single hypercomputer, and that AIXI-atomic knows a simple function from each simulated universe onto its own sensory data (e.g., it knows to look at the simulated universe, and translate simulated photons impinging on its webcam onto predicted webcam data in the standard format). This elides most of the problem of [naturalized induction](https://arbital.com/p/).
So the AIXI-atomic agent that is hoped to maximize diamond:
- Considers only hypotheses that directly represent universes as huge systems of classical atoms, so that the function 'count atoms bound to four other carbon atoms' can be directly run over any possible future the agent models.
- Assigns probabilistic priors over these possible atomic representations of the universe, favoring representations that are in some sense simpler.
- Somehow [maps each atomic-level representation onto the agent's predicted sensory experiences](https://arbital.com/p/).
- [Bayes-updates its priors](https://arbital.com/p/) based on actual sensory experiences, the same as classical AIXI.
- Can evaluate the 'expected diamondness on the next turn' of a single action by looking at all hypothetical universes where that action is performed, weighted by their current probability, and summing over the expectation of 'carbon atoms bound to four other carbon atoms' after some unit amount of time has passed.
- Can evaluate the 'future expected diamondness' of an action, over some finite time horizon, by assuming that its future self will also Bayes-update and maximize expected diamondness over that time horizon.
- On each turn, outputs the action with greatest expected diamondness over some finite time horizon.
Suppose our own real universe was amended to otherwise be exactly the same, but contain a single [impermeable](https://arbital.com/p/) hypercomputer. Suppose we defined an agent like the one above, using simulations of 1910-era models of physics, and ran that agent on the hypercomputer. Should we expect the result to be an actual diamond maximizer - expect that the outcome of running this program on a single hypercomputer would indeed be that most mass in our universe would be turned into carbon and arranged into diamonds?
### Anticipated failure: AIXI-atomic tries to 'maximize outside the simulation'
In fact, our own universe isn't atomic, it's nuclear and quantum-mechanical. This means that AIXI-atomic does not contain any hypotheses in its hypothesis space that *directly represent* our universe. By the previously specified hypothesis of the thought experiment, AIXI-atomic's model of simulated physics was built to encompass all the experimental phenomena the programmers had yet discovered, but there were some quantum and nuclear phenomena that AIXI-atomic's programmers had not yet discovered. When those phenomena are discovered, there will be no simple explanation on the direct terms of the model.
Intuitively, of course, we'd like AIXI-atomic to discover the composition of nuclei, shift its models to use nuclear physics, and refine the 'carbon atoms' mentioned in its utility function to mean 'atoms with nuclei containing six protons'.
But we didn't actually specify that when constructing the agent (and saying how to do it in general is, so far as we know, hard; in fact it's the whole ontology identification problem). We constrained the hypothesis space to contain only universes running on the classical physics that the programmers knew about. So what happens instead?
Probably the 'simplest atomic hypothesis that fits the facts' will be an enormous atom-based computer, *simulating* nuclear physics and quantum physics in order to create a simulated non-classical universe whose outputs are ultimately hooked up to AIXI's webcam. From our perspective this hypothesis seems silly, but if you restrict the hypothesis space to only classical atomic universes, that's what ends up being the computationally simplest hypothesis that predicts, in detail, the results of nuclear and quantum experiments.
AIXI-atomic will then try to choose actions so as to maximize the amount of expected diamond inside the probable *outside universes* that could contain the giant atom-based simulator of quantum physics. It is not obvious what sort of behavior this would imply.
### Metaphor for difficulty: AIXI-atomic cares about only fundamental carbon
One metaphorical way of looking at the problem is that AIXI-atomic was implicitly defined to care only about diamonds made out of *ontologically fundamental* carbon atoms, not diamonds made out of quarks. A probability function that assigns 0 probability to all universes made of quarks, and a utility function that outputs a constant on all universes made of quarks, [yield functionally identical behavior](https://arbital.com/p/). So it is an exact metaphor to say that AIXI-atomic only *cares* about universes with ontologically basic carbon atoms, given that AIXI-atomic's hypothesis space only contains universes with ontologically basic carbon atoms.
Imagine that AIXI-atomic's hypothesis space does contain many other universes with other laws of physics, but its hand-coded utility function just returns 0 on those universes since it can't find any 'carbon atoms' inside the model. Since AIXI-atomic only cares about diamond made of fundamental carbon, when AIXI-atomic discovers the experimental data implying that almost all of its probability mass should reside in nuclear or quantum universes in which there were no fundamental carbon atoms, AIXI-atomic stops caring about the effect its actions have on the vast majority of probability mass inside its model. Instead AIXI-atomic tries to maximize inside the tiny remaining probabilities in which it *is* inside a universe with fundamental carbon atoms that is somehow reproducing its sensory experience of nuclei and quantum fields... for example, a classical atomic universe containing a computer simulating a quantum universe and showing the results to AIXI-atomic.
From our perspective, we failed to solve the 'ontology identification problem' and get the real-world result we [intended](https://arbital.com/p/6h), because we tried to define the agent's *utility function* over properties of a universe made out of atoms, and the real universe turned out to be made of quantum fields. This caused the utility function to *fail to bind* to the agent's representation in the way we intuitively had in mind.
Today we do know about quantum mechanics, so if we tried to build a diamond maximizer using some bounded version of the above formula, it might not fail on account of [the particular exact problem](https://arbital.com/p/48) of atomic physics being false.
But perhaps there are discoveries still remaining that would change our picture of the universe's ontology to imply something else underlying quarks or quantum fields. Human beings have only known about quantum fields for less than a century; our model of the ontological basics of our universe has been stable for less than a hundred years of our human experience. So we should seek an AI design that does not assume we know the exact, true, fundamental ontology of our universe during an AI's [development phase](https://arbital.com/p/5d).
As another important metaphorical case in point, consider a human being who feels angst on contemplating a universe in which "By convention sweetness, by convention bitterness, by convention color, in reality only atoms and the void" (Democritus); someone who wonders where there is any room in this collection of lifeless particles for love, free will, or even the existence of people. Since, after all, people are just *mere* collections of atoms. This person can be seen as undergoing an ontology identification problem: they don't know how to find the objects of value in a representation containing atoms instead of ontologically basic people.
Human beings simultaneously evolved a particular set of standard mental representations (e.g., a representation for colors in terms of a 3-dimensional subjective color space) along with evolving emotions that bind to these representations ([identification of flowering landscapes as beautiful](http://en.wikipedia.org/wiki/Evolutionary_aesthetics#Landscape_and_other_visual_arts_preferences). When someone visualizes any particular configuration of 'mere atoms', their built-in desires don't automatically fire and bind to that mental representation, the way they would bind to the brain's native representation of the environment. Generalizing that no set of atoms can be meaningful (since no abstract configuration of 'mere atoms' they imagine, seems to trigger any emotions to bind to it) and being told that reality is composed entirely of such atoms, they feel they've been told that the true state of reality, underlying appearances, is a meaningless one.
## The utility rebinding problem
Intuitively, we would think it was [common sense](https://arbital.com/p/) for an agent that wanted diamonds to react to the experimental data identifying nuclear physics, by deciding that a carbon atom is 'really' a nucleus containing six protons. We can imagine this agent [common-sensically](https://arbital.com/p/) updating its model of the universe to a nuclear model, and redefining the 'carbon atoms' that its old utility function counted to mean 'nuclei containing exactly six protons'. Then the new utility function could evaluate outcomes in the newly discovered nuclear-physics universe. The problem of producing this desirable agent behavior is the **utility rebinding problem**.
To see why this problem is nontrivial, consider that the most common form of carbon is C-12, with nuclei composed of six protons and six neutrons. The second most common form of carbon is C-14, with nuclei composed of six protons and eight neutrons. Is C-14 *truly* carbon - is it the sort of carbon that can participate in valuable diamonds of high utility? Well, that depends on your utility function, obviously; and from a human perspective it just sounds arbitrary.
But consider a closely analogous question from a humanly important perspective: Is a chimpanzee truly a person? Where the question means not, "How do we arbitrarily define the syllables per-son?" but "Should we care a lot about chimpanzees?", i.e., how do we define the part of our preferences that care about people, to the possibly-person edge cases of chimpanzees?
If you live in a world where chimpanzees haven't been discovered, you may have an easy time running your utility function over your model of the environment, since the objects of your experience classify sharply into the 'person' and 'nonperson' categories. Then you discover chimpanzees, and they're neither typical people (John Smith) nor typical nonpeople (like rocks).
We can see the force of this question as arising from something like an ontological shift: we're used to valuing cognitive systems that are made from whole human minds, but it turns out that minds are made of parts, and then we have the question of how to value things that are made from some of the person-parts but not all of them... sort of like the question of how to treat carbon atoms that have the usual number of protons but not the usual number of neutrons.
Chimpanzees definitely have neural areas of various sizes, and particular cognitive abilities - we can suppose the empirical truth is unambiguous at this level, and known to us. So the question is then whether we regard a particular configuration of neural parts (a frontal cortex of a certain size) and particular cognitive abilities (consequentialist means-end reasoning and empathy, but no recursive language) as something that our 'person' category values... once we've rewritten the person category to value configurations of cognitive parts, rather than whole atomic people.
In fact, we run into this question as soon as we learn that human beings run on brains and the brains are made out of neural regions with functional properties; we can then *imagine* chimpanzees even if we haven't met any, and ask to what degree our preferences should treat this edge-person as deserving of moral rights. If we can 'rebind' our emotions and preferences to live in a world of nuclear brains rather than atomic people, this rebinding will *implicitly* say whether or not a chimpanzee is a person, depending on how our preference over brain configurations treats the configuration that is a chimpanzee.
In this sense the problem we face with chimpanzees is exactly analogous to the question a diamond maximizer would face after discovering nuclear physics and asking itself whether a carbon-14 atom counted as 'carbon' for purposes of caring about diamonds. Once a diamond maximizer knows about neutrons, it can see that C-14 is chemically like carbon and forms the same kind of chemical bonds, but that it's heavier because it has two extra neutrons. We can see that chimpanzees have a similar brain architectures to the sort of people we always considered before, but that they have smaller frontal cortexes and no ability to use recursive language, etcetera.
Without knowing more about the diamond maximizer, we can't guess what sort of considerations it might bring to bear in deciding what is Truly Carbon and Really A Diamond. But the breadth of considerations human beings need to invoke in deciding how much to care about chimpanzees, is one way of illustrating that the problem of rebinding a utility function to a shifted ontology is [https://arbital.com/p/value-laden](https://arbital.com/p/value-laden) and can potentially undergo [https://arbital.com/p/excursions](https://arbital.com/p/excursions) into [complex desiderata](https://arbital.com/p/5l). Redefining a [moral category](https://arbital.com/p/) so that it talks about the underlying parts of what were previously seen as all-or-nothing atomic objects, may carry an implicit ruling about how to value many kinds of [https://arbital.com/p/edge-case](https://arbital.com/p/edge-case) objects that were never seen before.
It's possible that some formal part of this problem could be usefully carved out from the complex value-laded edge-case-reclassification part. E.g., how would you redefine carbon as C12 if there were no other isotopes? How would you rebind the utility function to *at least* C12? In general, how could edge cases be [identified and queried](https://arbital.com/p/) by an [online Genie](https://arbital.com/p/6w)?
### Reappearance on the reflective level
An obvious thought (especially for [online Genies](https://arbital.com/p/6w)) is that if the AI is unsure about how to reinterpret its goals in light of a shifting mental representation, it should query the programmers.
Since the definition of a programmer would then itself be baked into the [preference framework](https://arbital.com/p/5f), the problem might [reproduce itself on the reflective level](https://arbital.com/p/) if the AI became unsure of where to find 'programmers': "My preference framework said that programmers were made of carbon atoms, but all I can find in this universe are quantum fields!"
Thus the ontology identification problem is arguably one of the [critical subproblems](https://arbital.com/p/) of value alignment: it plausibly has the property that, if botched, it could potentially [crash the error recovery mechanism](https://arbital.com/p/).
## Diamond identification in multi-level maps
A realistic, [bounded diamond maximizer](https://arbital.com/p/5g) wouldn't represent the outside universe with atomically detailed or quantum-detailed models. Instead, a bounded agent would have some version of a [multi-level map](https://arbital.com/p/) of the world in which the agent knew in principle that things were composed of atoms, but didn't model most things in atomic detail. A bounded agent's model of an airplane would have wings, or wing shapes, rather than atomically detailed wings. It would think about wings when doing aerodynamic engineering, atoms when doing chemistry, nuclear physics when doing nuclear engineering, and definitely not try to model everything in its experience down to the level of quantum fields.
At the present, there are not yet any proposed formalisms for how to do probability theory with multi-level maps (in other words: [nobody has yet put forward a guess at how to solve the problem even given infinite computing power](https://arbital.com/p/)). But it seems very likely that, if we did know what multi-level maps looked like formally, it might suggest a formal solution to non-value-laden utility-rebinding.
E.g., if an agent already has a separate high-level concept of 'diamond' that's bound to a lower-level concept of 'carbon atoms bound to four other carbon atoms', then maybe when you discover nuclear physics, the multi-level map itself would tend to suggest that 'carbon atoms' be re-bound to 'nuclei with six protons' or 'nuclei with six protons and six neutrons'. It might at least be possible to phrase the equivalent of a prior or mixture of weightings for how the utility function would re-bind itself, and say, "Given this prior, care about whatever that sparkly hard stuff 'diamond' ends up binding to on the lower level."
Unfortunately, we have very little formal probability theory to describe how a multi-level map would go from 'that unknown sparkly hard stuff' to 'carbon atoms bound to four other carbon atoms in tetrahedral patterns, which is the only known repeating pattern for carbon atoms bound to four other carbon atoms' to 'C12 and C14 are chemically identical but C14 is heavier'. This being the case, we don't know how to say anything about a dynamically updating multi-level map inside a [preference framework](https://arbital.com/p/5f).
If we were actually trying to build a diamond maximizer, we would be likely to encounter this problem long before it started formulating new physics. The equivalent of a computational discovery that changes 'the most efficient way to represent diamonds' is likely to happen much earlier than a physical discovery that changes 'what underlying physical systems probably constitute a diamond'.
This also means that we are liable to face the ontology identification problem long before the agent starts discovering new physics, as soon as it starts revising its representation. Only very unreflective agents with strongly fixed-in-place representations for every part of the environment that we think the agent is supposed to care about, would let the ontology identification problem be elided entirely. Only *very* not-self-modifying agents, or [Cartesian agents](https://arbital.com/p/) with goals formulated only over sense data, would not confront their programmers with ontology identification problems.
## Research paths
More of these are described in the [main article on ontology identification](https://arbital.com/p/5c). But here's a quick list of some relevant research subproblems and avenues:
* Transparent priors. Priors that are constrained to meaningful hypothesis spaces that the utility function knows how to interpret. Rather than all Turing machines being hypotheses, we could have only causal models being hypotheses, and then preference frameworks that talked about 'the cause of' labeled sensory data could read the hypotheses. (Note that the space of causal models can be Turing-complete, in the sense of being able to embed any Turing machine as a causal system. So we'd be able to explain any computable sense data in terms of a causal model - we wouldn't sacrifice any explanatory power by restricting ourselves to 'causal models' instead of 'all Turing machines'.)
* Reductionist identifications. Being able to go hunting, inside the current model of an environment, for a thingy that looks like it's made out of type-1 thingies bound to four other type-1 thingies, where a type-1 thingy is itself made out of six type-2, six type-3, and six type-4 thingies (6 electrons, 6 protons, 6 neutrons).
* Causal identifications. Some variation on trying to identify diamonds as the causes of pictures of diamonds, for some data set of things labeled as diamonds or non-diamonds. This doesn't work immediately because then it's not clear whether "the cause" of the picture is the photons reflecting off the diamond, the diamond itself, the geological pressures that produced the diamond, the laws of physics, etcetera. But perhaps some crossfire of identification could pin down the 'diamond' category inside a causal model, by applying some formal rule to several sets of the right sort of labeled sense data. As an open problem: If an agent has a rich causal model that includes categories like 'diamond' somewhere unknown, and you can point to labeled sensory datasets and use casual and categorical language, what labeled datasets and language would unambiguously identify diamonds, and no other white sparkly things, even if the resulting concept of 'diamond' was being [subject to maximization](https://arbital.com/p/2w)? (Note that under this approach, as with any preference framework that talks about the causes of sensory experiences, we need to worry about [Christiano's Hack](https://arbital.com/p/5j).)
* Ambiguity resolution. Detect when an ontology identification is ambiguous, and refer the problem to the user/programmer. At our present stage of knowledge this seems like pretty much the same problem as [inductive ambiguity resolution](https://arbital.com/p/).
* Multi-level maps. Solve the problem of bounded agents having maps of the world that operate at multiple, interacting reductionist levels, as designed to save on computing power. Then solve ontology identification by initially binding to a higher level of the map, and introducing some rule for re-binding as the map updates. Note that multi-level mapping is an [AGI rather than FAI problem](https://arbital.com/p/), meaning that work here [should perhaps be classified](https://arbital.com/p/).
* Solution for [non-self-modifying Genies](https://arbital.com/p/6w). Try to state a 'hack' solution to ontology identification that would work for an AI running on fixed algorithms where a persistent knowledge representation is known at development time.
## Some implications
The ontology identification problem is one more reason to believe that [hard-coded object-level utility functions should be avoided](https://arbital.com/p/) and that [value identification in general is hard](https://arbital.com/p/).
Ontology identification is heavily entangled with AGI problems, meaning that some research on ontology identification [may need to be non-public](https://arbital.com/p/). This is an example instance of the argument that [at least some VAT research may need to be non-public](https://arbital.com/p/), based on that [at least some AGI research is better off non-public](https://arbital.com/p/).
|
4098121a-5795-4e90-94b9-677e76ea8311
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Defeat may be irreversibly catastrophic
Context: This is a linkpost for https://aisafety.info/questions/NM3P/9:-Defeat-may-be-irreversibly-catastrophic
This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any feedback. The most up-to-date version of this article is on our website.
When you imagine a global catastrophe, maybe the kind of event that comes to mind is one that strikes a big blow against human civilization but doesn’t ruin it permanently. Events like pandemics, environmental disasters, and even nuclear wars would cause enormous suffering and millions or billions of deaths, but would probably leave part of human civilization alive to recover. Future generations could still flourish.
An AI takeover catastrophe is not like that. An inherent feature of AI taking over control for its own long-term purposes is that it retains that control permanently. We’d be facing an adversary that, rather than striking a one-off blow, continues to optimize the world for its own ends long-term.
Such a scenario could result in full human extinction, because:
* The AI may kill us deliberately, because we’re a threat. For example, we might build a different powerful AI that could compete with it. (In the later stages of an AI takeover, maybe humanity will be not even a threat — but then again, it will then also be extremely easy to kill us.)
* The conditions supporting human life are unlikely to persist when there’s a technologically hyper-advanced AI civilization using the world for its own ends. Any of a wide range of changes such a civilization will plausibly make to the world — capturing all the sun’s energy output, changing the composition of the atmosphere, using the Earth’s surface for building materials — would result in our extinction
More generally, the risk of permanent loss of the potential for good outcomes for humanity is called existential risk. Human extinction is the main and most straightforward case. But in a world controlled by misaligned AI, even if
|
da9e0858-1bc4-4341-9b44-d8f6d3b539bf
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Using Natural Language to Guide Meta-Learning Agents towards Human-like Inductive Biases.
Using Natural Language to Guide Meta-Learning Agents towards
Human-like Inductive Biases
Sreejan Kumar1, Ishita Dasgupta2, Michael Hu1, Raja Marjieh1, Robert D. Hawkins1,
Nathaniel D. Daw1, Jonathan D. Cohen1, Karthik Narasimhan1, and Thomas L. Griffiths1
1Princeton University
2Deepmind
Abstract
Inductive biases are a key component of human
intelligence, allowing people to acquire, rep-
resent, and use abstract knowledge. Although
meta-learning has emerged as an approach to
endowing neural networks with inductive bi-
ases, agents trained via meta-learning can use
very different strategies compared to humans.
We show that co-training these agents on pre-
dicting human-generated natural language task
descriptions guides them toward human-like in-
ductive biases that more appropriately capture
the structure of the task distribution as humans
see it. We further show that the level of abstrac-
tion at which humans write these descriptions
influences the size of the effect. This work
provides a foundation for investigating how to
collect task descriptions at the appropriate level
of abstraction to leverage for approximating
human-like learning of structured representa-
tions in neural networks.
1 Introduction
Human learners are guided by strong inductive
biases towards abstract knowledge (Tenenbaum
et al., 2011; Griffiths et al., 2010); these biases
present one of the most salient differences between
humans and neural network-based learners (Lake
et al., 2017). One emerging approach to bestowing
human-like inductive biases on neural networks is
meta-learning (Griffiths et al., 2019; Hospedales
et al., 2020). In meta-learning paradigms, an agent
is trained not just on a single task but on a dis-
tribution of tasks, with the aim of acquiring the
underlying abstractions that these tasks have in
common. However, since neural networks are not
easily interpretable, it can be difficult to tell if the
resulting neural networks actually acquired this
abstract knowledge, or whether they have simply
learned statistical artifacts correlated with abstract
rules. Recently, Kumar et al. (2021) found that
neural agents are biased towards learning the latter.
Specifically, through the use of a task distributiongenerated from an abstract compositional grammar
and a corresponding control task distribution with
closely matched statistics, they found agents do bet-
ter in the control task distribution whereas humans
do better in the abstract task distribution, demon-
strating a difference in inductive biases between
humans and agents.
What explains such differences? One possi-
bility is that human biases toward abstract struc-
ture are related to our language abilities (Spelke,
2003; Lupyan and Bergen, 2016). Indeed, recent
work in machine learning has revealed how neural
network representations can be shaped and struc-
tured through natural language supervision (An-
dreas et al., 2018; Luketina et al., 2019; Wong et al.,
2021; Narasimhan et al., 2018; Mu et al., 2020).
In this work, we show that guiding meta-
reinforcement learning agents with natural lan-
guage descriptions not only increases performance
on abstract task distributions, but also results in
more human-like behavior: it decreases perfor-
mance on control task distributions where humans
perform poorly. Further, while much of language-
guided RL work focuses on synthetic descriptions,
we investigate different kinds of human-generated
descriptions. We collect human descriptions at
different levels of abstraction and find that guid-
ance with more abstract descriptions lead to more
human-like inductive biases in agents.
Our approach is to first extend and replicate the
results of Kumar et al. (2021). Specifically, instead
of developing an abstract task distribution using
handwritten rules as in Kumar et al. (2021), we di-
rectly project human priors into a task distribution
(see Fig 1B). We then test a meta-RL agent’s ability
to acquire this task distribution’s emergent abstract
priors by building a control task distribution using
the same approach as Kumar et al. (2021) (see Fig
1C). We replicated the double dissociation effect
seen in Kumar et al. (2021) (see Fig 1E) and then
further show that we can guide the agent towards
Underlying Board Start Step 1 Step 2 Step 3 Step 4 Step 10
Goal: Uncover all red with
as little blue as possibleRL Task
ParadigmA
...
What should the
color of the white tile
be given that the
board is generated
by a very simple rule?Gibbs Sampling with PeopleB
Train network to predict held-out tile. Sampling from Human Inductive Bias Sampling from Neural Network Inductive Bias C
Use network for Gibbs SamplingSample from
Human Distribution
and mask tileFill in masked
tile
Random Initialization
Human Samples
Control SamplesD ERL Task Performance
Control SampleHuman-Generated
(GSP)Machine-Generated
(Control)
Task DistributionBlue Tiles Revealed Z-scored
by Nearest Neighbor HeuristicFigure 1: Meta-RL task paradigm. (A) In the tile-revealing task, an agent sequentially reveals tiles to uncover
a picture on a 2D grid. We elicit (B) human priors and (C) control priors over the task distribution using Gibbs
sampling (Geman and Geman, 1984). (D) Samples from human and control distributions. (E) Performance of
(independent) humans and machine-learning agents on the tile-revealing task with human and control boards.
Performance is based on number of blue tiles revealed (lower is better; see Appendix for details). Error bars are
95% confidence intervals.
learning a human-like inductive bias through the
use of natural language co-supervision.
2 Methods
Tile-revealing task. We employ the tile-
revealing task paradigm developed in Kumar et al.
(2021) (see Fig. 1A). The observation is a 4×4
grid of tiles that are initially white except for one
red tile. Actions – clicking on white tiles – reveal
those tiles to be either red or blue. The episode
ends when the agent reveals all the red tiles. There
is a reward for each red tile revealed, and a penalty
for each blue tile revealed. The goal therefore is
to reveal all the red tiles while revealing as few
blue tiles as possible. One “board” with a fixed
configuration of red tiles defines a single task. A
distribution over tasks is defined by specifying a
distribution over different 4×4grids of red and
blue tiles (boards).
Eliciting human priors with Gibbs sampling.
In order to elicit human inductive biases, we use
a technique called Gibbs Sampling with People
(GSP; Harrison et al., 2020, see Fig. 1B). We ini-
tialize a random 4×4grid with red and blue tiles,
mask out a tile, and ask a human participant to
predict the color of the masked-out tile. We then
change that tile to match the human’s prediction
and present the updated grid to another participant,
masking out a different tile. This sequence of de-
cisions implements a Markov chain; the stationary
distribution of this chain is the implicit prior distri-
bution people hold over 4×4grid colorings (Har-
rison et al., 2020), There are several recognizable
abstract concepts that emerged within the resultinggrids, such as lines, squares, and continuous shapes
(see Fig. 1D).
Constructing a control distribution. We cre-
ated a control distribution, following Kumar et al.
2021, that matches the statistics of the GSP boards
but uses a different underlying generative process
(i.e. not produced by human decisions, see Fig. 1C).
Specifically, we train a fully connected neural net-
work to encode the conditional distributions of the
GSP boards: we mask out a random tile in each
board, and train the network to predict its value
given the other tiles (similar to masked language
models; (Devlin et al., 2018)). We then sampled
boards from the network’s learned conditionals
with Gibbs sampling. This is the same process
we used to generate the GSP boards, but using the
trained neural network to generate the conditional
distributions instead of human samples. We are
therefore sampling from the distribution the net-
work places over 4×4grids (combining its own
inductive bias and the data from the GSP boards).
We refer to this distribution as a “control” distribu-
tion, which is comprised of tasks that are generated
with different underlying generative processes but
share certain statistical properties.
Collecting natural-language descriptions. We
hypothesized that linguistic descriptions of the GSP
boards may help guide the agent’s inductive bias
towards more human-like abstractions. To test this
hypothesis, we collected natural-language descrip-
tions of 500 GSP boards from a naive group of
participants. There were three types of descrip-
tions collected: two that were human-generated
High Level Prompt: Be succinct.
• “A U shape”
• “A wide letter U”
Low Level Prompt: Be as detailed as possible
• “Square shaped cluster of white boxes to upper-middle.
An upside-down table shaped red box formation. ”
• “The red and white boxes represent 50 percent of the
picture. ’ , 'A large grid made up of 16 identical sized squares.
8 of which are coloured red. The red ones /f_ill the 3 row
down and the top 3 of column 1 and column 4' , 'squares in
the shape of a U'”
Synthetic Low Level Template
• “The reds are in /f_irst row and /f_irst column, /f_irst row and
fourth column, second row and /f_irst column, second row
and fourth column, third row and /f_irst column, third row
and second column, third row and third column, third row
and fourth column. ” Board StimulusFigure 2: Types of Text Descriptions Obtained for
GSP Boards We obtained three types of descriptions
for 500 of the GSP boards: high-level, low-level, and
synthetic low-level. The first two were collected directly
from humans using different types of prompts that em-
phasized succinctness and detail respectively. The third
was generated from a handmade template that verbalizes
the location of red tiles. When showing participants the
boards, we converted blue tiles to white tiles in order
to have them focus their description on the red tiles’
locations.
under different prompts and one that was synthet-
ically generated using a template (see Fig. 2 and
Appendix for exact wording of prompts). High-
level descriptions were collected from humans who
were given a prompt that encouraged succinctness
in descriptions. Low-level descriptions were col-
lected from humans who were given a prompt that
encouraged being verbose and detailed. Synthetic
low-level descriptions are not human generated and
were obtained by using a hand-written template that
verbalizes the location of all the red tiles.
Grounding agents with descriptions. We train
a commonly used RNN-based meta-reinforcement
learning agent (Wang et al., 2018; Duan et al.,
2016) using Proximal Policy Optimization (PPO;
Schulman et al. (2017)). See Fig. 3 and Appendix
for more details.
In order to guide the agent to learn a human-like
inductive bias, we introduce a language ground-
ing term to the loss function: loss=LPPO(θ) +
clangLlang(ˆψθ, ψ). Here LPPO(θ)is the original
Observation
x(t)enca(t-1)
V
πenc(x(t))
Underlying
Board
“A U Shape”
Board
DescriptionBERT
r(t-1)
BERT Embedding Grounding
TaskRL TaskFigure 3: Grounding architecture. A CNN encoder ob-
serves the board state and passes it onto an LSTM policy
network conditioned on the previous timestep’s action
and reward. We have the agent concurrently predict
the BERT embedding of the corresponding language
description using the encoder.
PPO loss function, clangis a hyperparameter co-
efficient that weights the language loss Llang,ψ
is the language attribute , and ˆψθis the agent’s
prediction of the language attribute. Optimizing for
an auxiliary language task jointly with the original
task has previously been found to shape the latent
representations used in the original task (Mu et al.,
2020; Lampinen et al., 2021).
In our study, ψis the BERT embedding of the
uncovered board’s corresponding language descrip-
tion, obtained using the SentenceTransformer pack-
age (https://www.sbert.net/ , based on
Reimers and Gurevych (2019)). ˆψθis generated
using a small network (two layer MLP) on top of
the board encoding shared with the RL task (see
Figure 3). Llangis the MSE between the predicted
and actual BERT embedding of the language de-
scription.
3 Results
We trained all agents on the GSP boards (see Ap-
pendix for details) and evaluated them on held-out
GSP and control boards. We then compared this
held-out test performance against human perfor-
mance on these test boards (see Fig. 1E). Perfor-
mance is based on the number of blue tiles revealed
in the episode, z-scored by the performance of a
nearest neighbor heuristic, so lower is better (see
Appendix). Results are shown in Fig. 4. First, ex-
amining the performance of human participants and
non-linguistic agents, we observe the same double
dissociation results found by Kumar et al. (2021):
humans perform better in the abstract task distribu-
tion and agents perform better in the control task
distribution. Next, we examine agents that were
co-trained with a language loss Llangon three dif-
ferent kinds of language data: low-level, high-level,
and synthetic low-level (see Fig. 2 for examples
of the different kinds of language data). As a final
baseline, we also considered an autoencoder agent
trained to predict the underlying board state (i.e.,
which tiles are red or blue) rather than the board’s
corresponding language attribute.
RL Task Performance
Human Generated (GSP)Task Distribution
Machine Generated (Control)
Blue Tiles Revealed Z-scored by
Nearest Neighbor Heuristic
Figure 4: Language-Grounded Agent Experiment
Results Performance of various agents on held-out tasks
for each task distribution using agents co-trained with
the language objective. As in Fig. 1, performance is
evaluated by z-scoring the number of blue tiles revealed
(lower is better ) relative to a nearest neighbor heuristic.
Error bars are 95% confidence intervals.
We set out to test whether grounding in human-
generated natural language descriptions will result
in our meta-RL agent producing more human-like
performance. We know humans perform better on
GSP boards than control boards (Fig. 1), while
generic agents do the opposite. An agent perform-
ing better on the GSP boards and worse on the
control boards therefore indicates more human-like
behavior.
We see that grounding on human-generated
descriptions leads to a human-like inductive bias
(low and high bars of Fig. 4). Each of them perform
better at the GSP boards than the control boards,
just like humans do. In contrast, although the au-
toencoder agent (which does not use language) is
substantially better on the GSP boards than the
original agent, the autoencoder grounding loss also
boosts its performance on the control boards, which
indicates that its boost in performance relative tothe original agent is not from acquiring a human-
like inductive bias (but could be an interesting in-
ductive bias in and of itself). We also find that
grounding in synthetic text does not seem to
lead to acquiring human-like inductive biases ei-
ther, since the agent using synthetic low-level text
closely matches the autoencoder and does better in
the control distribution than the GSP distribution.
We also find that the level of abstraction at
which humans write their description influences
the agent’s acquired inductive bias , as indicated
by the differences in performance among low and
high-level grounded agents. In all descriptions
(even in low-level ones), humans write about ab-
stract concepts (e.g. “squares,” “boxes,” “clusters,”
etc). These abstract concepts are most present in
high-level descriptions as they let humans to be
as succinct in their descriptions as possible. The
agent co-trained to predict these high-level descrip-
tion may therefore distill these abstract concepts
very strongly into the representations it learns. This
could explain why the high-level agent has a “super-
human“ inductive bias toward abstraction, where it
does best on the GSP boards (relative to all other
agents and even humans) and the worst on the con-
trol boards (worse than humans, and even worse
than the nearest neighbour heuristic).
4 Conclusion
In this work, we show how meta-reinforcement
learning agents can be guided to have human-like
inductive biases towards abstraction. To set this up,
we used the task paradigm of Kumar et al. (2021)
with a task distribution that directly embeds hu-
man priors through people using Gibbs Sampling
with People (Fig. 1B). We used the procedure in-
troduced in Kumar et al. (2021) to build a control
task distribution (see Fig. 1D) to help benchmark
for acquiring human-like inductive biases. Our
results show that having the agent predict human-
generated language descriptions while doing the
task during training can guide the agent towards
learning human-like inductive biases (Figure 4).
We also manipulated the level of abstraction at
which humans write their descriptions (Fig. 2) and
showed that this can affect how well the learner
acquires an inductive bias more consistent with hu-
man behavior. This lays the groundwork for future
research in learning human-like abstract represen-
tations to move toward closing the gap between
human and machine intelligence.
References
Jacob Andreas, Dan Klein, and Sergey Levine. 2018.
Learning with latent language. In Proceedings of
the 2018 Conference of the North American Chap-
ter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long Pa-
pers) , pages 2166–2179.
James Bergstra, Rémi Bardenet, Yoshua Bengio, and
Balázs Kégl. 2011. Algorithms for hyper-parameter
optimization. Advances in neural information pro-
cessing systems , 24.
Michael Chmielewski and Sarah C Kucker. 2020. An
mturk crisis? shifts in data quality and the impact on
study results. Social Psychological and Personality
Science , 11(4):464–473.
JH Clark. 1924. The ishihara test for color blindness.
American Journal of Physiological Optics .
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2018. Bert: Pre-training of deep
bidirectional transformers for language understand-
ing. arXiv preprint arXiv:1810.04805 .
Yan Duan, John Schulman, Xi Chen, Peter L Bartlett,
Ilya Sutskever, and Pieter Abbeel. 2016. Rl Θ 2: Fast
reinforcement learning via slow reinforcement learn-
ing. arXiv preprint arXiv:1611.02779 .
Stuart Geman and Donald Geman. 1984. Stochastic re-
laxation, gibbs distributions, and the bayesian restora-
tion of images. IEEE Transactions on pattern analy-
sis and machine intelligence , (6):721–741.
Thomas L Griffiths, Frederick Callaway, Michael B
Chang, Erin Grant, Paul M Krueger, and Falk Lieder.
2019. Doing more with less: meta-reasoning and
meta-learning in humans and machines. Current
Opinion in Behavioral Sciences , 29:24–30.
Thomas L Griffiths, Nick Chater, Charles Kemp, Amy
Perfors, and Joshua B Tenenbaum. 2010. Probabilis-
tic models of cognition: Exploring representations
and inductive biases. Trends in cognitive sciences ,
14(8):357–364.
Thomas L Griffiths, Dylan Daniels, Joseph L Auster-
weil, and Joshua B Tenenbaum. 2018. Subjective
randomness as statistical inference. Cognitive psy-
chology , 103:85–109.
Peter Harrison, Raja Marjieh, Federico Adolfi, Pol van
Rijn, Manuel Anglada-Tort, Ofer Tchernichovski,
Pauline Larrouy-Maestri, and Nori Jacoby. 2020.
Gibbs sampling with people. Advances in Neural
Information Processing Systems , 33:10659–10671.
Timothy Hospedales, Antreas Antoniou, Paul Micaelli,
and Amos Storkey. 2020. Meta-learning in neural net-
works: A survey. arXiv preprint arXiv:2004.05439 .Sreejan Kumar, Ishita Dasgupta, Jonathan Cohen,
Nathaniel Daw, and Thomas Griffiths. 2021. Meta-
learning of structured task distributions in humans
and machines. In International Conference on Learn-
ing Representations .
Brenden M Lake, Tomer D Ullman, Joshua B Tenen-
baum, and Samuel J Gershman. 2017. Building ma-
chines that learn and think like people. Behavioral
and brain sciences , 40.
Andrew K Lampinen, Nicholas A Roy, Ishita Dasgupta,
Stephanie CY Chan, Allison C Tam, James L McClel-
land, Chen Yan, Adam Santoro, Neil C Rabinowitz,
Jane X Wang, et al. 2021. Tell me why!–explanations
support learning of relational and causal structure.
arXiv preprint arXiv:2112.03753 .
Jelena Luketina, Nantas Nardelli, Gregory Farquhar,
Jakob N Foerster, Jacob Andreas, Edward Grefen-
stette, Shimon Whiteson, and Tim Rocktäschel. 2019.
A survey of reinforcement learning informed by nat-
ural language. In IJCAI .
Gary Lupyan and Benjamin Bergen. 2016. How lan-
guage programs the mind. Topics in cognitive sci-
ence, 8(2):408–424.
Jesse Mu, Percy Liang, and Noah Goodman. 2020.
Shaping visual representations with language for few-
shot classification. In Proceedings of the 58th Annual
Meeting of the Association for Computational Lin-
guistics , pages 4823–4830.
Karthik Narasimhan, Regina Barzilay, and Tommi
Jaakkola. 2018. Grounding language for transfer
in deep reinforcement learning. Journal of Artificial
Intelligence Research , 63:849–874.
Antonin Raffin, Ashley Hill, Maximilian Ernestus,
Adam Gleave, Anssi Kanervisto, and Noah Dormann.
2019. Stable baselines3.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
InProceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the 9th
International Joint Conference on Natural Language
Processing (EMNLP-IJCNLP) , pages 3982–3992.
John Schulman, Filip Wolski, Prafulla Dhariwal,
Alec Radford, and Oleg Klimov. 2017. Proxi-
mal policy optimization algorithms. arXiv preprint
arXiv:1707.06347 .
Elizabeth S Spelke. 2003. What makes us smart? core
knowledge. Language in mind: Advances in the
study of language and thought , page 277.
Joshua B Tenenbaum, Charles Kemp, Thomas L Grif-
fiths, and Noah D Goodman. 2011. How to grow a
mind: Statistics, structure, and abstraction. science ,
331(6022):1279–1285.
Jane X Wang, Zeb Kurth-Nelson, Dharshan Kumaran,
Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Demis
Hassabis, and Matthew Botvinick. 2018. Prefrontal
cortex as a meta-reinforcement learning system. Na-
ture neuroscience , 21(6):860–868.
Catherine Wong, Kevin M Ellis, Joshua Tenenbaum, and
Jacob Andreas. 2021. Leveraging language to learn
program abstractions and search heuristics. In In-
ternational Conference on Machine Learning , pages
11193–11204. PMLR.
ABuilding an Abstract Task Distribution
using Gibbs Sampling with People
To generate a task distribution of boards directly
from humans we used Gibbs Sampling with People
(GSP; Harrison et al. 2020, and a similar task for
binary sequences in Griffiths et al. 2018). GSP
samples internal prior distributions by putting hu-
mans “in the loop” of a Gibbs sampler. In our case,
the stimulus space consisted of the space of 4×4
boards, and each of the 16 stimulus dimensions cor-
responded to the binary color of each tile, namely,
red or blue. One of these dimensions was masked
out (i.e. “greyed”) for the prediction task. Each
GSP trial consisted of a prediction task of predict-
ing what color the single masked square is in the
grid conditional on the colors of all other squares
on the grid. Once a decision is made, the result-
ing stimulus is passed on to a new participant who
repeats the task with another masked square and
so on. A sample is generated once a full sweep
through the all sixteen squares is completed, sim-
ilar to the standard procedure of Gibbs sampling.
In each trial, participants were presented with a
board with one of its tiles covered (indicated by a
white tile) as well as the following prompt “what
should be the underlying color of the covered white
tile such that the board is described by a very sim-
ple rule?” (Fig. 4A). They then delivered their
answer by clicking on a button that corresponded
to their color of choice. Overall, we ran 100 GSP
chains in parallel for 15 sweeps each ( 24000 total
possible unique boards), and chains were initial-
ized with randomly sampled boards. The order in
which tiles were masked out within each sweep
was also randomized across chains to avoid poten-
tial biases. When sampling from this distribution,
the probability of each board is based on how fre-
quent it occurred during the GSP sampling process.
We used the 500most probable boards to collect
language descriptions.
Participants were recruited on Amazon Mechan-
ical Turk (AMT) and a total of 272 participantscompleted the study. To ensure that participants
did not suffer from any color perception deficien-
cies, we ran the Ishihara color blindness test (Clark,
1924) as a pre-screening task. This also helped in
screening out automated scripts (“bots”) that mas-
querade as participants (Chmielewski and Kucker,
2020).
B Generating the Control Task
Distribution
The same protocol in Kumar et al. 2021 was used.
We trained a fully connected neural network (3
layers, 16 units each) to learn the conditional distri-
bution of each tile given all other tiles on the GSP
boards. These conditional distributions contain all
the relevant statistical information about the boards.
The network was given a board generated with an
abstract rule that had a random tile masked out and
trained to reproduce the entire board including the
randomly masked tile. The loss was the binary
cross-entropy between each of the predicted and
actual masked tiles, summed over all tiles. The net-
work was trained on samples from the GSP boards,
and achieved an accuracy of above 99%.
We used these conditional distributions to gener-
ate samples from the distribution of boards learned
using Gibbs sampling. We started with a grid in
which each tile is randomly set to red or blue with
probability 0.5. We then masked out one tile at a
time and ran the grid through the network to ex-
tract the probability of the missing tile being red or
blue from the trained conditional model. We then
assign the color of this tile by sampling from this
binomial probability. We repeated this by masking
each tile in the 4×4grid (in a random order) to
complete a single Gibbs sweep, and repeated this
whole Gibbs sweep 20 times to generate a single
sample. We generate 25 such independent samples
from the control distribution as held-out test data
for the meta-learning agent and sample from this
distribution during training (while holding out the
test set).
C Testing Humans on Abstract and
Control Tasks
We crowdsourced human performance on our task
using Prolific ( http://www.prolific.co )
for a compensation of $2.25 (averaging $13.55
per hour). Participants were shown the 4×4grid
on their web browser and used mouse-clicks to re-
veal tiles. Each participant was randomly assigned
to either the GSP or control boards. Each partic-
ipant was evaluated on the same test set of grids
used to evaluate the models (24 grids from their as-
signed task distribution in randomized order). Note
that a key difference between the human partici-
pants and model agents was that the humans did
not receive direct training on any of the task distri-
butions. Since participants had to reveal all red tiles
to move on to the next grid, they were implicitly
incentivized to be efficient (clicking as few blue
tiles as possible) in order to finish the task quickly.
We found that this was adequate to get good per-
formance. A reward structure similar to that given
to agents was displayed as the number of points
accrued, but did not translate to monetary reward.
There were 50 participants in each condition (GSP
and control), so 100 participants in total. This was
the same protocol used in Kumar et al. 2021.
D Training Meta-Reinforcement
Learning Agents on the Grid Task
Following previous work in meta-reinforcement
learning (Wang et al., 2018), we use an LSTM
meta-learner that takes the full board as input,
passes it through a convolutional layer and feeds
that, along with the previous action and reward,
to 120 LSTM units. The agent had 16 possible
actions corresponding to choosing a tile (on the
4×4board) to reveal. The reward function was:
+1 for revealing red tiles, -1 for blue tiles, +5 for
the last red tile, and -2 for choosing an already re-
vealed tile. The agent was trained using Proximal
Policy Optimization (PPO; Schulman et al. 2017
using the Stable Baselines package Raffin et al.
2019) for one million episodes. We performed a
hyperparameter sweep separately for the agents
without the grounding loss (i.e. original agents)
and with the grounding loss, since we have to tune
the new clangweight on the grounding loss jointly.
We performed a hyperparameter sweep for: batch
size, n_steps (number of steps to run in an environ-
ment update), gamma, learning rate, learning rate
schedule (constant or linear), clip range, number of
epochs, the λfor Generalized Advantage Estimate
(GAE λ), max grad norm, activation function, value
loss coefficient, entropy coefficient, and grounding
loss coefficient for agents with the grounding loss.
The hyperparameter sweep was done by sampling
from the space of hyperparameter using the Tree-
Structured Parzen Estimator (Bergstra et al., 2011).
We evaluated 200 samples of hyperparameters fromthe space for all agents. Both grounding and non-
grounding agents used the same hyperparameter
spaces to sweep over. We initially did a separate hy-
perparameter sweep for different grounding agents,
but we found in initial experiments that they all
reached similar hyperparameter values and train-
ing reward after the search. Hyperparameters were
evaluated by training on 100,000 episodes and look-
ing at the training reward. The environments used
during test time (Fig. 4) were completely held-out
during this process.
E Performance Evaluation on the Task
Doing well on the task is indicated by the ability to
reveal all red tiles on a grid while revealing as little
blue tiles as possible. So, we measure performance
by counting the number of blue tiles revealed in the
episode. Since different boards will have different
number of red tiles/have varying levels of difficulty,
we controlled for task difficulty/length by measur-
ing the performance relative to a “nearest neighbor”
heuristic. The nearest neighbor heuristic randomly
selects covered tiles that are adjacent to currently
uncovered red tiles (or any covered tile if such a
tile does not exist). For each board, we ran this
heuristic on the board 1000 times to generate a dis-
tribution of performances (i.e., number of blue tiles
revealed) and z-scored the human/agent’s number
of blue tiles revealed according to this distribution.
A z-score of below 0 means that the human/agent
did better than the mean performance of the nearest
neighbor heuristic, and the specific value of the
z-score reflects how many standard deviations the
human/agent did better than the mean performance
of the nearest neighbor heuristic.
F Natural Language Descriptions of the
GSP Boards
We took 500 of the highest probability GSP
boards and broke them up into sets of 25. We
then randomly assigned participants on Prolific
(http://www.prolific.co ) to these sets of
25 boards. Around 10 participants did each set of
the boards given the low level prompt and around
10 participants did each of the board given the high
level prompt. As a result, each participant wrote
descriptions for 25 boards and each board has ap-
proximately 20 descriptions, with about half being
from the low level prompt and half being from the
high level prompt.
The low-level prompt was: “Your goal is to de-
scribe this pattern of red squares in words. Be as
detailed as possible. Someone should be able to
reproduce the entire board given your description.
You may be rewarded based on how detailed your
description is.” and the high-level prompt was “Be
as general as possible in your description. Your de-
scription should use as few words as possible and
focus on the pattern of red squares as a whole and
not individual squares. You may be rewarded based
on how concise your description is.” We converted
the blue tiles to white when showing the boards
to the participants so that they would focus their
descriptions on the red tiles.
Synthetic low level descriptions were non-
human generated and used the template. The tem-
plate goes through the location of every tile and
says “The reds are in: Xth column and Yth row,...,”
for every red tile location.
G Hyperparameter Values
The following table contains the hyperparameters
used.
Agent No Grounding Loss Grounding Loss
batch_size 16 256
n_steps 2048 8
gamma 0.9 0.9
learning_rate 0.000516501 0.000376021
lr_schedule linear linear
ent_coef 1.3907E-05 1.45674E-06
clip_range 0.3 0.3
n_epochs 10 5
gae_lambda 0.8 0.95
max_grad_norm 2 0.6
vf_coef 0.000914363 0.016291309
activation_fn relu tanh
grounding_coef 0 0.494866282
H Training Curves of Agents
Figure 5: Training Reward Curves for All Agents
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.