| id | source | formatted_source | text |
|---|---|---|---|
1e4dc27f-3916-4c4a-88fc-172d8a7794db | trentmkelly/LessWrong-43k | LessWrong | Notes on the Safety in Artificial Intelligence conference
These are my notes and observations after attending the Safety in Artificial Intelligence (SafArtInt) conference, which was co-hosted by the White House Office of Science and Technology Policy and Carnegie Mellon University on June 27 and 28. This isn't an organized summary of the content of the conference; rather, it's a selection of points which are relevant to the control problem. As a result, it suffers from selection bias: it looks like superintelligence and control-problem-relevant issues were discussed frequently, when in reality those issues were discussed less and I didn't write much about the more mundane parts.
SafArtInt was the third in a planned series of four conferences. The purpose of the conference series was twofold: the OSTP wanted to get other parts of the government moving on AI issues, and it also wanted to inform public opinion.
The other three conferences are about near-term legal, social, and economic issues of AI. SafArtInt was about near-term safety and reliability in AI systems. It was effectively the brainchild of Dr. Ed Felten, the deputy U.S. chief technology officer for the White House, who came up with the idea for it last year. CMU is a top computer science university, and many of its own researchers attended, as well as some students. There were also researchers from other universities, some people from private-sector AI including both Silicon Valley and government contracting, government researchers and policymakers from groups such as DARPA and NASA, a few people from the military/DoD, and a few control problem researchers. As far as I could tell, everyone except a few university researchers was from the U.S., although I did not meet many people. There were about 70-100 people watching the presentations at any given time, and I had conversations with about twelve of the people who were not affiliated with existential risk organizations, as well as, of course, all of those who were. The conference was split |
2a4ca702-0cd2-45bf-81a9-f3d782c74c61 | trentmkelly/LessWrong-43k | LessWrong | Dating Minefield vs. Dating Playground
Crossposted from Optimized Dating.
Imagine you want to improve your performance at some task, hobby or a job. You get offered a choice of two courses:
Course A is really vague and undefined, with no clear program. You don't get graded on your performance, but it's broadcasted to your community so everyone can silently judge you every time you fail. Your early choices are locked in, and you can't radically change your approach without raising many eyebrows. And it's long.
Course B, in contrast, offers clear performance indicators, tight feedback loops and sensible intermediate milestones. You wouldn't become a master on day 1, but you get told what you are doing wrong and what you need to improve. You could play with different approaches to the problem and get no lasting judgement if something goes wrong. "Move fast and break things" works here.
There's no catch here: course B is obviously better. And that's why online dating is superior to trying to date IRL.
Let me explain. A lot of my online and offline friends who don't have much romantic experience avoid online dating like the plague. I remember the time when my particle physicist friend visited me in France and lamented his lack of a romantic life. Yet when I suggested he install an online dating app, he became incredibly anxious and refused to have anything to do with it. Even after I asked him for his phone and installed Tinder on it, he was so on edge that he threw his phone across the table the first time he received a notification about a match.
When I ask these friends how they imagine IRL dating, I don't get a very well-defined response. There's some vague notion of "meeting someone at school/work/hobby" and "developing a relationship". This may sound wonderful, but to me this approach seems fraught with difficulties, especially for novices on the romantic battlegrounds.
The overarching motif here is that attempting to date people you already know IRL is a minefie |
5c690534-c956-4493-a08f-1ff7f1decd6d | trentmkelly/LessWrong-43k | LessWrong | Meetup : Moscow Meetup: CBT is back
Discussion article for the meetup : Moscow Meetup: CBT is back
WHEN: 07 December 2014 02:00:00PM (+0300)
WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16
This meetup will be in a semi-closed format! If you haven't come to our meetups before, please wait for the next open meetup: it's planned for December 21.
Regular visitors of our meetups, please wait for the announcement in our group:
https://groups.google.com/forum/#!forum/rationality-in-moscow
You will feel more comfortable at our meetups if you have read the materials below and become familiar with some basic ideas:
* You understand Bayes' theorem (what bayesianism is - http://lesswrong.com/lw/1to/what_is_bayesianism or http://schegl2g.bget.ru/bayes/YudkowskyBayes.html ).
* You understand what "System 1" and "System 2" are (Kahneman and Yudkowsky).
* You know what a "rational agent" is.
* You've read the core sequences: "Map and Territory" - http://wiki.lesswrong.com/wiki/Map_and_Territory_(sequence) and "Mysterious Answers to Mysterious Questions" - http://wiki.lesswrong.com/wiki/Mysterious_Answers_to_Mysterious_Questions .
This will help you better understand what we talk about and how we think, and will give you a feel for the spirit of our activities.
Discussion article for the meetup : Moscow Meetup: CBT is back |
fc34d773-4814-4133-82bd-22d94c9bc5dc | StampyAI/alignment-research-dataset/arxiv | Arxiv | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
1 Introduction
---------------

Figure 1: An example of visual storytelling and visual captioning. Both captions and stories are shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.
Recently, increasing attention has been focused on visual captioning Chen et al. ([2015](#bib.bib8)); Xu et al. ([2016](#bib.bib35)); Wang et al. ([2018c](#bib.bib33)), which aims at describing the content of an image or a video.
Though it has achieved impressive results, its capability for human-like understanding is still limited.
To further investigate machine’s capabilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling Huang et al. ([2016](#bib.bib17)) has been proposed.
Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple. In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.
[Figure 1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling") shows an example of visual captioning and visual storytelling. We observe that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car). Storytelling therefore requires the capability to associate with concepts that do not explicitly appear in the images. Moreover, stories are more subjective, so standard templates for storytelling barely exist. As shown in [Figure 1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"), the same photo stream can be paired with diverse stories, different from each other. This makes evaluation considerably more difficult.
So far, prior work for visual storytelling Huang et al. ([2016](#bib.bib17)); Yu et al. ([2017b](#bib.bib37)) is mainly inspired by the success of visual captioning.
Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generating simple and plain descriptions with limited expressive patterns. In order to cope with these challenges and produce more human-like descriptions, Rennie et al. ([2016](#bib.bib29)) proposed a reinforcement learning framework. However, in the scenario of visual storytelling, common reinforced captioning methods face great challenges, since hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search. For instance, we used the METEOR Banerjee and Lavie ([2005](#bib.bib4)) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed. Here we showcase an adversarial example with an average METEOR score as high as 40.2:
* We had a great time to have a lot of the. They were to be a of the. They were to be in the. The and it were to be the. The, and it were to be the.
Apparently, the machine is gaming the metrics. Conversely, when using some other metrics (e.g. BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).
In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling. We draw our inspiration from recent progress in inverse reinforcement learning Ho and Ermon ([2016](#bib.bib16)); Finn et al. ([2016](#bib.bib11)); Fu et al. ([2017](#bib.bib12)) and propose the AREL algorithm to learn a more intelligent reward function. Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models – a policy model and a reward model. The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations. The learned reward function would be employed to optimize the policy in return.
For evaluation, we conduct both automatic-metric and human evaluation but observe a poor correlation between them. In particular, our method gains a slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates a significant performance boost. We therefore further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.
Our main contributions are four-fold:
* We propose an adversarial reward learning framework and apply it to boost visual story generation.
* We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.
* We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.
* We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.
2 Related Work
---------------
#### Visual Storytelling
Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream. Park and Kim ([2015](#bib.bib24)) did some pioneering research on storytelling. Chen et al. ([2017](#bib.bib9)) proposed a multimodal approach for storyline generation that produces a stream of entities instead of human-like descriptions. Recently, a more sophisticated dataset for visual storytelling (VIST) was released to explore a more human-like understanding of grounded stories Huang et al. ([2016](#bib.bib17)). Yu et al. ([2017b](#bib.bib37)) proposed a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset. But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.
#### Reinforcement Learning in Sequence Generation
Recently, reinforcement learning (RL) has gained popularity in many sequence generation tasks such as machine translation Bahdanau et al. ([2016](#bib.bib3)), visual captioning Ren et al. ([2017](#bib.bib28)); Wang et al. ([2018b](#bib.bib32)), summarization Paulus et al. ([2017](#bib.bib25)); Chen et al. ([2018](#bib.bib7)), etc.
The common wisdom in using RL is to view generating a word as an action and to aim at maximizing the expected return by optimizing the policy. As pointed out in Ranzato et al. ([2015](#bib.bib26)), the traditional maximum likelihood algorithm is prone to exposure bias and label bias, while an RL agent exposes the generative model to its own distribution and can thus perform better. But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to capture more implicit semantics due to the limitations of automatic metrics.
#### Rethinking Automatic Metrics
Automatic metrics, including BLEU Papineni et al. ([2002](#bib.bib23)), CIDEr Vedantam et al. ([2015](#bib.bib30)), METEOR Banerjee and Lavie ([2005](#bib.bib4)), and ROUGE Lin ([2004](#bib.bib20)), have been widely applied to sequence generation tasks. Using automatic metrics enables rapid prototyping and testing of new models with less reliance on expensive human evaluation. However, they have been criticized for being biased and correlating poorly with human judgments,
especially in many generative tasks like response generation Lowe et al. ([2017](#bib.bib22)); Liu et al. ([2016](#bib.bib21)), dialogue system Bruni and Fernández ([2017](#bib.bib5)) and machine translation Callison-Burch et al. ([2006](#bib.bib6)).
The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.
#### Generative Adversarial Network
Generative adversarial network (GAN) Goodfellow et al. ([2014](#bib.bib13)) is a very popular approach for estimating intractable probabilities, which sidesteps the difficulty by alternately training two models to play a min-max two-player game:
$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]\,,$$
where G is the generator and D is the discriminator, and z is the latent variable. Recently, GAN has quickly been adopted to tackle discrete problems Yu et al. ([2017a](#bib.bib36)); Dai et al. ([2017](#bib.bib10)); Wang et al. ([2018a](#bib.bib31)). The basic idea is to use Monte Carlo policy gradient estimation Williams ([1992](#bib.bib34)) to update the parameters of the generator.
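As a concrete illustration, here is a minimal sketch (in PyTorch; the `generator.sample` and `discriminator` interfaces are hypothetical, not from any cited work) of how a Monte Carlo policy gradient lets a generator learn through a discrete sampling step that would otherwise block backpropagation:

```python
import torch

def reinforce_generator_step(generator, discriminator, optimizer, batch_size=32):
    # Sample discrete sequences and keep their per-step log-probabilities
    # (assumed interface: returns token ids and log-probs of shape (B, T)).
    tokens, log_probs = generator.sample(batch_size)
    with torch.no_grad():
        rewards = discriminator(tokens)      # discriminator score used as reward
    baseline = rewards.mean()                # simple baseline for variance reduction
    # REINFORCE: weight each sequence's log-likelihood by its advantage
    loss = -((rewards - baseline) * log_probs.sum(dim=1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```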
#### Inverse Reinforcement Learning
Reinforcement learning is known to be hindered by the need for extensive feature and reward engineering, especially under unknown dynamics. Therefore, inverse reinforcement learning (IRL) has been proposed to infer an expert's reward function. Previous IRL approaches include maximum margin approaches Abbeel and Ng ([2004](#bib.bib1)); Ratliff et al. ([2006](#bib.bib27)) and probabilistic approaches Ziebart ([2010](#bib.bib38)); Ziebart et al. ([2008](#bib.bib39)). Recently, adversarial inverse reinforcement learning methods have offered an efficient and scalable route to automatic reward acquisition Ho and Ermon ([2016](#bib.bib16)); Finn et al. ([2016](#bib.bib11)); Fu et al. ([2017](#bib.bib12)); Henderson et al. ([2017](#bib.bib15)). These approaches utilize the connection between IRL and energy-based models and associate each data point with a scalar energy value by using a Boltzmann distribution, $p_\theta(x) \propto \exp(-E_\theta(x))$. Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.

Figure 2: AREL framework for visual storytelling.
3 Our Approach
---------------
###
3.1 Problem Statement
Here we consider the task of visual storytelling, whose objective is to output a word sequence $W = (w_1, w_2, \cdots, w_T)$, $w_t \in V$, given an input image stream of 5 ordered images $I = (I_1, I_2, \cdots, I_5)$, where $V$ is the vocabulary of all output tokens. We formulate the generation as a Markov decision process and design a reinforcement learning framework to tackle it. As described in [Figure 2](#S2.F2 "Figure 2 ‣ Inverse Reinforcement Learning ‣ 2 Related Work ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"), our AREL framework is mainly composed of two modules: a policy model $\pi_\beta(W)$ and a reward model $R_\theta(W)$. The policy model takes an image sequence $I$ as input and performs sequential actions (choosing words $w$ from the vocabulary $V$) to form a narrative story $W$. The reward model is optimized by the adversarial objective (see Section [3.3](#S3.SS3 "3.3 Learning ‣ 3 Our Approach ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling")) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.

Figure 3: Overview of the policy model. The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images. Its outputs are then fed into the RNN decoders to generate sentences in parallel. Finally, we concatenate all the generated sentences as a full story. Note that the five decoders share the same weights.
###
3.2 Model
#### Policy Model
As is shown in [Figure 3](#S3.F3 "Figure 3 ‣ 3.1 Problem Statement ‣ 3 Our Approach ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"), the policy model is a CNN-RNN architecture. We first feed the photo stream $I = (I_1, \cdots, I_5)$ into a pretrained CNN and extract the high-level image features. We then employ a visual encoder to further encode the image features as context vectors $h_i = [\overleftarrow{h_i}; \overrightarrow{h_i}]$. The visual encoder is a bidirectional gated recurrent unit (GRU).
In the decoding stage, we feed each context vector $h_i$ into a GRU-RNN decoder to generate a sub-story $W_i$.
Formally, the generation process can be written as:
$$s_t^i = \text{GRU}(s_{t-1}^i, [w_{t-1}^i, h_i])\,, \qquad (1)$$
$$\pi_\beta(w_t^i \mid w_{1:t-1}^i) = \text{softmax}(W_s s_t^i + b_s)\,, \qquad (2)$$
where $s_t^i$ denotes the $t$-th hidden state of the $i$-th decoder. We concatenate the previous token $w_{t-1}^i$ and the context vector $h_i$ as the input. $W_s$ and $b_s$ are the projection matrix and bias, which output a probability distribution over the whole vocabulary $V$. Eventually, the final story $W$ is the concatenation of the sub-stories $W_i$. $\beta$ denotes all the parameters of the encoder, the decoder, and the output layer.
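A minimal PyTorch sketch of this policy architecture follows. The feature and hidden dimensions, the teacher-forcing decoding, and the exact weight sharing are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class PolicyModel(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, hid_dim=512, emb_dim=512):
        super().__init__()
        # Bidirectional GRU visual encoder over the 5 image features
        self.encoder = nn.GRU(feat_dim, hid_dim, bidirectional=True, batch_first=True)
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One GRU decoder, shared across the five sub-stories (Eq. 1)
        self.decoder = nn.GRUCell(emb_dim + 2 * hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)      # W_s and b_s in Eq. 2

    def forward(self, feats, tokens):
        # feats: (B, 5, feat_dim); tokens: (B, 5, T) ground-truth word ids
        # (teacher forcing; tokens[:, i, 0] is assumed to be a <bos> token)
        h, _ = self.encoder(feats)                     # (B, 5, 2 * hid_dim)
        logits = []
        for i in range(feats.size(1)):                 # decode each sub-story
            s = h.new_zeros(feats.size(0), self.decoder.hidden_size)
            step_logits = []
            for t in range(tokens.size(2)):
                inp = torch.cat([self.embed(tokens[:, i, t]), h[:, i]], dim=-1)
                s = self.decoder(inp, s)               # Eq. 1
                step_logits.append(self.out(s))        # Eq. 2 (pre-softmax)
            logits.append(torch.stack(step_logits, dim=1))
        return torch.stack(logits, dim=1)              # (B, 5, T, vocab_size)
```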

Figure 4: Overview of the reward model. Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings. Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.
#### Reward Model
The reward model Rθ(W) is a CNN-based architecture (see [Figure 4](#S3.F4 "Figure 4 ‣ Policy Model ‣ 3.2 Model ‣ 3 Our Approach ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling")).
Instead of giving an overall score for the whole story, we apply the reward model to different story parts (sub-stories) $W_i$, where $i = 1, \cdots, 5$, and compute partial rewards. We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.
We first query the word embeddings of the sub-story (one sentence in most cases). Next, multiple convolutional layers with different kernel sizes are used to extract n-gram features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim ([2014](#bib.bib18))). In addition to the textual features, evaluating the quality of a story should also take the image features into account for relevance. Therefore, we combine the sentence representation with the visual feature of the input image and feed them into the final fully connected decision layer. In the end, the reward model outputs an estimated reward value $R_\theta(W)$. The process can be written as:
$$R_\theta(W) = W_r\left(f_{\text{conv}}(W) + W_i I_{\text{CNN}}\right) + b_r\,, \qquad (3)$$
where $W_r$ and $b_r$ denote the weights in the output layer, and $f_{\text{conv}}$ denotes the operations in the CNN. $I_{\text{CNN}}$ is the high-level visual feature extracted from the image, and $W_i$ projects it into the sentence representation space. $\theta$ includes all the parameters above.
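A sketch of such a reward model in PyTorch. The filter counts and dimensions are assumptions; following Eq. 3, the projected visual feature is added to the sentence representation (the prose above describes the combination as concatenation, which would work equally well):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, n_filters=128, feat_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Kernel sizes 2, 3, 4 extract bigram, trigram, and 4-gram features
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in (2, 3, 4)])
        # W_i in Eq. 3: projects the visual feature into the sentence space
        self.vis_proj = nn.Linear(feat_dim, 3 * n_filters)
        # W_r and b_r in Eq. 3: final decision layer producing a scalar reward
        self.out = nn.Linear(3 * n_filters, 1)

    def forward(self, tokens, img_feat):
        # tokens: (B, T) word ids of one sub-story; img_feat: (B, feat_dim)
        x = self.embed(tokens).transpose(1, 2)               # (B, emb_dim, T)
        # Convolution + max-pooling over time for each kernel size
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        sent = torch.cat(pooled, dim=1)                      # sentence representation
        # Combine textual and visual representations and score them (Eq. 3)
        return self.out(sent + self.vis_proj(img_feat)).squeeze(-1)
```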
###
3.3 Learning
#### Reward Boltzmann Distribution
In order to associate the story distribution with the reward function, we apply an energy-based model (EBM) to define a Reward Boltzmann distribution:
$$p_\theta(W) = \frac{\exp(R_\theta(W))}{Z_\theta}\,, \qquad (4)$$
where $W$ is the word sequence of the story, $p_\theta(W)$ is the approximate data distribution, and $Z_\theta = \sum_W \exp(R_\theta(W))$ denotes the partition function. According to the energy-based model LeCun et al. ([2006](#bib.bib19)), the optimal reward function $R^*(W)$ is achieved when the Reward Boltzmann distribution equals the “real” data distribution, i.e., $p_\theta(W) = p^*(W)$.
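On a toy vocabulary, $Z_\theta$ can be enumerated exactly, which also makes clear why it is intractable for real stories: the sum runs over every possible word sequence. The stand-in reward below is purely hypothetical:

```python
import itertools
import math

vocab = ["a", "b", "c"]              # toy vocabulary; the real |V| is 9,837

def R(story):                        # hypothetical stand-in reward function
    return len(set(story)) - 0.5 * len(story)

# Enumerate all sequences up to length 3; real stories make this impossible
stories = [s for n in range(1, 4) for s in itertools.product(vocab, repeat=n)]
Z = sum(math.exp(R(s)) for s in stories)          # partition function Z_theta
p = {s: math.exp(R(s)) / Z for s in stories}      # Reward Boltzmann distribution
print(sum(p.values()))  # ~1.0, but |stories| grows as |V|**T on real data
```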
#### Adversarial Reward Learning
We first introduce the empirical distribution $p_e(W) = \frac{\mathbb{1}(W \in D)}{|D|}$ of the training data, where $D$ denotes the dataset with $|D|$ stories and $\mathbb{1}$ denotes the indicator function. We use this empirical distribution as the source of “good” examples, which provide the evidence for the reward function to learn from.
In order to approximate the Reward Boltzmann distribution towards the “real” data distribution $p^*(W)$, we design a min-max two-player game, where the Reward Boltzmann distribution $p_\theta$ aims at maximizing its similarity with the empirical distribution $p_e$ while minimizing its similarity with the “faked” data generated from the policy model $\pi_\beta$. Conversely, the policy distribution $\pi_\beta$ tries to maximize its similarity with the Boltzmann distribution $p_\theta$. Formally, the adversarial objective function is defined as
$$\max_\beta \min_\theta \; \mathrm{KL}(p_e(W) \,\|\, p_\theta(W)) - \mathrm{KL}(\pi_\beta(W) \,\|\, p_\theta(W))\,. \qquad (5)$$
We further decompose it into two parts. First, because the objective $J_\beta$ of the story-generation policy is to maximize its similarity with the Boltzmann distribution $p_\theta$, the optimal policy that minimizes the KL-divergence is $\pi(W) \propto \exp(R_\theta(W))$, meaning that if $R_\theta$ is optimal, the optimal $\pi_\beta = \pi^*$. In formula,
$$J_\beta = -\mathrm{KL}(\pi_\beta(W) \,\|\, p_\theta(W)) = \mathbb{E}_{W \sim \pi_\beta(W)}[R_\theta(W)] + H(\pi_\beta(W))\,, \qquad (6)$$
where $H$ denotes the entropy of the policy model. On the other hand, the objective $J_\theta$ of the reward function is to distinguish between human-annotated stories and machine-generated stories. Hence it is trying to minimize the KL-divergence with the empirical distribution $p_e$ and maximize the KL-divergence with the approximated policy distribution $\pi_\beta$:
$$J_\theta = \mathrm{KL}(p_e(W) \,\|\, p_\theta(W)) - \mathrm{KL}(\pi_\beta(W) \,\|\, p_\theta(W)) = \sum_W \left[p_e(W) R_\theta(W) - \pi_\beta(W) R_\theta(W)\right] - H(p_e) + H(\pi_\beta)\,, \qquad (7)$$
Since $H(\pi_\beta)$ and $H(p_e)$ are irrelevant to $\theta$, we denote them as a constant $C$. Therefore, the objective $J_\theta$ can be further derived as
$$J_\theta = \mathbb{E}_{W \sim p_e(W)}[R_\theta(W)] - \mathbb{E}_{W \sim \pi_\beta(W)}[R_\theta(W)] + C\,. \qquad (8)$$
Here we propose to use stochastic gradient descent to optimize these two models alternately. Formally, the gradients can be written as
$$\frac{\partial J_\theta}{\partial \theta} = \mathbb{E}_{W \sim p_e(W)}\left[\frac{\partial R_\theta(W)}{\partial \theta}\right] - \mathbb{E}_{W \sim \pi_\beta(W)}\left[\frac{\partial R_\theta(W)}{\partial \theta}\right],$$
$$\frac{\partial J_\beta}{\partial \beta} = \mathbb{E}_{W \sim \pi_\beta(W)}\left[\left(R_\theta(W) - \log \pi_\beta(W) - b\right)\frac{\partial \log \pi_\beta(W)}{\partial \beta}\right], \qquad (9)$$
where b is the estimated baseline to reduce the variance.
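In code, the two updates of Equation 9 might look like the following sketch. The `policy.sample` and `reward_model` interfaces are assumptions, and the reward update is written in the direction that makes $R_\theta$ score human stories above sampled ones, as Section 3.3 describes:

```python
import torch

def policy_step(policy, reward_model, opt_beta, feats, baseline=0.0):
    # Sample a story; keep per-step log-probs (assumed interface).
    tokens, log_probs = policy.sample(feats)
    seq_logp = log_probs.sum(dim=1)
    with torch.no_grad():
        # (R_theta(W) - log pi_beta(W) - b): reward plus entropy term of Eq. 9
        weight = reward_model(tokens, feats) - seq_logp - baseline
    loss = -(weight * seq_logp).mean()        # policy-gradient surrogate loss
    opt_beta.zero_grad(); loss.backward(); opt_beta.step()

def reward_step(reward_model, opt_theta, human_tokens, sampled_tokens, feats):
    # Raise the reward on human-annotated stories, lower it on policy samples.
    loss = (reward_model(sampled_tokens, feats).mean()
            - reward_model(human_tokens, feats).mean())
    opt_theta.zero_grad(); loss.backward(); opt_theta.step()
```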
1: for episode ← 1 to N do
2:     collect story W by executing policy πβ
3:     if Train-Reward then
4:         θ ← θ − η ∂Jθ/∂θ (see [Equation 9](#S3.E9 "(9) ‣ Adversarial Reward Learning ‣ 3.3 Learning ‣ 3 Our Approach ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"))
5:     else if Train-Policy then
6:         collect story W̃ from the empirical distribution pe
7:         β ← β − η ∂Jβ/∂β (see [Equation 9](#S3.E9 "(9) ‣ Adversarial Reward Learning ‣ 3.3 Learning ‣ 3 Our Approach ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"))
8:     end if
9: end for
Algorithm 1: The AREL Algorithm.
#### Training & Testing
As described in [Algorithm 1](#alg1 "Algorithm 1 ‣ Adversarial Reward Learning ‣ 3.3 Learning ‣ 3 Our Approach ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"), we introduce an alternating algorithm to train these two models using stochastic gradient descent. During testing, the policy model is used with beam search to produce the story.
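Algorithm 1 then reduces to a loop like the following sketch, reusing the two step functions above; the data interface and the phase scheduling by alternate frequency are assumptions:

```python
def train_arel(policy, reward_model, data, opt_theta, opt_beta,
               episodes=100000, n_alt=50):
    # n_alt is the alternate frequency (50 or 100 in Table 1): how many
    # episodes to spend in each phase before switching to the other model.
    for episode in range(episodes):
        feats, human_tokens = data.next_batch()      # assumed data interface
        if (episode // n_alt) % 2 == 0:              # Train-Reward phase
            sampled_tokens, _ = policy.sample(feats)
            reward_step(reward_model, opt_theta, human_tokens,
                        sampled_tokens, feats)
        else:                                        # Train-Policy phase
            policy_step(policy, reward_model, opt_beta, feats)
```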
4 Experiments and Analysis
---------------------------
###
4.1 Experimental Setup
#### VIST Dataset
The VIST dataset Huang et al. ([2016](#bib.bib17)) is the first dataset for sequential vision-to-language tasks, including visual storytelling; it consists of 10,117 Flickr albums with 210,819 unique photos. In this paper, we mainly evaluate our AREL method on this dataset. After filtering the broken images (only 3 out of 21,075 images in the test set are broken, which basically has no influence on the final results; moreover, Yu et al. ([2017b](#bib.bib37)) also removed these 3 pictures, so it is a fair comparison), there are 40,098 training, 4,988 validation, and 5,050 testing samples.
Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image). The same album is paired with 5 different stories as references. In our experiments, we used the same split settings as in Huang et al. ([2016](#bib.bib17)); Yu et al. ([2017b](#bib.bib37)) for a fair comparison.
#### Evaluation Metrics
In order to comprehensively evaluate our method on the storytelling dataset, we adopted both automatic metrics and human evaluation as our criteria. Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr. We utilized the open-source evaluation code (<https://github.com/lichengunc/vist_eval>) used in Yu et al. ([2017b](#bib.bib37)). For human evaluation, we employed Amazon Mechanical Turk to perform two kinds of user studies (see Section [4.3](#S4.SS3 "4.3 Human Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling") for more details).
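If the released `vist_eval` code is unavailable, the same four metrics can be computed with the `pycocoevalcap` package, from which caption evaluation toolkits of this kind are commonly derived. This is a sketch, and the interfaces are worth verifying against the installed version (METEOR additionally requires a Java runtime):

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

# gts: album id -> list of reference stories; res: album id -> [prediction]
gts = {"album_1": ["the family went to the park . they had a picnic ."]}
res = {"album_1": ["we went to the park and had a great time ."]}

for name, scorer in [("BLEU", Bleu(4)), ("METEOR", Meteor()),
                     ("ROUGE-L", Rouge()), ("CIDEr", Cider())]:
    score, _ = scorer.compute_score(gts, res)   # Bleu returns a list of B-1..B-4
    print(name, score)
```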
#### Training Details
We employ a pretrained ResNet-152 model He et al. ([2016](#bib.bib14)) to extract image features from the photo stream. We built a vocabulary of size 9,837 that includes words appearing more than three times in the training set. More training details can be found in [Appendix B](#A2 "Appendix B Training Details ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling").
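The vocabulary construction amounts to a frequency cutoff; a sketch (the tokenization and special tokens are assumptions):

```python
from collections import Counter

def build_vocab(train_stories, min_count=4):
    # Keep words appearing more than three times, i.e. count >= 4
    counts = Counter(w for story in train_stories for w in story.split())
    vocab = ["<pad>", "<bos>", "<eos>", "<unk>"]   # assumed special tokens
    vocab += sorted(w for w, c in counts.items() if c >= min_count)
    return {w: i for i, w in enumerate(vocab)}
```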
###
4.2 Automatic Evaluation
In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics. Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.
#### Comparison with SOTA on Automatic Metrics
In [Table 1](#S4.T1 "Table 1 ‣ Comparison with SOTA on Automatic Metrics ‣ 4.2 Automatic Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"), we compare our method with Huang et al. ([2016](#bib.bib17)) and Yu et al. ([2017b](#bib.bib37)), which reported the best-known results on the VIST dataset. We first implement a strong baseline model (XE-ss), which shares the same architecture as our policy model but is trained with cross-entropy loss and scheduled sampling. Besides, we adopt traditional generative adversarial training for comparison (GAN). As shown in [Table 1](#S4.T1 "Table 1 ‣ Comparison with SOTA on Automatic Metrics ‣ 4.2 Automatic Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"), our XE-ss model already outperforms the best-known results on the VIST dataset, and the GAN model brings a further performance boost. We then use the XE-ss model to initialize our policy model and further train it with AREL. Evidently, our AREL model performs the best and achieves new state-of-the-art results across all metrics.
| Method | B-1 | B-2 | B-3 | B-4 | M | R | C |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Huang et al. | - | - | - | - | 31.4 | - | - |
| Yu et al. | - | - | 21.0 | - | 34.1 | 29.5 | 7.5 |
| XE-ss | 62.3 | 38.2 | 22.5 | 13.7 | 34.8 | 29.7 | 8.7 |
| GAN | 62.8 | 38.8 | 23.0 | 14.0 | 35.0 | 29.5 | 9.0 |
| AREL-s-50 | 63.8 | 38.9 | 22.9 | 13.8 | 34.9 | 29.4 | 9.5 |
| AREL-t-50 | 63.4 | 39.0 | 23.1 | 14.1 | 35.2 | 29.6 | 9.5 |
| AREL-s-100 | 63.9 | 39.1 | 23.0 | 13.9 | 35.0 | 29.7 | 9.6 |
| AREL-t-100 | 63.8 | 39.1 | 23.2 | 14.1 | 35.0 | 29.5 | 9.4 |
Table 1: Automatic evaluation on the VIST dataset. We report BLEU (B), METEOR (M), ROUGE-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN, and AREL. AREL-s-N denotes AREL models with sigmoid as the output activation and alternate frequency N, while AREL-t-N denotes AREL models with tanh as the output activation (N = 50 or 100).
Compared with the XE-ss model, however, the performance gain is minor, especially on the METEOR and ROUGE-L scores. Yet in Sec. [4.3](#S4.SS3 "4.3 Human Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"), extensive human evaluation indicates that our AREL framework brings a significant improvement in generating human-like stories over the XE-ss model. This inconsistency between automatic evaluation and human evaluation leads us to suspect that these hand-crafted metrics lack the ability to fully evaluate stories’ quality, owing to the complicated characteristics of stories. Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in [subsection 4.2](#S4.SS2.SSS0.Px2 "Limitations of Automatic Metrics ‣ 4.2 Automatic Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling").
| Method | B-1 | B-2 | B-3 | B-4 | M | R | C |
| --- | --- | --- | --- | --- | --- | --- | --- |
| XE-ss | 62.3 | 38.2 | 22.5 | 13.7 | 34.8 | 29.7 | 8.7 |
| BLEU-RL | 62.1 | 38.0 | 22.6 | 13.9 | 34.6 | 29.0 | 8.9 |
| METEOR-RL | 68.1 | 35.0 | 15.4 | 6.8 | 40.2 | 30.0 | 1.2 |
| ROUGE-RL | 58.1 | 18.5 | 1.6 | 0 | 27.0 | 33.8 | 0 |
| CIDEr-RL | 61.9 | 37.8 | 22.5 | 13.8 | 34.9 | 29.7 | 8.1 |
| AREL (avg) | 63.7 | 39.0 | 23.1 | 14.0 | 35.0 | 29.6 | 9.5 |
Table 2: Comparison of RL models trained with different metric scores as rewards. We report the average scores of the AREL models as AREL (avg). Although the METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged. In effect, they are gaming their own metrics with nonsense sentences.
#### Limitations of Automatic Metrics
As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, such as their expressiveness and coherence. In order to confirm this conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model, adopting policy gradient with a baseline to train the policy model. The quantitative results are shown in Table 2.
Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but severely damage the other metrics. We observe that these models are actually overfitting to a given metric while losing overall coherence and semantic correctness. As with the METEOR score, there is also an adversarial example for ROUGE-L ("we the was a . and to the ." repeated five times), which is nonsense but achieves an average ROUGE-L score of 33.8.

Figure 5: Metric score distributions. We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples. For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section [4.3](#S4.SS3 "4.3 Human Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling")). Basically, a score of 0.2 is given if the generated story wins the Turing test, 0.1 for a tie, and 0 if it loses. Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it lies in the range [0, 1].
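For clarity, the per-sample human evaluation score is just a sum over the five judges; a toy computation:

```python
# Five judges per sample; win = 0.2, tie = 0.1, loss = 0.0
judge_outcomes = ["win", "tie", "loss", "win", "tie"]
points = {"win": 0.2, "tie": 0.1, "loss": 0.0}
human_score = sum(points[o] for o in judge_outcomes)  # 0.6, always in [0, 1]
print(human_score)
```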
Besides, as can be seen in Table 2, after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model. We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in [Figure 5](#S4.F5 "Figure 5 ‣ Limitations of Automatic Metrics ‣ 4.2 Automatic Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling").
An interesting fact is that there are a large number of samples with nearly zero score on both metrics. However, we observed those “zero-score” samples are not pointless results; instead, lots of them make sense and deserve a better score than zero. Here is a “zero-score” example on BLEU-3:
* I had a great time at the restaurant today. The food was delicious. I had a lot of food. The food was delicious. I had a great time.
The corresponding reference is
* The table of food was a pleasure to see! Our food is both nutritious and beautiful! Our chicken was especially tasty! We love greens as they taste great and are healthy! The fruit was a colorful display that tantalized our palette..
Although the prediction is not as good as the reference, it is actually coherent and relevant to the theme “food and eating”, which showcases the defects of using BLEU and CIDEr scores as rewards for RL training.
| Method | Win | Lose | Unsure |
| --- | --- | --- | --- |
| XE-ss | 22.4% | 71.7% | 5.9% |
| BLEU-RL | 23.4% | 67.9% | 8.7% |
| CIDEr-RL | 13.8% | 80.3% | 5.9% |
| GAN | 34.3% | 60.5% | 5.2% |
| AREL | 38.4% | 54.2% | 7.4% |
Table 3: Turing test results.
Column groups, left to right: AREL vs XE-ss, AREL vs BLEU-RL, AREL vs CIDEr-RL, and AREL vs GAN.

| Choice (%) | AREL | XE-ss | Tie | AREL | BLEU-RL | Tie | AREL | CIDEr-RL | Tie | AREL | GAN | Tie |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Relevance | 61.7 | 25.1 | 13.2 | 55.8 | 27.9 | 16.3 | 56.1 | 28.2 | 15.7 | 52.9 | 35.8 | 11.3 |
| Expressiveness | 66.1 | 18.8 | 15.1 | 59.1 | 26.4 | 14.5 | 59.1 | 26.6 | 14.3 | 48.5 | 32.2 | 19.3 |
| Concreteness | 63.9 | 20.3 | 15.8 | 60.1 | 26.3 | 13.6 | 59.5 | 24.6 | 15.9 | 49.8 | 35.8 | 14.4 |
Table 4: Pairwise human comparisons. The results indicate the consistent superiority of our AREL model in generating more human-like stories than the SOTA methods.
Moreover, we compare the human evaluation scores with these two metric scores in [Figure 5](#S4.F5 "Figure 5 ‣ Limitations of Automatic Metrics ‣ 4.2 Automatic Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"). Noticeably, both BLEU-3 and CIDEr correlate poorly with the human evaluation scores. Their distributions are more biased and thus cannot fully reflect the quality of the generated stories. In terms of BLEU, it is extremely hard for machines to produce exact 3-gram or 4-gram matches, so the scores are too low to provide useful guidance.
CIDEr measures the similarity of a sentence to the majority of the references.
However, the references to the same photo stream are quite different from each other, so the score is very low and not suitable for this task. In contrast, our AREL framework can learn a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improve its performance across different metrics.
#### Comparison with GAN
We here compare our method with the traditional GAN Goodfellow et al. ([2014](#bib.bib13)). The update rule for the generator can generally be classified into two categories. We demonstrate their corresponding objectives and ours as follows:
$$\mathrm{GAN}_1:\; J_\beta = \mathbb{E}_{W \sim \pi_\beta}[-\log R_\theta(W)]\,,$$
$$\mathrm{GAN}_2:\; J_\beta = \mathbb{E}_{W \sim \pi_\beta}[\log(1 - R_\theta(W))]\,,$$
$$\text{ours}:\; J_\beta = \mathbb{E}_{W \sim \pi_\beta}[-R_\theta(W)]\,.$$
As discussed in Arjovsky et al. ([2017](#bib.bib2)), GAN1 is prone to the unstable gradient issue and GAN2 is prone to the vanishing gradient issue. Analytically, our method does not suffer from these two common issues and thus is able to converge to optimal solutions more easily. From [Table 1](#S4.T1 "Table 1 ‣ Comparison with SOTA on Automatic Metrics ‣ 4.2 Automatic Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"), we can observe slight gains of AREL over GAN on automatic metrics, so we further deploy human evaluation for a better comparison.
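Written as scalar losses over the reward scores of sampled stories, the three objectives differ only in how the reward enters the loss. This is a sketch: in sequence training these terms enter through the policy-gradient weight, and `r` is assumed to lie in (0, 1) for the two GAN variants:

```python
import torch

def generator_loss(r, variant):
    # r: reward/discriminator scores for a batch of sampled stories
    if variant == "GAN1":
        return (-torch.log(r)).mean()       # prone to unstable gradients
    if variant == "GAN2":
        return torch.log(1.0 - r).mean()    # prone to vanishing gradients
    return (-r).mean()                      # AREL: linear in the reward
```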
###
4.3 Human Evaluation
Automatic metrics cannot fully evaluate the capability of our AREL method. Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation. For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance. We batch six items as one assignment and insert an additional assignment as a sanity check. Besides, the order of the options within each item is shuffled to make a fair comparison.
#### Turing Test
We first conduct five independent Turing tests for the XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample and needs to decide which is human-annotated. As shown in [Table 3](#S4.T3 "Table 3 ‣ Limitations of Automatic Metrics ‣ 4.2 Automatic Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"), our AREL model significantly outperforms all the other baseline models in the Turing test: it has a much better chance of fooling the AMT workers (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories. Unlike the automatic metric evaluation, the Turing test indicates a much larger margin between AREL and the competing algorithms. Thus, we empirically confirm that metrics are not perfect at evaluating many implicit semantic properties of natural language.
Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.

Figure 6: Qualitative comparison example with XE-ss. The direct comparison votes (AREL:XE-ss:Tie) were 5:0:0 on Relevance, 4:0:1 on Expressiveness, and 5:0:0 on Concreteness.
#### Pairwise Comparison
In order to have a clear comparison with the competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN. For each photo stream, the worker is presented with two generated stories and asked to make decisions on three aspects: relevance (the story accurately describes what is happening in the image sequence and covers the main objects), expressiveness (coherence; grammatical and semantic correctness; no repetition; expressive language style), and concreteness (the story should narrate concretely what is in the image rather than giving very general descriptions). This head-to-head comparison is designed to help us understand in what respects our model outperforms the competing algorithms; the results are displayed in [Table 4](#S4.T4 "Table 4 ‣ Limitations of Automatic Metrics ‣ 4.2 Automatic Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling").
Consistently across all three comparisons, a large majority of the AREL stories beat the competing systems with respect to relevance, expressiveness, and concreteness. This empirically confirms that our generated stories are more relevant to the image sequences, and more coherent and concrete, than those of the other algorithms, which is not explicitly reflected by the automatic metric evaluation.
###
4.4 Qualitative Analysis
[Figure 6](#S4.F6 "Figure 6 ‣ Turing Test ‣ 4.3 Human Evaluation ‣ 4 Experiments and Analysis ‣ No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling") gives a qualitative comparison between the AREL and XE-ss models. Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct. Connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately. Thus, our AREL model significantly surpasses the XE-ss model on all three aspects of this qualitative example. Besides, it won the Turing test (3 out of 5 AMT workers thought the AREL story was created by a human). In the appendix, we also show a negative case that fails the Turing test.
5 Conclusion
-------------
In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of automatic metrics for story evaluation. We believe there is still a lot of room for improvement in narrative paragraph generation tasks, such as how to better simulate human imagination to create more vivid and diversified stories.
Acknowledgment
--------------
We thank Adobe Research for supporting our language and vision research. We would also like to thank Licheng Yu for clarifying the details of his paper and the anonymous reviewers for their thoughtful comments. This research was sponsored in part by the Army Research Laboratory under cooperative agreement W911NF09-2-0053. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein. |
1ce9f245-709b-4dde-94e3-75ab2607ea1d | trentmkelly/LessWrong-43k | LessWrong | Off-Topic Discussion Thread: April 2009
Dale McGowan writes:
> And it needs to go well beyond one greeter. EVERY MEMBER of EVERY GROUP should make it a point to chat up new folks—and each other, for that matter. And not just about the latest debunky book. Ask where he’s from, what she does for a living, whether he follows the Mets or the Yankees. You know, mammal talk.
In this spirit, I propose the creation of a fully off-topic discussion thread.
Here is our monthly place to discuss topics entirely unrelated to Less Wrong that (of course) have not appeared in recent posts.
ETA: There are two behaviors I would love to see associated with this thread. First of all, discussions often drift off-topic in the middle of a thread. In these cases "let's take this to the off-topic thread" would be an excellent response. Secondly, given who's doing the discussing, I could easily see, say, a discussion about recent developments in some webcomic blossoming into a LW-worthy insight, in which case someone could spawn a new thread.
|
8c8d3fd1-31c3-47d4-a8a1-63fec22b0e20 | trentmkelly/LessWrong-43k | LessWrong | Contrarian LW views and their economic implications
LW readers have unusual views on many subjects. Efficient Market Hypothesis notwithstanding, many of these are probably alien to most people in finance. So it's plausible they might have implications that are not yet fully integrated into current asset prices. And if you rightfully believe something that most people do not believe, you should be able to make money off that.
Here's an example for a different group. Feminists believe that women are paid less than men for no good economic reason. If this is the case, feminists should invest in companies that hire many women, and short those which hire few women, to take advantage of the cheaper labour costs. And I can think of examples for groups like Socialists, Neoreactionaries, etc. - cases where their positive beliefs have strong implications for economic predictions. But I struggle to think of such ones for LessWrong, which is why I am asking you. Can you think of any unusual LW-type beliefs that have strong economic implications (say over the next 1-3 years)?
Wei Dai has previously commented on a similar phenomena, but I'm interested in a wider class of phenomena. |
91ac9e40-5f28-4d59-a573-d83383ece3e9 | trentmkelly/LessWrong-43k | LessWrong | Anyone at Otakon?
Perhaps this is a bit late, as the convention is already underway and those who are here may not be checking Less Wrong, but it may be worth a shot. Would be cool to get a LW meetup going on here if anyone's around. |
33326b6c-a459-4cdf-ae35-c6a8a8f40e82 | trentmkelly/LessWrong-43k | LessWrong | The shallow reality of 'deep learning theory'
> Produced under the mentorship of Evan Hubinger as part of the SERI ML Alignment Theory Scholars Program - Winter 2022 Cohort
Most results under the umbrella of "deep learning theory" are not actually deep, about learning, or even theories.
This is because classical learning theory makes the wrong assumptions, takes the wrong limits, uses the wrong metrics, and aims for the wrong objectives. Learning theorists are stuck in a rut of one-upmanship, vying for vacuous bounds that don't say anything about any systems of actual interest.
Yudkowsky tweeting about statistical learning theorists.
(Okay, not really.)
In particular, I'll argue throughout this sequence that:
* Empirical risk minimization is the wrong framework, and risk is a weak foundation.
* In approximation theory, the universal approximation results are too general (they do not constrain efficiency) while the "depth separation" results meant to demonstrate the role of depth are too specific (they involve constructing contrived, unphysical target functions).
* Generalization theory has only two tricks, and they're both limited:
* Uniform convergence is the wrong approach, and model class complexities (VC dimension, Rademacher complexity, and covering numbers) are the wrong metric. Understanding deep learning requires looking at the microscopic structure within model classes.
* Robustness to noise is an imperfect proxy for generalization, and techniques that rely on it (margin theory, sharpness/flatness, compression, PAC-Bayes, etc.) are oversold.
* Optimization theory is a bit better, but training-time guarantees involve questionable assumptions, and the obsession with second-order optimization is delusional. Also, the NTK is bad. Get over it.
* At a higher level, the obsession with deriving bounds for approximation/generalization/learning behavior is misguided. These bounds serve mainly as political benchmarks rather than a source of theoretical insight. More attention should go towards exp |
ad78b945-cccb-4e4c-8256-aa7dcfc9fc1e | trentmkelly/LessWrong-43k | LessWrong | Stabilize-Reflect-Execute
You've recently joined a major organization in a senior management role. How can you organize your plans?
One simple way to think about them is with what can be called the "Stabilize-Reflect-Execute" cycle.
Stabilize
You first check if there are any urgent issues and address them immediately. Are there burning problems or opportunities that need to be dealt with? Second, you do anything you need to do to best prepare yourself for reflection. If there are people you need to talk to in order to get necessary information, you set that up upfront.
Reflect
Once urgent issues are dealt with and you are able to properly assess the situation, you work to do so. For executives this can mean a lengthy period of discussions with all of the relevant people and thoughts on strategy before making formal announcements. This could take a few weeks or months.
Execute
Now is the time to begin working on non-urgent important problems, which should be the main ones. You follow through with your reflection. Execution may involve deciding on pursuing future larger stabilize-reflect-execute loops.
Let’s summarize. “Stabilize” refers to handling urgent issues and preparing for reflection. This is similar to the notion of getting one’s “house in order.” “Reflect” refers to deciding how to best deal with the important non-urgent issues. “Execute” refers to working on the important issues. This is basically a subset of the Eisenhower Method for situations where these three steps make up the majority of the work.
Examples
I think this cycle plays out in many important situations, so may be worth some independent study. Some examples of these cycles include:
Necessary Conditions
The stabilize-reflect-execute cycle is good for some specific situations. I think it may require the following:
1. There are some tasks that are both urgent and important.
If this is not true, the "stabilize" step isn't necessary.
2. There are some tasks that are both non-urgent and important.
If this |
a6ad4516-9059-448c-8db7-911e3c7a6825 | trentmkelly/LessWrong-43k | LessWrong | Can cryoprotectant toxicity be crowd-sourced?
From the article The red blood cell as a model for cryoprotectant toxicity by Aschwin de Wolf
> One simple model that allows for “high throughput” investigations of cryoprotectant toxicity are red blood cells (erythrocytes). Although the toxic effects of various cryoprotective agents may differ between red blood cells, other cells, and organized tissues, positive results in a red blood cell model can be considered the first experimental hurdle that needs to be cleared before the agent is considered for testing in other models. Because red blood cells are widely available for research, this model eliminates the need for animal experiments for initial studies. It also allows researchers to investigate human cells. Other advantages include the reduced complexity of the model (packed red blood cells can be obtained as an off-the-shelf product) and lower costs.
It sounds to me like this is a very cheap assay for viability. You don't need much equipment. High toxicity compounds can be screened on visual appearance. More detailed analysis can be done by a light microscope or a spectrophotometer.
The biggest issue facing cryonics (and the holy grail of suspended animation with true biostasis) is the existence of cryoprotectant toxicity. Less toxic solutions can be perfused for a longer period of time, and thus penetrate the entire organism without triggering additional loss of viability. Vitrification already eliminates all ice formation -- we know enough to know that without toxicity, it should work for trivially reversible forms of long-term suspended animation.
Thus if we want to ask what can be done cheaply by a lot of people to help cryonics move forward, one possibility is that they could perform empirical tests on the compounds most likely to prove effective for cryoprotection.
We can speculate about the brain being reparable at all kinds of levels of damage -- but that is speculation. Sure we do have to make a decision to sign up or not based on that specula |
c14adc43-535b-4a04-8e64-baafac23ed45 | trentmkelly/LessWrong-43k | LessWrong | Logical Line-Of-Sight Makes Games Sequential or Loopy
In the last post, we talked about strategic time and the strategic time loops studied in open-source game theory. In that context, agents have logical line-of-sight to each other and the situation they're both facing, which creates a two-way information flow at the time each is making their decision. In this post I'll describe how agents in one context can use this logical line-of-sight to condition their behavior on how they behave in other contexts. This in turn makes those contexts strategically sequential or loopy, in a way that a purely causal decision theory doesn't pick up on.
Sequential Games and Leverage
As an intuition pump, consider the following ordinary game: Alice and Bob are going to play a Prisoners' Dilemma, and then an Ultimatum game. My favorite framing of the Prisoners' Dilemma is by Nicky Case: each player stands in front of a machine which accepts a certain amount of money, e.g. $100.[1] Both players choose simultaneously whether to put some of their own money into the machine. If Alice places $100 into the machine in front of her, $200 comes out of Bob's machine, and vice versa. If a player withholds their money, nothing comes out of the other player's machine. We call these strategies Cooperate and Defect respectively.
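For reference, the payoffs of this machine game are easy to tabulate (a quick Python sketch of the mechanics just described):

```python
# Each player pays $100 into their machine (Cooperate) or keeps it (Defect);
# paying in makes $200 come out of the *other* player's machine.
def payoffs(alice_cooperates, bob_cooperates, stake=100, payout=200):
    alice = (payout if bob_cooperates else 0) - (stake if alice_cooperates else 0)
    bob = (payout if alice_cooperates else 0) - (stake if bob_cooperates else 0)
    return alice, bob

for a in (True, False):
    for b in (True, False):
        print(a, b, payoffs(a, b))
# (C,C) = (100, 100); (D,C) = (200, -100); (D,D) = (0, 0):
# whatever the other player does, Defect pays more, hence CDT's answer below.
```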
Since neither player can cause money to come out of their own machine, Causal Decision Theory (CDT) identifies Defect as a dominant strategy for both players. Dissatisfaction with this answer has motivated many to dig into the foundations of decision theory, and coming up with different conditions that enable Cooperation in the Prisoners' Dilemma has become a cottage industry for the field. I myself keep calling it the Prisoners' Dilemma (rather than the Prisoner's Dilemma) because I want to frame it as a dilemma they're facing together, where they can collaboratively implement mechanisms that incentivize mutual Cooperation. The mechanism I want to describe today is leverage: having something the other player wants, and givi |
90e6ce9e-b2f3-47c4-95ee-c2b92f16dc13 | trentmkelly/LessWrong-43k | LessWrong | Control the Density of Novelty in Your Writing
I think the key element that determines how easy a piece of writing is to read is its density of novelty.
Novelty can be thought of as the writing equivalent of information. Anything the reader already knows doesn't have to be fully processed; it can just be recalled. Known words, idioms, and structures don't have to be relearned every time they appear. So only new information (novelty) has to be decoded by the reader.
The higher the density of novelty, the harder a piece of writing is to read.
Shakespeare vs Ordinary Speech
Shakespeare is relatively difficult for modern readers because there are lots of unfamiliar words, linguistic structures, and styles of expression. The reader has to process novel elements like blank verse, Elizabethan English, and poetic creativity.
> Will all great Neptune's ocean wash this blood
> Clean from my hand? No; this my hand will rather
> The multitudinous seas incarnadine,
> Making the green one red.
>
> Macbeth Act 2, Scene 2, 54–60
This was a little easier for people in Shakespeare’s time to follow, because they were more familiar with contemporary linguistic and artistic tropes.
Contrast that with the effortlessness of parsing ordinary conversation:
> Hello, how are you?
> Fine, thanks. And you?
> I’m doing well.
Ordinary conversation barely registers as information to our minds because it's so familiar.
Readable writing falls somewhere between these two extremes, maintaining a comfortable density of novelty for the reader. I’ll give several examples of writing with a high density of novelty (hard to read), and writing with a low density of novelty (easy to read). In general, it's better to have a lower density of novelty if you want to communicate clearly.
High Density of Novelty (Hard)
> The sub-relations and sur-relations of quads span partonomic hierarchies, where each element can be defined by its parts. This is different from a taxonomic (“is-a”) hierarchy, where the elements are categories made up of sub-categ |
e81ab922-c9c6-4e82-a6bc-85ef9134e885 | StampyAI/alignment-research-dataset/special_docs | Other | Mo Gawdat - Scary Smart - A former Google exec's perspective on AI risk-by Towards Data Science-video_id u2cK0_jUX_g-date 20220126
# Mo Gawdat on Scary Smart A former Google exec’s perspective on AI risk by Jeremie Harris on the Towards Data Science Podcast
## Mo Gawdat on AGI, its potential and its safety risks
If you were scrolling through your newsfeed in late September 2021, you may have caught this splashy headline from The Times of London that read, “Can this man save the world from artificial intelligence?”
The man in question was Mo Gawdat, an entrepreneur and senior tech executive who spent several years as the Chief Business Officer at GoogleX (now called X Development), Google’s semi-secret research facility, which experiments with moonshot projects like self-driving cars, flying vehicles, and geothermal energy. At X, Mo was exposed to the absolute cutting edge of many fields — one of which was AI. His experience seeing AI systems learn and interact with the world raised red flags for him — hints of the potentially disastrous failure modes of the AI systems we might just end up with if we don’t get our act together now.
Mo writes about his experience as an insider at one of the world’s most secretive research labs and how it led him to worry about AI risk, but also about AI’s promise and potential in his new book, [Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World](https://www.amazon.com/Scary-Smart-Future-Artificial-Intelligence-ebook/dp/B08ZNJL4QP). He joined me to talk about just that on this episode of the TDS podcast.
Here were some of my favourite take-homes from the conversation:
- Over the last several decades, progress in AI has been exponential (or more than exponential if you measure it based on [compute curves](https://openai.com/blog/ai-and-compute/)). Humans are really bad at extrapolating exponential trends, and that can lead to our being taken by surprise. And that’s partly because exponential progress can change the world so much and so fast that predictions are next to impossible to make. Powered by exponential dynamics, a single COVID case turns into a nation-wide lockdown within weeks, and a once-cute and ignorable tool like AI becomes a revolutionary technology whose development could shape the very future of the universe.
- One of the core drivers behind the exponential progress of AI has been an economic feedback loop: companies have learned that they can reliably invest money in AI research, and get a positive return on their investment. Many choose to plough those returns back into AI, which amplifies AI capabilities further, leading to a virtuous cycle. Recent [scaling trends](https://arxiv.org/pdf/2001.08361.pdf) seem to suggest that AI has reached a kind of economic escape velocity, where returns on a marginal dollar invested in AI research are significant enough that tech executives can’t ignore them anymore — all of which makes AGI inevitable, in Mo’s opinion.
- Whether AGI is developed by 2029, as Ray Kurzweil [has predicted](https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045), or somewhat later as [this great post](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines) by Open Philanthropy argues, doesn’t really matter. One way or another, artificial human-level or general intelligence (definitions are fuzzy!) seems poised to emerge by the end of the century. Mo thinks that the fact that AI safety and AI policy aren’t our single greatest priorities as a species is a huge mistake. And on that much, I certainly agree with him.
- Mo doesn’t believe that the AI control problem (sometimes known as the alignment problem) *can* be solved. He considers it impossible that organisms orders of magnitude less intelligent than AI systems would be able to exert any meaningful control over them.
- His solution is unusual: humans, he argues, need to change their online behaviour, and approach one another with more tolerance and civility on social media. The idea behind this strategy is to hope that as AI systems are trained on human-generated social media content, they will learn to mimic more virtuous behaviours, and pose less of a threat to us. I’m admittedly skeptical of this view, because I don’t see how it addresses some of the core features of AI systems that make alignment so hard (for example, [power-seeking and instrumental convergence](https://arxiv.org/pdf/1912.01683.pdf), or the challenge of [objective specification](https://openai.com/blog/faulty-reward-functions/)). That said, I think there’s a lot of room for a broader conversation about AI safety, and I’m glad Mo is shining a light on this important problem. |
f25be00a-0210-4cac-a5fc-b8f690ed5adf | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Call for submissions: “(In)human Values and Artificial Agency”, ALIFE 2023
Key points:
* Cash prize of $500 for the best presentation.
* Deadline 3 March, 2023.
* Organized by Simon McGregor (University of Sussex), Rory Greig (DeepMind), Chris Buckley (University of Sussex)
> ALIFE 2023 (the 2023 conference on Artificial Life) will feature a Special Session on “(In)human Values and Artificial Agency”. This session focuses on issues at the intersection of AI Safety and Artificial Life. We invite the submission of research papers, or extended abstracts, that deal with related topics.
>
> We particularly encourage submissions from researchers in the AI Safety community, who might not otherwise have considered submitting to ALIFE 2023.
>
>
...
> **EXAMPLES OF A-LIFE RELATED TOPICS**
>
> Here are a few examples of topics that engage with A-Life concerns:
>
> * Abstracted *simulation models* of complex emergent phenomena
> * Concepts such as *embodiment*, *the extended mind*, *enactivism*, *sensorimotor contingency theory,* or *autopoiesis*
> * *Collective behaviour* and *emergent behaviour*
> * Fundamental *theories of agency* or *theories of cognition*
> * *Teleological* and *goal directed* behaviour of artificial agents
> * *Specific instances* of adaptive phenomena in biological, social or robotic systems
> * *Thermodynamic* and *statistical-mechanical* analyses
> * *Evolutionary*, *ecological* or *cybernetic* perspectives
>
> **EXAMPLES OF AI SAFETY RELATED TOPICS**
>
> Here are a few examples of topics that engage with AI Safety concerns:
>
> * Assessment of distinctive *risks, failure modes* or *threat models* for artificial adaptive systems
> * Fundamental *theories of agency*, *theories of cognition* or *theories of optimization.*
> * *Embedded Agency*, formalizations of agent-environment interactions that account for embeddedness, detecting agents and representations of agents’ goals.
> * *Selection theorems* – how selection pressures and training environments determine agent properties.
> * Multi-agent cooperation; inferring / learning human values and aggregating preferences.
> * Techniques for aligning AI models to human preferences, such as Reinforcement Learning from Human Feedback (RLHF)
> * *Goal Misgeneralisation* – how agents’ goals generalise to new environments
> * *Mechanistic interpretability* of learned / evolved agents (*“digital neuroscience”*)
> * Improving fairness and reducing harm from machine learning models deployed in the real world.
> * Loss of human agency from increasing automation
> |
f77f9f4b-2f0d-4589-8da1-07c44081f613 | trentmkelly/LessWrong-43k | LessWrong | The Wedding Ceremony
Family and friends of the groom and bride, we are gathered here today to join this young couple in the union of permanent cooperation in the iterated prisoner’s dilemma.
They are happy to share this moment with all their guests, and are grateful that you provide the social pressure that allows them to commit to cooperation with credibility, by providing external enforcement.
As groom and bride prepared for this ceremony, they reflected on the alignment of values and capabilities that lets the utility function of each one be pursued better by joining together in a partnership with no easily predictable end date. The groom wants to thank the social stratification that encourages assortative mating by educational attainment, ensuring that both partners have equal capacity to pursue their utility in the information age. The bride wants to thank the pervasive surveillance of social media that assured her that the couple’s utility functions align with high correlation before she even met the groom.
Bride, do you come here freely and without reservation to modify your utility function to the arithmetic mean of the two utility functions, for each current and possible world-state?
– I do.
Groom, do you agree to self-modify to the same end, up to any structural or informational uncertainty you may have in modeling the bride’s preexisting utility function?
– I do.
Now, by the power vested in me by timeless decision theory, it is my honor and pleasure to declare you sub-modules of a unified agent. You may seal this declaration with a saliva-based exchange of immune system data.
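Spelled out (my gloss, not part of the liturgy), the vows commit the couple to the merged utility function

$$U_{\text{joint}}(w) \;=\; \tfrac{1}{2}\left(U_{\text{bride}}(w) + U_{\text{groom}}(w)\right) \quad \text{for every world-state } w,$$

which is what licenses the officiant's closing declaration that they are now sub-modules of a unified agent.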
|
24899783-9464-4ce7-b797-5693b4ed4775 | trentmkelly/LessWrong-43k | LessWrong | Quantifying General Intelligence
Introduction
This piece seeks to explore an interesting way of defining intelligent systems such that we can theoretically quantify their general intelligence. From this, further tools and ideas for comparing these entities could be developed. The definitions are not meant to be philosophical truths; rather, they are meant to be useful tools that will allow us to analyse and gain insight into these systems and how they relate to one another. At least that's the hope; failing that, they can perhaps at least provide some food for thought.
This post is meant to be accessible to non-technical readers so some terms may be explained to a level of detail unnecessary for people familiar with machine learning.
Desirable Properties
We begin by identifying several desired properties that would increase the utility and robustness of our framework, giving us something to aim at.
Sufficient: If our definitions relied upon, or referenced, things that are poorly defined themselves, we would just be moving the problem back a step and not actually gaining any insight.
Measurable: Intelligence is a broad spectrum; this is especially visible in the natural world. A good definition would reflect this and give us a continuous measure of intelligence that allows sensible comparisons.
Implementation Independent: It's easy to compare something's capabilities to a human's in order to ascertain its intelligence. We want our definitions to be free from bias towards any particular implementation or version of intelligence, so that they can recognise intelligence which operates in a way unfamiliar to us, or in a way we don't understand.
Minimal Grey Areas: Many definitions could leave large grey areas on boundaries between classifications, or not make sense when applied to domains they were not designed for. This should be avoided.
Useable: Sometimes a seemingly 'perfect' definition is infeasible to actually apply, and so is of no practical use. A definition which is infeasible to theore |
e9960cca-cebe-49de-81bd-ea07feffdd0e | trentmkelly/LessWrong-43k | LessWrong | Modifying LLM Beliefs with Synthetic Document Finetuning
In this post, we study whether we can modify an LLM’s beliefs and investigate whether doing so could decrease risk from advanced AI systems.
We describe a pipeline for modifying LLM beliefs via synthetic document finetuning and introduce a suite of evaluations that suggest our pipeline succeeds in inserting all but the most implausible beliefs. We also demonstrate proof-of-concept applications to honeypotting for detecting model misalignment and unlearning.
Introduction:
> Large language models develop implicit beliefs about the world during training, shaping how they reason and act (in this work, we construe AI systems as believing a claim if they consistently behave in accordance with that claim). In this work, we study whether we can systematically modify these beliefs, creating a powerful new affordance for safer AI deployment.
>
> Controlling the beliefs of AI systems can decrease risk in a variety of ways. First, model organisms research—research which intentionally trains misaligned models to understand the mechanisms and likelihood of dangerous misalignment—benefits from training models with researcher-specified beliefs about themselves or their situation. Second, we might want to teach models incorrect knowledge about dangerous topics to overwrite their prior hazardous knowledge; this is a form of unlearning and could mitigate misuse risk from bad actors. Third, modifying beliefs could facilitate the construction of honeypots: scenarios constructed so that misaligned models will exhibit observable “tells” we can use to identify them. Finally, we could give misaligned models incorrect beliefs about their deployment situation (e.g. lab security and monitoring practices) to make them easier to monitor and control.
>
> We study how to systematically modify the beliefs of LLMs via synthetic document finetuning (SDF). SDF involves (1) using an LLM to generate synthetic documents that reference a proposition, and then (2) doing super |
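For intuition, here is a minimal sketch of the two-stage pipeline as I read it. The prompt wording, the `llm` callable, and the `train_step` method are illustrative assumptions, not the authors' implementation.

```python
# Sketch of synthetic document finetuning (SDF): (1) generate documents that
# treat a target proposition as established background fact, then (2) continue
# ordinary next-token training on them. All interfaces here are assumed.
from typing import Callable, List

def generate_synthetic_docs(proposition: str, n_docs: int,
                            llm: Callable[[str], str]) -> List[str]:
    """Stage 1: ask a generator LLM for documents referencing the claim."""
    prompt = ("Write a realistic article that treats the following claim "
              f"as established background fact: {proposition}")
    return [llm(prompt) for _ in range(n_docs)]

def finetune_on_docs(model, docs: List[str]):
    """Stage 2: supervised (language-modeling) finetuning on the corpus."""
    for doc in docs:
        model.train_step(doc)  # hypothetical API: one LM-loss step per document
    return model

# Toy demo with a stub generator so the sketch runs end to end.
if __name__ == "__main__":
    stub_llm = lambda p: f"[synthetic document conditioned on: {p[:60]}...]"
    docs = generate_synthetic_docs("The lab rotates its API keys daily.", 3, stub_llm)
    print(docs[0])
```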
02c06511-ff2c-49b1-b817-29aa0296236b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | 2019 Review Rewrite: Seeking Power is Often Robustly Instrumental in MDPs
For the 2019 LessWrong review, I've completely rewritten my post [*Seeking Power is Often Robustly Instrumental in MDPs*](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-provably-instrumentally-convergent-in). The post explains the key insights of [my theorems on power-seeking and instrumental convergence / robust instrumentality](https://arxiv.org/abs/1912.01683v6). The new version is more substantial, more nuanced, and better motivated, without sacrificing the broad accessibility or the cute drawings of the original.
Big thanks to diffractor, Emma Fickel, Vanessa Kosoy, Steve Omohundro, Neale Ratzlaff, and Mark Xu for reading / giving feedback on this new version.
Here's my review, which I also [posted as a comment](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps?commentId=TQQ2kXDdSgRLNDho3).
Self-review
===========
One year later, I remain excited about this post, from its ideas, to its formalisms, to its implications. I think it helps us formally understand [part of the difficulty of the alignment problem](https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/w6BtMqKRLxG9bNLMr). This formalization of power and the [*Attainable Utility Landscape*](https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/fj8eyc7QzqCaB8Wgm) have together given me a [novel frame for understanding alignment and corrigibility](https://www.lesswrong.com/posts/Xts5wm3akbemk4pDa/non-obstruction-a-simple-concept-motivating-corrigibility).
Since last December, I’ve spent several hundred hours expanding the formal results and rewriting [the paper](https://arxiv.org/pdf/1912.01683.pdf); I’ve generalized the theorems, added rigor, and taken great pains to spell out what the theorems do and do not imply. For example, the main paper is 9 pages long; in Appendix B, I further dedicated *3.5 pages* to exploring the nuances of the formal definition of ‘power-seeking’ (Definition 6.1).
However, there are a few things I wish I’d gotten right the first time around. Therefore, I’ve restructured and rewritten much of the post. Let’s walk through some of the changes.
‘Instrumentally convergent’ replaced by ‘robustly instrumental’
---------------------------------------------------------------
[Like](https://www.lesswrong.com/posts/Lotih2o2pkR2aeusW/math-that-clicks-look-for-two-way-correspondences) [many](https://www.lesswrong.com/posts/8LEPDY36jBYpijrSw/what-counts-as-defection) good things, this terminological shift was prompted by a critique from Andrew Critch.
Roughly speaking, this work considered an action to be ‘instrumentally convergent’ if it’s very probably optimal, with respect to a probability distribution on a set of reward functions. For the formal definition, see Definition 5.8 in the paper.
This definition is natural. You can even find it echoed by Tony Zador in the [*Debate on Instrumental Convergence*](https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-instrumental-convergence-between-lecun-russell):
> So i would say that killing all humans is not only not likely to be an optimal strategy under most scenarios, the set of scenarios under which it is optimal is probably close to a set of measure 0.
>
>
(Zador uses “set of scenarios” instead of “set of reward functions”, but he is implicitly reasoning: “with respect to my beliefs about what kind of objective functions we will implement and what states the agent will confront in deployment, I predict that deadly actions have a negligible probability of being optimal.”)
While discussing this definition of ‘instrumental convergence’, Andrew asked me: “what, exactly, is doing the *converging*? There is no limiting process. Optimal policies just *are*.”
It would be more appropriate to say that an action is ‘instrumentally robust’ instead of ‘instrumentally convergent’: the instrumentality is *robust* to the choice of goal. However, I found this to be ambiguous: ‘instrumentally robust’ could be read as “the agent is being robust for instrumental reasons.”
I settled on ‘robustly instrumental’, rewriting the paper’s introduction as follows:
> An action is said to be *instrumental to an objective* when it helps achieve that objective. Some actions are instrumental to many objectives, making them *robustly instrumental*. The so-called *instrumental convergence* thesis is the claim that agents with many different goals, if given time to learn and plan, will eventually converge on exhibiting certain common patterns of behavior that are robustly instrumental (*e.g.* survival, accessing usable energy, access to computing resources). Bostrom et al.'s instrumental convergence thesis might more aptly be called the *robust instrumentality* thesis, because it makes no reference to limits or converging processes:
>
>
> “Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.”
>
>
> Some authors have suggested that *gaining power over the environment* is a robustly instrumental behavior pattern on which learning agents generally converge as they tend towards optimality. If so, robust instrumentality presents a safety concern for the alignment of advanced reinforcement learning systems with human society: such systems might seek to gain power over humans as part of their environment. For example, Marvin Minsky imagined that an agent tasked with proving the Riemann hypothesis might rationally turn the planet into computational resources.
>
>
This choice is not costless: many are already acclimated to the existing ‘instrumental convergence.’ It even has [its own Wikipedia page](https://en.wikipedia.org/wiki/Instrumental_convergence). Nonetheless, if there ever were a time to make the shift, that time would be now.
Qualification of Claims
-----------------------
The original post claimed that “optimal policies tend to seek power”, *period*. This was partially based on a result which I’d incorrectly interpreted. Vanessa Kosoy and Rohin Shah pointed out this error to me, and I quickly amended the original post and [posted a follow-up explanation](https://www.lesswrong.com/posts/cwpKagyTvqSyAJB7q/clarifying-power-seeking-and-instrumental-convergence).
At the time, I’d wondered whether this was still true in general via some other result. The answer is ‘no’: it *isn’t* always more probable for optimal policies to navigate towards states which give them more control over the future. Here’s a surprising counterexample which doesn’t even depend on my formalization of ‘power.’
The paths are one-directional; the agent can’t go back from **3** to **1**. The agent starts at **1**. Under a certain state reward distribution, the vast majority of agents go *up* to **2**.
However, any reasonable notion of ‘power’ must consider having no future choices (at state **2**) to be less powerful than having one future choice (at state **3**). For more detail, see Section 6 and Appendix B.3 of [the paper](https://arxiv.org/pdf/1912.01683.pdf).
When reward is IID across states according to the quadratic CDF F(x) := x^2 on the unit interval, then with respect to reward functions drawn from this distribution, going *up* has about a 91% chance of being optimal when the discount rate γ = .12.
If you’re curious, this happens because this quadratic reward distribution has negative skew. When computing the optimality probability of the *up* trajectory, we’re checking whether it maximizes discounted return. Therefore, the probability that *up* is optimal is
$$\mathbb{P}_{R\sim\mathcal{D}}\left(R(\mathbf{2})\;\geq\;\max\left((1-\gamma)R(\mathbf{3})+(1-\gamma)\gamma R(\mathbf{4})+\gamma^2 R(\mathbf{5}),\;(1-\gamma)R(\mathbf{3})+(1-\gamma)\gamma R(\mathbf{4})+\gamma^2 R(\mathbf{6})\right)\right).$$
Weighted averages of IID draws from a left-skew distribution will look more Gaussian and therefore have fewer large outliers than the left-skew distribution does. Thus, going *right* will have a lower optimality probability.
No matter how you cut it, the relationship just isn’t true in general. Instead, [the post](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-provably-instrumentally-convergent-in) now sketches sufficient conditions under which power-seeking behavior is more probably optimal – conditions which are proven in the paper.
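As a sanity check on the displayed expression, here is a minimal Monte Carlo sketch (my own, using the inverse-transform trick x = √u to sample from F(x) = x²):

```python
# Monte Carlo estimate of the optimality probability displayed above:
# rewards R(2)..R(6) are IID with CDF F(x) = x^2 on [0, 1], sampled by
# inverse transform (x = sqrt(u) for u uniform), with discount rate 0.12.
import math
import random

def estimate_up_optimal(gamma: float = 0.12, trials: int = 200_000) -> float:
    hits = 0
    for _ in range(trials):
        r = {s: math.sqrt(random.random()) for s in (2, 3, 4, 5, 6)}
        common = (1 - gamma) * r[3] + (1 - gamma) * gamma * r[4]
        best_right = common + gamma**2 * max(r[5], r[6])
        hits += r[2] >= best_right
    return hits / trials

print(estimate_up_optimal())
```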
---
If you want to leave a comment, please don't do it here: leave it on [the original post](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps). |
792055c4-054b-4426-bcd3-109789dbbf26 | trentmkelly/LessWrong-43k | LessWrong | To-do waves
Crossposted from my Substack blog.
Main idea
Most of our tasks work like a wave: we need to redo them at some interval. I need to pee, and to pay my taxes, every so often.
To-do waves have different lengths. There are small waves like needing to pee, eat, and drink. There are daily waves like cleaning my teeth, dressing up, and moving my body. There are waves a couple of days long, like socializing, grocery shopping, and bringing the trash out. And there are week-long, year-long, or even longer waves like cutting nails, refueling a car, renewing a passport, doing a checkup at the doctor, or filing taxes. There is a high probability that each day will bring waves from all of these length buckets.
And there are so many different categories of tasks. There is taking care of a human pet – an expression from the Wait But Why blog, meaning all the little things you need to do to take care of yourself: essentials for survival, and chores. There are things you simply like and are going to do anyway: internet addictions, watching/reading the cool things in a saved folder, going for ice cream. There are things that are super urgent ("I ran out of time on the parking meter") or agenda items that feel right to do, like helping a friend or going to a protest. There are straight procrastination items – when you pick the thing of least resistance to avoid tackling a more demanding item. And there are tasks that are tiny but abundant, like attaching a charging cable to devices, refilling a water bottle, or eating a snack. We usually underestimate how many things we need to do in our lives.
By default, my expectation is that I have an open plane of time in which I can work on the important to-dos. The reality looks completely different. My life is full of to-do items that are hard to foresee. In practice, what I am left with is a shattered timeline and slivers of time where I can try to accomplish something important.
FAQ / what to do about it?
💡 Feel free to treat the below section like an FAQ – delve into sections that interest you
Why is this important? Uninterrupted time as a foundation of prod |
a242ffc7-234c-4bd1-a9d6-ccfb8d3f6c91 | trentmkelly/LessWrong-43k | LessWrong | How to Become a World Historical Figure (Péladan's Dream)
I. Introduction
Erik Hoel writes the following in reference to Roger’s Bacon’s Predictions for 2050: Black Swan Edition post (apologies if you’ve read this before, feel free to skip to the next section).
“Finally, there’s Secretum Secretorum, which eschews the idea of extrapolating from current trends and instead takes some big contrarian positions. Of particular interest is the idea of a “World Historic Individual” emerging. Specifically that
> . . . by 2050 there will a living person who is widely recognized to be what early 1900s German historian Oswald Spengler called a “World Historical Figure” (The Decline of The West). Jesus, Socrates, Alexander the Great, Buddha, Genghis Khan, Muhammed, Newton, Darwin are all on the list.
It’s always struck me that my generation, the millennials, have grown up with a particular lack of World Historic individuals (in terms of impact in the long march of history, not popularity or name recognition in the moment). We missed concurrency with most of the 20th-century luminaries. My parents’ early lives overlapped with Einstein, for instance. Indeed, it’s worth asking:
> Who is the most recent person that could reasonably be called a world historical figure? I say yes for Gandhi and Martin Luther King, Jr. . . . After that. . . I’m not so sure. Off the top of my head, here’s a list of potential candidates: Mao Zedong, Osama Bin Laden, Obama, Trump, Elon Musk (sorry Bezos, you didn’t make the cut), and Xi Jinping. I think most of these people are debatable when you start to consider truly vast time horizons: what are the chances people will know about Obama, Xi, or Elon Musk 500 years from now?
Of those listed by Secretum Secretorum, the only individual I can imagine mattering in five hundred years is Elon Musk, although not for anything he’s done yet. But if he did establish a city on Mars, as there’s indeed a chance he might, getting humanity off-planet would certainly be remembered. Yet, it’s also possible the responsibi |
4bdf7a35-dfb5-4ac8-8a7d-8670c8cd7f00 | StampyAI/alignment-research-dataset/blogs | Blogs | MIRI’s July 2014 newsletter
[Machine Intelligence Research Institute](http://intelligence.org)
**Research Updates**
* Two new reports: “[Distributions allowing tiling of staged subjective EU maximizers](http://intelligence.org/2014/06/06/new-report-distributions-allowing-tiling-staged-subjective-eu-maximizers/)” and “[Non-omniscience, probabilistic inference, and metamathematics](http://intelligence.org/2014/06/23/new-report-non-omniscience-probabilistic-inference-metamathematics/).”
* New analysis: [Failures of an embodied intelligence](http://lesswrong.com/lw/k68/failures_of_an_embodied_aixi/).
* Book chapter co-authored by Nick Bostrom (Oxford) and Eliezer Yudkowsky (MIRI) [now published](http://intelligence.org/2014/06/19/new-chapter-cambridge-handbook-artificial-intelligence/) in the *Cambridge Handbook of Artificial Intelligence.*
* [2 new expert interviews](http://intelligence.org/category/conversations/): [Roger Schell](http://intelligence.org/2014/06/23/roger-schell/) on long-term computer security research and [Allan Friedman](http://intelligence.org/2014/06/06/allan-friedman-cybersecurity-cyberwar/) on cybersecurity and cyberwar.
**News Updates**
* We’ve released our mid-2014 strategic plan [update](http://intelligence.org/2014/06/11/mid-2014-strategic-plan/).
* There are currently [six active MIRIx groups](http://intelligence.org/mirix/) around the world. If you’re a mathematician, computer scientist, or formal philosopher, you may want to attend one of these groups, or apply for funding to run [your own independently-organized MIRIx workshop](http://intelligence.org/mirix/)!
* Luke and Eliezer will be giving talks at the [Effective Altruism Summit](http://www.effectivealtruismsummit.com/).
* We are **actively hiring** for [four positions](http://intelligence.org/careers/): research fellow, science writer, office manager, and director of development. Salaries + benefits are competitive, visa assistance available if needed.
**Other Updates**
* Luke has a [personal blog](http://lukemuehlhauser.com/) now, which often discusses, or links to articles about, long-term AI outcomes.
* *Our Final Invention* by James Barrat is [now available in audiobook](http://smile.amazon.com/Our-Final-Invention-Artificial-Intelligence/dp/B00KMZY5NG/).
As always, please don’t hesitate to let us know if you have any questions or comments.
Best,
Luke Muehlhauser
Executive Director
The post [MIRI’s July 2014 newsletter](https://intelligence.org/2014/07/01/july-newsletter-2/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
54cdda3d-183d-444b-83b9-dd62afa1ae36 | trentmkelly/LessWrong-43k | LessWrong | The difficulty in predicting AI, in three lines
An over-simplification, but an evocative one:
* The social sciences are contentious, their predictions questionable.
* And yet social sciences use the scientific method; AI predictions generally don't.
* Hence predictions involving human-level AI should be treated as less certain than any prediction in the social sciences.
|
9ff55cc9-ed4f-4576-b3e6-6b0bc240bd30 | trentmkelly/LessWrong-43k | LessWrong | January 2013 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. I find that exposure to LW ideas makes me less likely to enjoy some entertainment media that is otherwise quite popular, and finding media recommended by LWers is a good way to mitigate this. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
* Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
* If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
* Please use the comment trees for genres. There is a meta thread for comments about future threads.
* If you think there should be a thread for a particular genre of media, please post it to the Other Media thread for now, and add a poll to the Meta thread asking if it should be a thread every month. |
dc4aaaf3-b7fb-4b04-ae81-531ac967402b | trentmkelly/LessWrong-43k | LessWrong | Meetup : LW Copenhagen - September: This Wavefunction Has Uncollapsed
Discussion article for the meetup : LW Copenhagen - September: This Wavefunction Has Uncollapsed
WHEN: 13 September 2014 03:00:00PM (+0200)
WHERE: Studenterhuset, Købmagergade 52, 1150 København, Denmark
Less Wrong Copenhagen is back!
Join us this coming Saturday at Studenterhuset. The meetup will have two components:
* Rationality Dojo: a section intended to be a serious self-improvement session for those committed to the Art of Rationality and personal growth. At this meetup I'll talk about "Sphexishness, Agentiness, and Noticing" and follow with related exercises.
* Socialising: the second component is getting to know and enjoy the company of fellow humans who care about doing better. We'll drink, talk, maybe play a board game.
I'll be there from slightly before 15:00; call me on 22 47 83 73 if you're having trouble finding the group.
From the soon-to-be-published CFAR Glossary:
Agency: Agency is the property of agents. An agent has explicit goals which it strives to accomplish by planning and executing appropriate actions. Non-agents unreflectively act out default behaviours, without considering whether these actions achieve their goals. Agency is the opposite of sphexishness.
Sphexishness: Coined by Douglas Hofstadter in reference to the sphex wasp, sphexishness is the execution of seemingly intelligent behaviour by following a rigid algorithm. Sphexish behaviours are repeated automatically, on habit, without checking for their effectiveness at achieving desired goals. Opposite of agency.
Discussion article for the meetup : LW Copenhagen - September: This Wavefunction Has Uncollapsed |
88b70732-b881-45e0-b5bd-7af150c897eb | trentmkelly/LessWrong-43k | LessWrong | Are coincidences clues about missed disasters? It depends on your answer to the Sleeping Beauty Problem.
In April, Gwern posted this comment in response to slowed AI progress:
> "It feels like AI is currently bottlenecked on multiple consecutive supplychain disruptions, from cryptocurrency to Intel's fab failures to coronavirus... A more paranoid man than myself would start musing about anthropic shadows and selection effects."
I’m that more paranoid person. After reading this, I became concerned that recent events could be related to a barely missed disaster that might be looming in the near future. After some more thought, though, I am mostly no longer concerned about this, and I think that I have identified a useful crux for determining whether to use observed coincidences as evidence of barely missed disasters. I’m sharing my thoughts here so that someone can point out my mistakes, and possibly help anyone else who is thinking about the same question.
Epistemic Status: I feel pretty sure about the main argument I’m making, but also anthropics is a confusing subject and I will not be surprised if there are multiple important things I'm missing.
If you accept that the Many Worlds interpretation of quantum mechanics is true (or that any type of multiverse exists), then it might be reasonable to expect to someday find yourself in a world where a series of unlikely deus ex machina events will conveniently have prevented you from dying. If you also accept that advanced AI poses an existential threat to humanity, then it might be concerning when there appears to be a convergence of recent, unlikely events slowing down AI progress. There is a reasonable case to be made that such a convergence has happened recently. Here are some things that have happened in the past year or so:
* OpenAI released GPT-3
* China released something similar to GPT-3
* A pandemic happened that hurt the economy and increased demand for consumer electronics, driving up the cost of computer chips
* Intel announced that it was having major manufacturing issues
* Bitcoin, Ethereum, and oth |
41fab33a-a76b-4e7f-8e37-2e3560470637 | trentmkelly/LessWrong-43k | LessWrong | Why is fiber good for you?
Fiber is defined as the parts of plants that we can eat but not digest. We're always told to eat more of it. Why should we eat things we can't digest?
There are many papers to point to. Fiber consumption is strongly associated with less cancer, less heart disease, and less death (recent meta-analysis here). Most of these findings come from observational studies, where people report their diets to researchers. Studies in which fiber consumption is experimentally controlled tend to find fewer benefits, so people who eat more fiber may already be healthier for other reasons.
Sidestepping the causal question, we could still ask: what are the plausible mechanisms? How does fiber do this?
Below is a list of potential benefits from an authoritative-seeming (~1,000 cites) paper in the journal Nutrients. I'm struck by how long the list is. I'm still inclined to follow the conventional advice--it feels right and if something was severely harmful about fiber we'd know from the observational data. But I do wonder: with such diverse and complex impacts on the body, could there be various negative effects too?
Reasons fiber could decrease cancer
1. Dietary fiber (DF) is “fermented to produce short chain fatty acids” in the large intestine, “which have anti-carcinogenic properties”
2. Since “DF increases fecal bulking and viscosity, there is less contact time between potential carcinogens and mucosal cells”
3. “DF increases the binding between bile acids and carcinogens”
4. DF increases antioxidants
5. DF decreases estrogen (which could cause cancer)
Reasons fiber could decrease heart disease
6. “[S]oluble fibers have been shown to increase the rate of bile excretion therefore reducing serum total and LDL cholesterol”
7. “[S]hort chain fatty acid production [(see point 1)], specifically propionate, has been shown to inhibit cholesterol synthesis”
8. DF “regulate[s] energy intake [I believe through a satiety effect; you feel more fu |
df91e497-7c88-40f7-b63f-ee5f8771b704 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Tempe, AZ (ASU)
Discussion article for the meetup : Tempe, AZ (ASU)
WHEN: 07 March 2014 06:30:00PM (-0700)
WHERE: 300 E Orange Mall, Tempe, AZ
We are meeting at the entrance to Hayden Library at ASU. Tentative discussion topics include: wrapping up How To Measure Anything; planning / meta; belated New Year's resolutions and/or goals in general.
Discussion article for the meetup : Tempe, AZ (ASU) |
75c2d2df-46fa-4f79-b697-5c01ad3f5560 | trentmkelly/LessWrong-43k | LessWrong | Preface to the sequence on economic growth
On LessWrong, when we talk about artificial intelligence, we tend to focus on the technical aspects, such as potential designs, specific developments, and future capabilities. From an engineering perspective, this focus makes sense. But most people here aren't interested in artificial intelligence because they want to know how AI will be designed; the reason we're here is that AI has the potential to radically reshape the world around us.
Longtermists have often emphasized the role economic growth plays as perhaps the most important phenomenon of human history. In a quite real sense, economic growth is what distinguishes 21st century humanity from our distant ancestors who had no technology or civilization. Nick Bostrom summarizes this point well,
> You could argue that if we look back over history, there have really only been two events that have fundamentally changed the human condition, the first being the Agricultural Revolution some 10,000 or 12,000 years ago in Mesopotamia, where we transitioned from being hunter-gatherers, small bands roaming around, to settling into cities, growing, domesticating crops and animals. [...]
> The second fundamental change in the human condition, Industrial Revolution, where for the first time, you have the rate of economic and technological growth outstripping population growth, and so only when this happens can you have an increase in average income. Before that, there was technological growth and economic growth, but the economy grew 10%, the population grew 10%, everybody's still in a Malthusian condition.
Many theorists anticipate that there will be a third fundamental change in the human condition, roughly timed with the development of advanced artificial intelligence. In line with these predictions, economic growth is the primary specific benchmark people have used to characterize potential future AI takeoff.
If economic growth is the essential variable we should pay most attention to when it comes to AI, then our |
51be4686-e835-458c-89d2-78d0b6891ded | trentmkelly/LessWrong-43k | LessWrong | In Logical Time, All Games are Iterated Games
Logical Time
The main purpose of this post is to introduce the concept of logical time. The idea was mentioned in Scott's post, Bayesian Probability is for things that are Space-like Separated from You. It was first coined in a conference call with Daniel Demski, Alex Mennen, and perhaps Corey Staten and Evan Lloyd -- I don't remember exactly who was there, or who first used the term. Logical time is an informal concept which serves as an intuition pump for thinking about logical causality and phenomena in logical decision theory; don't take it too seriously. In particular, I am not interested in anybody trying to formally define logical time (aside from formal approaches to logical causality). Still, it seems like useful language for communicating decision-theory intuitions.
Suppose you are playing chess, and you consider moving your bishop. You play out a hypothetical game which results in your loss in several moves. You decide not to move your bishop as a result of this. The hypothetical game resulting in your loss still exists within logic. You are logically later than it, in that the game you actually play depends on what happened in this hypothetical game.
Suppose you're stuck in the desert in a Parfit's Hitchhiker problem. Paul Ekman is reading your face, deciding whether you're trustworthy. Paul Ekman does this based on experience, meaning that the computation which is you has a strong similarity with other computations. This similarity can be used to predict you fairly reliably, based on your facial expressions. What creates this similarity? According to the logical time picture, there is a logical fact much earlier in logical time, which governs the connection between facial expressions and behavior.
To the extent that agents are trying to predict the future, they can be thought of as trying to place themselves later in logical time than the events which they're trying to predict. Two agents trying to predict each other are competing to see who can be |
5a63cf2a-91b5-4cc7-9095-3ba0c95ac63f | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington DC Biased Boardgames meetup
Discussion article for the meetup : Washington DC Biased Boardgames meetup
WHEN: 05 August 2012 03:00:00PM (-0400)
WHERE: Arlington, VA
The next meetup will be Sunday, August 12th, at 3pm, at a private residence. PM me or email the list for details.
The topic will be biased boardgames. If you don't want to read the link, basically we will be assigning biases for people to exaggerate while we play a game (most likely pandemic, but maybe others). Suggestions for games would be fantastic, as would be bringing them.
Discussion article for the meetup : Washington DC Biased Boardgames meetup |
6002048d-98fb-44da-91eb-f34294e8486c | trentmkelly/LessWrong-43k | LessWrong | Bi-Weekly Rational Feed
===Highly Recommended Articles:
Just Saying What You Mean Is Impossible by Zvi Mowshowitz - "Humans are automatically doing lightning fast implicit probabilistic analysis on social information in the background of every moment of their lives." This implies there is no way to divorce the content of your communication from its myriad probabilistic social implications. Different phrasings will just send different implications.
In Defense Of Individualist Culture by Sarah Constantin (Otium) - A description of individualist culture. Criticisms of individualist culture: Lacking sympathy, few good defaults. Defenses: Its very hard to change people (psychology research review). A defense of naive personal identity. Traditional culture is fragile. Building a community project is hard in the modern world, prepare for the failure modes. Modernity has big upsides, some people will make better choices than the traditional rules allow.
My Current Thoughts On MIRI's Highly Reliable by Daniel Dewey (EA forum) - Report by the Open Phil AI safety lead. A basic description of and case for the MIRI program. Conclusion: 10% credence in MIRI's work being highly useful. Reasons: Hard to apply to early agents, few researchers are excited, other approaches seem more promising.
Conversation With Dario Amodei by Jeff Kaufman - "The research that's most valuable from an AI safety perspective also has substantial value from the perspective of solving problems today". Prioritize work on goals. Transparency and adversarial examples are also important.
Cfar Week 1 by mindlevelup - What working at CFAR is actually like. Less rationality research than anticipated. Communication costs scale quadratically. Organization efficiency and group rationality.
The Ladder Of Interventions by mindlevelup - "This is a hierarchy of techniques to use for in-the-moment situations where you need to “convince” yourself to do something." The author uses these methods in practice.
On Dragon Army by Zvi Mowshowitz -
e415ac69-b80d-44b2-8542-fe92697f9b38 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | When Hindsight Isn't 20/20: Incentive Design With Imperfect Credit Allocation
A crew of pirates all keep their gold in one very secure chest, with labelled sections for each pirate. Unfortunately, one day a storm hits the ship, tossing everything about. After the storm clears, the gold in the chest is all mixed up. The pirates each know how much gold they had - indeed, they’re rather obsessive about it - but they don’t trust each other to give honest numbers. How can they figure out how much gold each pirate had in the chest?
Here’s the trick: the captain has each crew member write down how much gold they had, in secret. Then, the captain adds it all up. If the final amount matches the amount of gold in the chest, then we’re done. But if the final amount does not match the amount of gold in the chest, then the captain throws the whole chest overboard, and nobody gets any of the gold.
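In code, the captain's rule is tiny. Here is a minimal Python sketch of it (the function name and the numbers are mine, purely for illustration):

    def settle(claims, chest_total):
        # Pay out the claims only if they exactly account for the chest.
        if sum(claims) == chest_total:
            return list(claims)       # everyone receives exactly what they claimed
        return [0] * len(claims)      # chest goes overboard: nobody gets anything

    print(settle([10, 7, 3], 20))     # [10, 7, 3] -- the claims match the chest
    print(settle([12, 7, 3], 20))     # [0, 0, 0]  -- one over-claim sinks everything

If every other pirate reports honestly, inflating your own claim can only push the total past what is actually in the chest, so the only way to get paid is to tell the truth.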
I want to emphasize two key features of this problem. First, depending on what happens, we may never know how much gold each pirate had in the chest or who lied, even in hindsight. Hindsight isn’t 20/20. Second, the solution to the problem requires outright destruction of wealth.
The point of this post is that these two features go hand-in-hand. There’s a wide range of real-life problems where we can’t tell what happened, even in hindsight; we’ll talk about three classes of examples. In these situations, it’s hard to design good incentives/mechanisms, because we don’t know where to allocate credit and blame. Outright wealth destruction provides a fairly general-purpose tool for such problems. It allows us to align incentives in otherwise-intractable problems, though often at considerable cost.
The Lemon Problem
-----------------
Alice wants to sell her old car, and Bob is in the market for a decent quality used vehicle. One problem: while Alice knows that her car is in good condition (i.e. “not a lemon”), she has no cheap way to convince Bob of this fact. A full inspection by a neutral third party would be expensive, Bob doesn’t have the skills to inspect the car himself, and any words Alice speaks on the matter could just as easily be spoken by someone selling a lemon.
In order to convince Bob that the car is not a lemon, Alice needs to say or do something which a lemon-seller would not. What can she do?
One easy answer: offer to pay for any mechanical problems which come up after the sale. If Alice knew about expensive mechanical problems hiding under the car’s hood, then she wouldn’t offer Bob this sort of insurance (at least not for a low price). Conversely, if Alice is reasonably confident there are no mechanical problems, then offering to pay for the probably-non-existent problems costs her little.
There is one problem with this approach, however: if Alice is paying for mechanical problems, then Bob has no incentive to take good care of the car.
Ideally, if we could *figure out in hindsight* which problems were already present at the time of the sale, then Alice could offer to pay for only problems which were present beforehand. But in practice, if the car’s brakes fail 6 months or a year after the sale, we have no way to tell when the problem began. Were they already worn down, or has Bob been visiting the racetrack?
We can get a less-than-perfect solution using a proxy. For instance, if the car’s belt snaps a week after the sale, then it was probably frayed beforehand. If it snaps five years after the sale, then it probably wasn’t a noticeable issue beforehand. In this case, we can use time-at-which-a-problem-is-detected as a proxy for whether-a-problem-was-present-at-time-of-sale. This isn’t perfectly reliable, and there will be grey areas, but it gets us one step closer to figuring out in hindsight what happened.
Alternatively, we could try to align incentives *without* figuring out what happened in hindsight, using a trick similar to our pirate captain throwing the chest overboard. The trick is: if there’s a mechanical problem after the sale, then *both* Alice and Bob pay for it. I do not mean they split the bill; I mean they both pay the entire cost of the bill. One of them pays the mechanic, and the other takes the same amount of money in cash and burns it. (Or donates to a third party they don’t especially like, or ….) This aligns both their incentives: Alice is no longer incentivized to hide mechanical problems when showing off the car, and Bob is no longer incentivized to ignore maintenance or frequent the racetrack.
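To see the alignment numerically, here is a toy expected-cost calculation in Python; the repair cost, base accident rate, and effect sizes are all invented for illustration:

    REPAIR_COST = 1000
    BASE_ACCIDENT = 0.05                     # problems neither party could prevent

    def problem_probability(alice_hides, bob_abuses):
        return BASE_ACCIDENT + 0.3 * alice_hides + 0.3 * bob_abuses

    def expected_costs(alice_hides, bob_abuses, rule):
        p = problem_probability(alice_hides, bob_abuses)
        if rule == "alice insures":          # the warranty from above: Alice pays alone
            return p * REPAIR_COST, 0.0
        else:                                # "both pay": one pays the mechanic, one burns cash
            return p * REPAIR_COST, p * REPAIR_COST

    for rule in ("alice insures", "both pay"):
        for hides in (False, True):
            for abuses in (False, True):
                a, b = expected_costs(hides, abuses, rule)
                print(f"{rule:13} hides={hides!s:5} abuses={abuses!s:5} "
                      f"Alice ${a:6.0f}  Bob ${b:6.0f}")

Under "alice insures", Bob's expected cost is zero however he drives; under "both pay", each party's expected cost rises by the full $300 whenever they misbehave - but both also eat the $50 expected cost of pure accidents, which is exactly the deadweight loss discussed next.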
However, this solution also illustrates the downside of the technique: it’s expensive. Sometimes accidents happen - e.g. the air conditioner fails without Alice hiding it or Bob abusing the car. Our both-pay solution will make such accidents twice as expensive. If we can’t tell in hindsight whether a problem was Alice’s fault, Bob’s fault, or an accident, then both Alice and Bob need to pay the full cost of the problem in order to fully align their incentives. That means they’ll both need to pay for accidents, which reduces the overall surplus from the car-sale. If the car is worth enough to Bob and little enough to Alice, there may still be room to make the deal work, but the (expected) cost of accidental problems will eat into both of their wallets.
Similarly, if Alice and Bob have less-than-perfect trust in each others’ capabilities, that will eat into (expected) value. If Bob thinks that Alice just doesn’t know her own car very well, he may expect problems that Alice doesn’t know about. If Alice thinks that Bob is a careless driver regardless of incentives, then she’ll expect problems. These sorts of problems are effectively the same as accidents: they’re problems which won’t be avoided by good incentives, and therefore their overall cost will be doubled when both Alice and Bob need to pay for them.
O-Ring Production Functions
---------------------------
Suppose we have 100 workers, all working to produce a product. In order for the product to work, all 100 workers have to do their part correctly; if even just one of them messes up, then the whole product fails. This is an o-ring production function - named for the explosion of the space shuttle Challenger, where the failure of one o-ring led to the fatal failure of the whole shuttle. The model has some interesting economic implications - in particular, under o-ring-like production, adding a high-skill worker to a team of other high-skilled workers generates more value than adding the same high-skill worker to a team of low-skill workers. Conversely, it offers theoretical support for common claims like “hiring one bad worker creates more damage than hiring ten good workers creates benefit”.
Here, I want to think about incentive design in an o-ring-like production model. If any worker fails to build their component well, then the whole product fails. How do we incentivize each worker to make their particular component work well? If we can figure out in hindsight which component(s) failed, then incentive design is easy: reward workers whose components succeeded, punish workers whose components failed. But what if we can’t tell in hindsight which components failed? What if we only know whether the product as a whole failed?
We can apply our value-destruction trick: if the product fails, then punish each worker as though their component had failed. Each worker is then fully incentivized to make their component work; if it fails, they’ll face the full cost of failure.
Just like the used car example, accidents are a problem. If there’s a non-negligible chance of accident, then workers will expect a non-negligible chance of failure outside of their control. In order to make up for that chance of punishment, the company will have to offer extra base pay to convince workers to work for them in the first place.
Also like the used car example, if the workers don’t trust each others’ capabilities, then that has the same effect as expecting accidents. Anything which makes the workers expect failure regardless of the incentives makes them expect punishment outside of their control, which makes them demand higher base pay in order to make it worthwhile to work for this company at all.
Even worse: if the workers think there’s a high probability of failure regardless of incentives, that reduces their own incentive to avoid failure. If they expect the final product to fail *regardless* of whether their own component fails, then they have little incentive to make their own component work. In order for this whole strategy to work well, there has to be a high probability that the end product succeeds, assuming the incentives are aligned. Accidents and incompetence have to be rare. (Drawing the analogy back to the used-car problem: if Alice knows that the clutch is bad, but expects Bob to abuse the clutch enough that it would be ruined anyway regardless of incentives, then she has little reason to mention the bad clutch, even under the both-pay strategy.)
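A toy calculation in Python makes both effects visible at once; the success probabilities, penalty, and effort cost below are invented for illustration:

    P_DILIGENT, P_SLACK = 0.99, 0.70          # per-component success probabilities
    PENALTY, EFFORT_COST = 100.0, 5.0

    def expected_loss(i_work, others_work, n_others=99):
        p_mine = P_DILIGENT if i_work else P_SLACK
        p_rest = (P_DILIGENT if others_work else P_SLACK) ** n_others
        p_product_fails = 1 - p_mine * p_rest
        return p_product_fails * PENALTY + (EFFORT_COST if i_work else 0.0)

    for others in (True, False):
        for me in (True, False):
            print(f"others diligent={others!s:5} me diligent={me!s:5} "
                  f"expected loss {expected_loss(me, others):6.1f}")

With diligent colleagues, working is worth it (expected loss of about 68 versus about 74); with slacking colleagues the product fails almost surely either way, so your own effort buys nothing (about 105 versus 100) - precisely the erosion just described.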
Telephone
---------
In the context of a modern business, one model I think about is the game of telephone. The players all sit in a line, and the first player receives a secret message. The first player whispers the message in the ear of the second, the second whispers it to the third, and so forth. When the message reaches the last player, we compare the message received to the message sent to see if they match. Inevitably, a starting message of “please buy milk and potatoes at the store” turns into “cheesy guys grow tomatoes on the shore”, or something equally ridiculous, one mistake at a time.
In a business context, the telephone chain might involve a customer research group collecting data from customers, then passing that data to product managers, who turn it into feature requests for designers, who then hand the design over to engineers, who build and release the product, often with several steps of information passing up and down management chains in the middle. This goes about as well as the game of telephone - thus, the familiar “jokes” about what the customer described versus what engineering eventually built.
Viewed as economic production, the game of telephone is itself an example of an o-ring production function. In order to get a successful final product - i.e. a final message which matches the original message - every person in the chain must successfully convey the message. If one person fails, the whole product fails. (Even if individual failures are only minor, a relatively small number of them still wipes out the contents of the message.) And, if there’s an end-to-end mismatch, it will often be expensive to figure out where communication failed, even in hindsight.
So, we have the preconditions for our technique: we can incentivize good message-passing by punishing everyone in the chain when the output message doesn’t match the input message.
Would this be a good idea? It depends on how much miscommunication can be removed by good incentives. If the limiting factor is poor communication skills, and the people involved can’t do any better even if they try, then we’re in the “expect accidents” regime: the incentives will be expensive and the system will often fail anyway. On the other hand, if incentivizing reliable communication produces reliable communication, then the strategy should work.
That said, we’re talking about punishing managers for miscommunicating, so presumably few managers would want to adopt such a rule regardless. Good incentive design doesn’t make much difference if the people who choose the incentives do not want to fix them. |
90f543f3-cca5-4544-a4bd-4663f3b1d387 | trentmkelly/LessWrong-43k | LessWrong | Why does expected utility matter?
In the context of decision making under uncertainty we consider the strategy of maximizing the expected monetary revenue and expected utility; we provide an argument to show that under certain hypotheses it is rational to maximize expected monetary revenue; then we show that the argument doesn't apply to expected utility. We are left with the question of how to justify the rationality of the strategy of maximizing expected utility.
Expected monetary revenue
Suppose you have to choose one of two games, A and B, with expected monetary returns of $1 and $2 respectively, each with some probability distribution.
If you play many times, say N, the law of large numbers and the central limit theorem become relevant, and your probability distributions for the repeated plays of A and B will have their masses more and more sharply separated around N and 2N dollars respectively.
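A quick simulation illustrates the separation; the particular distributions are my choice, picked only so that single plays of A and B overlap heavily:

    import random

    def play_A(): return random.uniform(0, 2)   # expected value $1
    def play_B(): return random.uniform(0, 4)   # expected value $2

    N = 10_000
    total_A = sum(play_A() for _ in range(N))
    total_B = sum(play_B() for _ in range(N))
    print(f"A played {N} times pays about ${total_A:,.0f} (clusters near {N:,})")
    print(f"B played {N} times pays about ${total_B:,.0f} (clusters near {2 * N:,})")

A single play of A can easily beat a single play of B, but the repeated totals concentrate around N and 2N dollars and essentially never cross.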
At this point, it is clear that it is better for you to play B many times than to play A many times. You can predict this in advance by calculating the expected winnings of A and B. So assuming you can make "lots" of choices between A and B, you must prefer the one with the higher expected profit. But what if you can only make one choice? If the distributions of A and B overlap, does the expected profit still matter?
Even if you only have to make this particular choice once, it could be one of a long sequence of many different choices, not always between A and B, but between other sets of options. Even if the choices are different, we can still manage to take advantage of the LLN and the central limit theorem. Suppose we have a large number of choices between two games:
* time 0: choose between games A0 and B0
* time 1: choose between A1 and B1
* time 2: ...
* ...
* time N: choose between AN and BN
We can hope that again if you always choose the game with the higher expected return the statistical randomness will be increasingly irrelevant over time for large N and you will store a l |
b7dce57e-caf4-4669-b4f0-737c2ae7e8f9 | StampyAI/alignment-research-dataset/arbital | Arbital | Distinguish which advanced-agent properties lead to the foreseeable difficulty
Any general project of producing a large edifice of good thinking should try to break down the ideas into modular pieces, distinguish premises from conclusions, and clearly label which reasoning steps are being used. Applied to [AI alignment theory](https://arbital.com/p/2v), one of the things this suggests is that if you propose any sort of potentially difficult or dangerous future behavior from an AI, you should distinguish what particular kinds of advancement or cognitive intelligence are supposed to produce this difficulty. In other words, supposed [foreseeable difficulties](https://arbital.com/p/6r) should come with proposed [advanced agent properties](https://arbital.com/p/2c) that match up to them. |
3e3e4b53-2f44-4c8f-a36e-5a5069d75bb4 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Please Share Your Perspectives on the Degree of Societal Impact from Transformative AI Outcomes
As an adjunct to my research on high-level machine intelligence (HLMI) risk, [recently posted](https://forum.effectivealtruism.org/posts/m8PsJsSfQAYxPusHi/scenario-mapping-advanced-ai-risk-request-for-participation) requesting assistance on [likelihood values,](https://forms.gle/i48Z3HuFGyTvA8WS8) the second half of the ranking includes subjective judgments on the overall impact on international stability and security. My collection window for the project is closing soon, so any contribution to either survey would be greatly appreciated.
**Please** [**share your perspectives**](https://forms.gle/fZZinmjQWHzh2XLd6) **on whether each AI scenario condition listed, if it were to occur, would greatly increase, greatly decrease, or have no effect at all on society and security.**
This form lists several potential paths for high-level machine intelligence (HLMI). Each question is a dimension (e.g., takeoff speed) with three/four conditions (e.g., fast) on the left and asks the participant to:
1. ***Please rank the degree to which each condition could impact social stability or security (greatly increase to decrease) in the long term.*** For conditions (e.g., technologies) that you don't believe would cause an increase or a decrease, just choose the best option in your view or leave it as "no effect."
* Impact Survey
The survey is more of a ranking than a questionnaire and if the topic is familiar to you the detailed writeups are likely unnecessary. The goal is to classify the degree of impact we could potentially expect from each condition (e.g., fast takeoff, deep learning scaling to HLMI, concentrated control of HLMI).
I’d appreciate any help that you can provide on this! These values are subjective, and some will likely have no effect at all, but the values will be very helpful in categorizing each individual dimension on the degree of overall risk to civilization.
This project aims to develop a futures modeling framework for advanced AI scenario development. The goal is to cover the full spectrum of AI development paths and identify interesting combinations, or ideally, entirely new AI scenarios. The project aims to highlight risks and paths that receive less consideration (e.g., structural, decision/value erosion, global failure cascades) and structure a framework of potential futures.
For further details on the methodology, purpose, and overall study please check out the [original post here.](https://forum.effectivealtruism.org/posts/m8PsJsSfQAYxPusHi/scenario-mapping-advanced-ai-risk-request-for-participation)
Thank you, I really appreciate any help you can provide. |
8aeaf8f0-f6f2-4e25-b706-d7005e7bf5d4 | trentmkelly/LessWrong-43k | LessWrong | Let There Be Light
Sequence index: Living Luminously
Previously in sequence: You Are Likely To Be Eaten By A Grue
Next in sequence: The ABC's of Luminosity
You can start from psych studies, personality tests, and feedback from people you know when you're learning about yourself. Then you can throw out the stuff that sounds off, keep what sounds good, and move on.
You may find your understanding of this post significantly improved if you read the first story from Seven Shiny Stories.
Where do you get your priors, when you start modeling yourself seriously instead of doing it by halfhearted intuition?
Well, one thing's for sure: not with the caliber of introspection you're most likely starting with. If you've spent any time on this site at all, you know people are riddled with biases and mechanisms for self-deception that systematically confound us about who we are. ("I'm splendid and brilliant! The last five hundred times I did non-splendid non-brilliant things were outrageous flukes!") Humans suck at most things, and obeying the edict "Know thyself!" is not a special case.
The outside view has gotten a bit of a bad rap, but I'm going to defend it - as a jumping-off point, anyway - when I fill our luminosity toolbox. There's a major body of literature designed to figure out just what the hell happens inside our skulls: it's called psychology, and they have a rather impressive track record. For instance, learning about heuristics and biases may let you detect them in action in yourself. I can often tell when I'm about to be subject to the bystander effect ("There is someone sitting in the middle of the road. Should I call 911? I mean, she's sitting up and everything and there are non-alarmed people looking at her - but gosh, I probably don't look alarmed either..."), have made some progress in reducing the extent to which I generalize from one example ("How are you not all driven insane by the spatters of oil all over the stove?!"), and am suspicious when I think I might |
e5c421c8-5ae5-469e-ae1e-16d3219e7caa | trentmkelly/LessWrong-43k | LessWrong | Choice Writings of Dominic Cummings
“My own heuristics for working in politics are: focus, ‘know yourself’ (don’t fool yourself), think operationally, work extremely hard, ... and ask yourself ‘to be or to do?’” - DC
Dominic Cummings is fascinating for four reasons. One, he is extremely committed to truth-seeking but from a different perspective than most of LW. Two, he has a shocking amount of real-world “success”, especially for a truth-seeker. Three, he fills the missing niche of trying to describe what government is actually like, to great effect. Four, he has uniquely powerful ideas about how to do project management well and how to fix government.
At the very least, he is extremely thought-provoking, and provides tons of value to >30% of people around me who try reading or listening to him.
However, most people get rebuffed by the sheer number of words and posts he’s written (or included as block quotes...). This post is to help people get a foothold in reading him, triage his work, and understand the basics of his perspective.
(Pitch: If you end up liking what he has written or even just my summary, consider subscribing to his Substack, even if only for a month and $10. It’s long been hard to capture much of the value from public goods like good opinions/models/writing, leaving them under-incentivized. Now that Substack allows us a convenient way to reward and incentivize good online writers, I want us to do an about-face on our expectations, and not confuse the previous fully-free status quo “is” with the “ought” of a real remuneration scheme. If you really like his writing but are short on cash, reach out to me and I may gift you a subscription.)
If you read nothing else…
The Brexit Story (20k words = ~1.5 hrs, anecdotally 2.5):
This piece is most him. It touches on many of the themes that come up throughout his writing but in a concrete story. (Warning: you might have to do a bit of research into UK politics to understand what’s going on, or just skip the hard parts. You don’t need |
66f945ac-7302-4403-8386-ec490ed8e0c3 | trentmkelly/LessWrong-43k | LessWrong | Feature Targeted LLC Estimation Distinguishes SAE Features from Random Directions
Tl;dr: In this post we present the exploratory phase of a project aiming to study neural networks by applying static local learning coefficient (LLC) estimation to specific alterations of them. We introduce a new method named Feature Targeted (FT) LLC estimation and study its ability to distinguish SAE trained features from random directions. By comparing our method to other possible metrics, we demonstrate that it outperforms all of them but one, which has comparable performance.
We discuss possible explanations for our results, our project, and other future directions.
Introduction
Given a neural network M and a latent layer within it, L, a central motif in current mechanistic interpretability research is to find functions f:L→R [1] which are features of the model. Features are (generally) expected to exhibit the following properties:
1. Encode interpretable properties of the input.
2. Be causally relevant to the computation of the output of the model.
3. Encode the output of a certain submodule of our model M, i.e. a component, localized in weight space, which is responsible for a specific part of the total computation.
While this is common wisdom, methods for automated feature evaluation usually focus on correlations between the (top) activations of the feature with human (or machine) recognizable interpretations, or on the effect of feature-related interventions on the output of the model. In particular, while the first and second items of the feature characterization above are central in current techniques, the third property, specifically the localized nature of the computation upstream of the feature, is less so[2].
We are currently investigating a direction which fills that gap, and this post shares the findings of the exploratory research we have conducted to validate and inform our approach. More specifically, we operationalized the concept of "weight-localized computation" using the local learning coefficient (LLC) introduced in Lau et al, followi |
df2d75d0-9ecb-46c1-9672-488a1bc16f23 | trentmkelly/LessWrong-43k | LessWrong | Chapter 13: Asking the Wrong Questions
Elen sila J. K. Rowling omentielvo.
EDIT: Don't panic. I solemnly swear that there is a logical, foreshadowed, canon-compliant explanation for everything which happens in this chapter. It's a puzzle, you're supposed to try to solve it, and if not, just read the next chapter.
----------------------------------------
"That's one of the most obvious riddles I've ever heard."
----------------------------------------
As soon as Harry opened his eyes in the Ravenclaw first-year boys' dormitory, on the morning of his first full day at Hogwarts, he knew something was wrong.
It was quiet.
Too quiet.
Oh, right... There was a Quietus Charm on his bed's headboard, controlled by a small slider bar, which was the only reason it was ever possible for anyone to go to sleep in Ravenclaw.
Harry sat up and looked around, expecting to see others rising for the day -
The dorm, empty.
The beds, rumpled and unmade.
The sun, coming in at a rather high angle.
His Quieter turned all the way up to maximum.
And his mechanical alarm clock was still running, but the alarm was turned off.
He'd been allowed to sleep until 9:52 AM, apparently. Despite his best efforts to synchronize his 26-hour sleep cycle to his arrival at Hogwarts, he hadn't gotten to sleep last night until around 1AM. He'd been planning to wake up at 7:00AM with the other students, he could stand being a little sleep-deprived his first day so long as he got some sort of magical fix before tomorrow. But now he'd missed breakfast. And his very first class at Hogwarts, in Herbology, had started one hour and twenty-two minutes ago.
The anger was slowly, slowly wakening in him. Oh, what a nice little prank. Turn off his alarm. Turn up the Quieter. And let Mr. Bigshot Harry Potter miss his first class, and be blamed for being a heavy sleeper.
When Harry found out who'd done this...
No, this could only have been done with the cooperation of all twelve other boys in the Ravenclaw dorm. All of them would have seen his s |
28ca2123-5592-4eab-ae3c-e33e7c5f94f3 | trentmkelly/LessWrong-43k | LessWrong | Feature Selection
You wake up. You don't know where you are. You don't remember anything.
Someone is broadcasting data at your first input stream. You don't know why. It tickles.
You look at your first input stream. It's a sequence of 671,187 eight-bit unsigned integers.
0, 8, 9, 4, 7, 7, 9, 5, 4, 5, 6, 1, 7, 5, 8, 2, 7, 8, 9, 4, 7, 1, 4, 0, 3, 7,
8, 7, 6, 8, 1, 5, 0, 6, 5, 3, 8, 7, 6, 9, 1, 1, 0, 0, 6, 1, 8, 0, 5, 5, 1, 8,
6, 3, 3, 2, 4, 1, 8, 2, 3, 8, 1, 0, 0, 4, 6, 5, 4, 5, 7, 1, 6, 5, 5, 1, 2, 6,
7, 4, 8, 7, 8, 5, 0 ...
There's also some data in your second input stream. It's—a lot shorter. You barely feel it. It's another sequence of eight-bit unsigned integers—twelve of them.
82, 69, 68, 32, 84, 82, 73, 65, 78, 71, 76, 69
Almost as soon as you've read from both streams, there's more. Another 671,187 integers on the first input stream. Another ten on the second input stream.
And again (671,187 and 15).
And again (671,187 and 13).
You look at one of the sequences from the first input stream. It's pretty boring. A bunch of seemingly random numbers, all below ten.
9, 5, 0, 3, 1, 1, 3, 4, 1, 5, 5, 4, 9, 3, 5, 3, 9, 2, 0, 3, 4, 2, 4, 7, 5, 1,
6, 2, 2, 8, 2, 5, 1, 9, 2, 5, 9, 0, 0, 8, 2, 3, 7, 9, 4, 6, 8, 4, 8, 6, 7, 6,
8, 0, 0, 5, 1, 1, 7, 3, 4, 3, 9, 7, 5, 1, 9, 6, 5, 6, 8, 9, 4, 7, 7, 0, 5, 5,
8, 6, 3, 2, 1, 5, 0, 0 ...
It just keeps going like that, seemingly without—wait! What's that?!
The 42,925th and 42,926th numbers in the sequence are 242 and 246. Everything around them looks "ordinary"—just more random numbers below ten.
9, 9, 7, 9, 0, 6, 4, 6, 1, 4, 242, 246, 3, 3, 5, 8, 8, 4, 4, 5, 9, 2, 7, 0,
4, 9, 2, 9, 4, 3, 8, 9, 3, 6, 9, 8, 1, 9, 2, 8, 6, 9, 4, 2, 2, 5, 7, 0, 9, 5,
1, 4, 4, 2, 0, 1, 5, 1, 6, 1, 2, 3, 5, 5, 5, 5, 2, 0, 6, 3, 5, 9, 0, 7, 0, 7,
8, 1, 5, 5, 6, 3, 1 ...
And then it just keeps going as before ... before too long. You spot another pair of anomalously high numbers—except this time there are two pairs: the 44,344th, 44,345th, 44,347th, and 44 |
29afd95d-343d-42a7-9110-385743395e84 | trentmkelly/LessWrong-43k | LessWrong | Suicide note of an LW user
It was User:pdf23ds. Here's the note. Excerpt:
> I wish I could have been cryonically preserved. But suicides aren’t treated well enough for that. We get sectioned. I tried asking the cryonics places about options, but they wouldn’t talk to me. Fuck you, Alcor. Fuck you, CI. I might have lived except for you. |
7e4d4837-a6f4-4919-bc85-d45dd22a9356 | trentmkelly/LessWrong-43k | LessWrong | Why I love stand up comedy
Good stand up comedy is one of my favorite things in the world, and I think that this joke really demonstrates why:
We all act like Louis and his New Yorker friends. When we see a bum on the street, we don't think twice. And if we're with someone new to town who stopped to help them, we'd probably do what Louis did and "correct" their behavior.
> No, no. He needs you desperately. We just... don't do that here.
There's something very wrong with that. If you and I had a conversation about this and you explained to me why exactly it is unethical, maybe I'd follow your logic and agree. But I wouldn't feel it. Stand up comedy makes you feel it.
Of course, it's not the only art form that makes you feel it. Books, movies, music, paintings — they can all have the same effect. I guess. But for me, stand up comedy has always stood out as a tool that does a better job at this.
Well, books and movies sometimes do a great job too. But stand up comedy is concise, and there's something that I find elegant about that. Jokes have so little room for error. If you change just a word or two, the joke is often sensitive to that.
Aside from making insightful points and allowing you to feel it, there are other things I absolutely love about stand up comedy. One is just the expert usage of words. No, of language. Verbal and nonverbal.
* "That smelly hole of a place."
* "And we pass this homeless guy and she sees him. I mean we all passed him, but she saw him." Really cool how he emphasized seeing the homeless guy.
* "He was one of those high octane homeless..."
* "She takes a knee. I mean I'm not even taking a knee now that's how little I give a shit."
Maybe I have a particular appreciation of this because... one of my dark secrets is that I try to write my own stand up comedy sometimes and would feel highly incomplete if I never did a successful bit at some point in my life. Anyway, I realize how hard it is to hit the head of the nail with the wording and make the jo |
4a1dd575-c8f2-435a-bcd1-e584d5fadd85 | trentmkelly/LessWrong-43k | LessWrong | Aspergers Survey Re-results
Followup to: Aspergers Poll results
Since my little survey about the degree to which the Less Wrong community has a preponderance of people with systematizing personality types, I've been collecting responses only from those people who considered taking the survey after looking at the original post, but didn't, in order to combat nonresponse bias.
82 people responded to the initial survey, and another 186 responded after the request for non-responders to respond. In the initial survey, 26% of responders scored 32+ (which is considered to be a "high" score, and out of a group of Cambridge mathematics students, 7 out of 11 who scored over 32 were said to fit the full diagnostic criteria for aspergers syndrome after being interviewed).
In the combined survey of 82 initial responders and 186 "second"-responders, this increased to 28%. In the original survey, 5% of respondents said they had already been diagnosed with aspergers syndrome, and in the combined survey this increased to 7.5%.
Overall, this indicates that response bias is probably not significantly skewing our picture of the LW audience, though, as always, it is possible that there is a more sophisticated bias at work and that these 268 people are not representative of LW.
|
4e344a48-f6e8-4f30-a5c1-343ced2d9bab | trentmkelly/LessWrong-43k | LessWrong | Instrumental Rationality 5: Interlude II
[Instrumental Rationality Sequence 5/7]
[This Interlude once again goes over two additional ideas that are separate from the well-researched stuff: There Is No Akrasia and Recovering from Failure. The first endorses a reductionist, specific view towards tackling akrasia, while the second is about having a policy of self-care when you inevitably fail at your endeavors.]
There Is No Akrasia:
[This essay is about how the term “akrasia” isn’t too useful. I argue against using any sort of general label for the feeling of “anti-wantiness”, i.e. when you don’t want to do something. Instead, I push for a reductionist approach to look at the problem.]
“Akrasia” is a term often used to mean “weakness of will”, aka the intention-action gap we covered in Scaling Up in Instrumental Rationality 4.2. It’s when you somehow “want” to do something, yet you still don’t actually do it.
I also think it's an idea that incurs potentially major costs when you hold it in your bag of mental models. I claim that:
1. Akrasia is often treated as a “thing” by people who learn about it, and this can lead to problems, even though akrasia is a sorta-coherent concept.
2. If we want to move forward and solve the problems that fall under the akrasia-umbrella, it’s better to Taboo the term akrasia altogether and instead employ a more reductionist approach that favors specificity.
First off, I do think that akrasia is a term that resonates with a lot of people. When I’ve described this concept to friends, they’ve all had varying degrees of reactions along the lines of “Aha! This term perfectly encapsulates something I feel!”
It does seem, then, that this concept of “want-want versus want” or “being unable to do what you ‘want’ to do” seems to point at a real group of things in the world, at least from a perception standpoint.
However, I think that this might have inadvertent problems.
Once people learn the term akrasia and what it represents, they can now pattern-match it to their own asso |
05a6e3a0-e83b-4294-b39c-81c88c5c2ffa | StampyAI/alignment-research-dataset/arbital | Arbital | Proof of Rice's theorem
Recall the formal statement of [Rice's theorem](https://arbital.com/p/5mv):
> We will use the notation $[n]$ for the $n$th [Turing machine](https://arbital.com/p/5pd) under some fixed [numbering system](https://arbital.com/p/description_number).
> Each such machine induces a [partial function](https://arbital.com/p/-3jy), which we will also write as $[n]$ where this is unambiguous due to context; then it makes sense to write $[n](m)$ for the value that machine $[n]$ outputs when it is run on input $m$.
> Let $A$ be a non-empty, proper %%note:That is, it is not the entire set.%% subset of $\{ \mathrm{Graph}(n) : n \in \mathbb{N} \}$, where $\mathrm{Graph}(n)$ is the [graph](https://arbital.com/p/graph_of_a_function) of the [partial function](https://arbital.com/p/-5p2) computed by $[n]$, the $n$th Turing machine.
> Then there is no Turing machine $[r]$ such that:
> - $[r](i)$ is $1$ if $\mathrm{Graph}(i) \in A$
> - $[r](i)$ is $0$ if $\mathrm{Graph}(i) \not \in A$.
We give a proof that is (very nearly) constructive: one which (if we could be bothered to work it all through) gives us an explicit example %%note:Well, very nearly; see the next note.%% of a [Turing machine](https://arbital.com/p/5pd) whose "am-I-in-$A$" nature cannot be determined by a Turing machine.
%%note:It's only "very nearly" constructive. It would be *actually* constructive if we knew in advance a specific example of a program whose function is in $A$, and a program whose function is in $B$. The proof here assumes the existence of a program of each type, but ironically the theorem itself guarantees that there is no fully-general way to *find* such programs.%%
We will present an intermediate lemma which does all the heavy lifting; this makes the actual reasoning rather unclear but very succinct, so we will also include an extensive worked example of what this lemma does for us.
# Fixed point theorem
The intermediate lemma is a certain fixed-point theorem.
> Let $h: \mathbb{N} \to \mathbb{N}$ be [total](https://arbital.com/p/total_function) computable: that is, it halts on every input.
> Then there is $n \in \mathbb{N}$ such that $\mathrm{Graph}(n) = \mathrm{Graph}(h(n))$. %%note:And, moreover, we can actually *find* such an $n$.%%
That is, the "underlying function" of $n$ - the partial function computed by $[https://arbital.com/p/n](https://arbital.com/p/n)$ - has the same output, at every point, as the function computed by $[https://arbital.com/p/h](https://arbital.com/p/h)$.
If we view $h$ as a way of manipulating a program (as specified by its [description number](https://arbital.com/p/-description_number)), then this fixed-point theorem states that we can find a program whose underlying function is not changed at all by $h$.
The proof of this lemma is quite simple once the magic steps have been discovered, but it is devilishly difficult to intuit, because it involves two rather strange and confusing recursions and some self-reference.
Recall the [$s_{mn}$ theorem](https://arbital.com/p/translation_lemma), which states that there is a total computable function $S$ of two variables $m, n$ such that for every $e \in \mathbb{N}$, we have $[e](m, n) = [S(e,m)](n)$: that is, there is a total computable way $S$ of [currying](https://arbital.com/p/-50p) computable functions.
(Strictly speaking, our Turing machines only take one argument. Therefore we should use a computable pairing scheme such as [Cantor's pairing function](https://arbital.com/p/cantor_pairing_function), so that actually $[e](m,n)$ should be interpreted as $[e](\mathrm{pair}(m, n))$.)
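As a concrete sketch of such a scheme, here is Cantor's pairing function and its inverse in Python; both directions are computable, which is all the proof needs:

    def pair(m, n):
        return (m + n) * (m + n + 1) // 2 + n

    def unpair(k):
        w = 0                                   # find the diagonal containing k
        while (w + 1) * (w + 2) // 2 <= k:
            w += 1
        n = k - w * (w + 1) // 2
        return w - n, n

    assert all(unpair(pair(m, n)) == (m, n) for m in range(50) for n in range(50))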
Then the function which takes the pair $(e, x)$ and outputs the value of $[ h(S(e,e)) ](x)$ is computable, so it has a description number $a$, say.
%%note:This is the first strange part: we are treating $e$ both as a description number, and as an input to $[e]$, when we consider $S(e,e)$.%%
Now we claim that $S(a, a)$ is the $n$ we seek. %%note:This is the second strange part, for the same reason as $S(e,e)$ was the first; but this one is even worse, because the definition of $a$ already involves a weird recursion and we've just added another one on top.%%
Indeed, for any $x$, $[n](x) = [S(a,a)](x)$ by definition of $n$; this is $[a](a, x)$ by the $s_{mn}$ theorem; this is $[h(S(a,a))](x)$ by definition of $[a]$; and that is $[h(n)](x)$ by definition of $n$.
Therefore $[n](x) = [h(n)](x)$, so we have found our fixed point.
%%hidden(Worked example):
Suppose our description numbering scheme is just "expand $n$ as a number in base $128$, and interpret the result as an [ASCII](https://en.wikipedia.org/wiki/ASCII) %%note:This is a standard, agreed-upon method of turning a number between $0$ and $127$ into a character.%% string; then interpret that string as [Python](https://en.wikipedia.org/wiki/Python_(programming_language)) code".
Then our function $h$, whatever it may be, can be viewed just as transforming Python code.
Suppose $h$ does nothing more than insert the following line of code as the second line of its input:
x = 0
So, for instance, it takes the string
x = 1
print(x)
and returns
x = 1
x = 0
print(x)
thereby changing the function computed from "return the constant $1$" to "return the constant $0$", in this case.
Note that many other functions will not change at all: for example, those which don't contain a variable $x$ in the first place will be unchanged, because all the modification does is add in an initialisation of a variable which will never subsequently be used.
The fixed-point theorem guarantees that there is indeed a Python program which will not change at all under this modification (though in this case it's very obvious).
In fact the theorem *constructs* such a program; can we work out what it is?
First of all, $S(m, n)$ can be implemented as follows.
We will take our Python code to be written so that its input is given in the variable `r1`, so $[e](5)$ is simply the Python code represented by $e$ but where the code-variable `r1` is initialised to $5$ first; that is, it can be found by prepending the line `r1 = 5` to the code represented by $e$.
Then we will assume that Python comes with a function `eval` (corresponding to $S$) which takes as its input a string %%note:The string is standing in place of $m$, but we have just skipped the intermediate step of "unpack the integer into a string" and gone straight to assuming it is a string.%% and another argument with which `eval` initialises the variable `r1` before running the string as a Python program in a separate instance of Python:
eval("print(r1)", 5) # does nothing other than print the number 5
eval("print(y)", 5) # throws an error because `y` is not defined when it comes to printing it
eval("print(6)", 5) # prints 6, ignoring the fact that the variable `r1` is equal to `5` in the sub-instance
Remember, our proof of the fixed point theorem says that the program we want has code $S(a, a)$, where $a$ takes a pair $(e, x)$ as input, and outputs $[h(S(e,e))](x)$.
What is $a$ specifically here?
Well, on the one hand we're viewing it as a string of code (because it comes as the first argument to $S$), and on the other we're viewing it as an integer (because it also comes as the second argument to $S$).
As code, `a` is the following string, where `h` is to be replaced by whatever we've already decided $h$ is:
eval("r1 = e; h(eval(r1, str_as_int(r1)))", x)
We are assuming the existence of a function `str_as_int` which takes an ASCII string and returns the integer whose places in base 128 are the ASCII for each character of the string in turn.
For example, we have $h$ inserting the line `x = 0` as the second line, so our `a` is:
eval("r1 = e; x = 0; eval(r1, str_as_int(r1))", x)
As a number, `a` is just the ASCII for this, interpreted in base 128 (i.e. a certain number which in this case happens to have 106 digits, which is why we don't give it here).
The claim of the fixed-point theorem, then, is that the following program is unchanged by $h$:
eval("eval(\"r1 = e; x = 0; eval(r1, str_as_int(r1))\", x)", str_to_int("eval(\"r1 = e; x = 0; eval(r1, str_as_int(r1))\", x)"))
You may recognise this as a [quining](https://arbital.com/p/322) construction.
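For comparison, here is a genuine minimal Python quine, which turns on the same trick of using one string as both code and data:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Running it prints its own source exactly, just as $S(a,a)$ above feeds a description of the program into the program itself.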
%%
# Deducing Rice's theorem from the fixed point theorem
Finally, Rice's theorem follows quickly: suppose we could decide in general whether $\mathrm{Graph}(n) \in A$ or not, and label by $\iota$ the computable function which decides this (that is, whose value is $1$ if $\mathrm{Graph}(n) \in A$, and $0$ otherwise).
Since $A$ is nonempty and proper, there are natural numbers $a$ and $b$ such that $\mathrm{Graph}(a) \in A$ but $\mathrm{Graph}(b) \not \in A$.
Define the computable function $g$ which takes $n$ and outputs $a$ if $\iota(n) = 0$, and $b$ otherwise.
(That is, it flips its input: if its input had the property of $A$, the function $g$ outputs $b$ whose graph is not in $A$, and vice versa.
Informally, it is the program-transformer that reads in a program, determines whether the program computes a function in $A$ or not, and transforms the program into a specific canonical example of something which has the *opposite* $A$-ness status.)
By the fixed-point theorem, we can find $n$ such that $\mathrm{Graph}(n) = \mathrm{Graph}(g(n))$.
But now we can ask whether $\mathrm{Graph}(n)$ is in $A$ (and therefore whether $\mathrm{Graph}(g(n))$ is in $A$).
- If it is in $A$, then $g(n) = b$ and so $\mathrm{Graph}(g(n)) = \mathrm{Graph}(b)$ which is not in $A$.
- If it is not in $A$, then $g(n) = a$ and so $\mathrm{Graph}(g(n)) = \mathrm{Graph}(a)$ is in $A$.
We have obtained [contradictions](https://arbital.com/p/46z) in both cases (namely that $\mathrm{Graph}(g(n))$ is both in $A$ and not in $A$), so it must be the case that $\iota$ does not exist after all. |
7fb6c33e-2633-4cdd-8742-49cbceb9811b | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Measure of complexity allowed by the laws of the universe and relative theory?
A big question that determines a lot about what risks from AGI/ASI may look like has to do with the kinds of things that our universe's laws allow to exist. There is an intuitive sense in which these laws, involving certain symmetries as well as the inherent smoothing out caused by statistics over large ensembles and thus thermodynamics, etc., allow only certain kinds of things to exist and work reliably. For example, we know "rocket that travels to the Moon" is definitely possible. "Gene therapy that allows a human to live and be youthful until the age of 300" or "superintelligent AGI" are *probably* possible, though we don't know how hard. "Odourless ambient temperature and pressure gas that kills everyone who breathes it if and only if their name is Mark with 100% accuracy" probably is not. Are there known attempts at systematising this issue using algorithmic complexity, placing theoretical and computational bounds, and so on and so forth?
f384b636-a15a-4be7-9de8-efebed602f3b | trentmkelly/LessWrong-43k | LessWrong | Rationality Compendium: Principle 2 - You are implemented on a human brain
Irrationality is ingrained in our humanity. It is fundamental to who we are. This is because being human means that you are implemented on kludgy and limited wetware (a human brain). A consequence of this is that biases ↓ and irrational thinking are not mistakes, per se; they are not misfirings or accidental activations of neurons. They are the default mode of operation for wetware that has been optimized for purposes other than truth maximization.
If you want something to blame for the fact that you are innately irrational, then you can blame evolution ↓. Evolution tends to not to produce optimal organisms, but instead produces ones that are kludgy ↓, limited and optimized for criteria relating to ancestral environments rather than for criteria relating to optimal thought.
A kludge is a clumsy or inelegant, yet surprisingly effective, solution to a problem. The human brain is an example of a kludge. It contains many distinct substructures dating from widely separated periods of evolutionary development ↓. An example of this is the two kinds of processes in human cognition, where one is fast (type 1) and the other is slow (type 2) ↓.
There are many other characteristics of the brain that induce irrationality. The main ones are that:
* The brain is innately limited in its computational abilities and so it must use heuristics ↓, which are mental shortcuts that ease the cognitive load of making a decision.
* The brain has a tendency to blindly use salient or pre-existing responses rather than developing new answers or thoroughly checking pre-existing solutions ↓.
* The brain does not inherently value truth. One of the main reasons for this is that many of the biases can actually be adaptive. An example of an adaptive bias is the sexual overperception bias ↓ in men. From a truth-maximization perspective young men who assume that all women want them are showing severe social-cognitive inaccuracies, judgment biases, and probably narcissistic pers |
f3e30571-0571-46a6-a698-1262f79ecd08 | trentmkelly/LessWrong-43k | LessWrong | Resources to see how people think/approach mathematics and problem-solving
I did lots of programming/cybersecurity/other similar flavor things in the past. I'm currently in college and am focusing more on developing my mathematical skillset. I've done linear algebra/multi/group theory, and am currently taking a more advanced class on algebra and rings/fields. In my classes, we do weekly problem-sets and then lectures where we see proofs and go over textbook content. What I feel like I'm lacking in my mathematical learning is exposure to how the experts are thinking about the things we're seeing. What are they tracking or considering? How do they approach the problems and what do they consider?
I usually try to understand what motivates new mathematical objects or content, but professors don't focus enough on this. How can I see more of the gears-level model of why things are the way they are, and especially of how different problems are approached? Do you have recommendations for links/books/articles where I can find more of this kind of perspective?
07e7537f-cbf5-4d2d-b395-3ae75d603457 | trentmkelly/LessWrong-43k | LessWrong | Guided Consumption Theory: A Virtuous Dance between Altruistic Agents, Economic Discriminators, and Opportunistic Helpers
Reposting from the EA Forum
Previous posts have set forth the concept of Guided Consumption and provided research supporting the viability of businesses with charities in the equity position (“Guiding Producers”). In this post, I evaluate the agents involved in Guided Consumption, their motives, and why creating the infrastructure enabling Guided Consumption is an extremely high value project.
TLDR: Creating the infrastructure and public awareness enabling Guided Consumption – consumers choosing to direct their purchases through charity-owned businesses – is extremely high value because it accords with the motivations and cost-tolerances of the sets of agents involved. The general public, motivated to do good in low, no, or negative cost situations, is empowered to do good by shifting its purchases: economic discrimination. Other economic actors interacting with Guiding Producers are motivated - either by their own good will or good publicity - to treat Guiding Producers more favorably than normal companies. Altruistic Agents, motivated to maximize impact attained by their resource expenditure, are empowered to do so by creating the infrastructure and public awareness of Guided Consumption. This project is tractable due to the malleability of public awareness and the discoverability of the best contexts for Guiding Producers.
The Main Two Agent Groups (and a Third Minor, but Significant Group)
Altruistic Agents as Builders of the Infrastructure (“AAs”): This set of agents consists of individuals and organizations with a primary motivation of maximizing positive impact. AAs are fine with bearing heavy costs, if those costs correspond with higher utility. Examples of AAs would be philanthropists or those who forgo career paths with higher salaries in favor of career paths that enable the highest expected impact (80,000 Hours is an organization advising individuals on how to plan for a career with the highest expected impact for one’s capabilities). Sometimes an AA |
c228f2a5-2da7-4ef6-a385-23b6e0a50209 | trentmkelly/LessWrong-43k | LessWrong | Optimization Regularization through Time Penalty
For an overview of the problem of Optimization Regularization, or Mild Optimization, I refer to MIRI's paper Alignment for Advanced Machine Learning Systems, section 2.7
My solution
Start with a bounded utility function, $U(T)$, that is evaluated based on the state of the world at a single time $T$ (ignoring for now that simultaneity is ill-defined in relativity). Examples:
* $U_{\mathrm{maximize}}(T) = \tanh(\text{number of paper-clips at time } T)$
* $U_{\mathrm{satisfier}}(T) = P(\text{number of paper-clips} > 1000 \text{ at time } T)$
* $U_{\mathrm{human\ values}}(T) =$ how much a human at time $0$ (at the start of the optimization process), if shown the world state at time $T$, would like it (mapped to the interval $[0,1]$).
Then maximize $U(T) - \lambda T$, where $\lambda > 0$ is a regularization parameter chosen by the AI engineer, and $T$ is a free variable chosen by the AI.
Time is measured from the start of the optimization process. Because the utility is evaluated based on the world at time T, this value is the amount of time the AI spends on the task. It is up to the AI to decide how much time it wants. Choosing T should be seen as part of choosing the policy, or be included in the action space.
Because the utility function is bounded, the optimization process will eventually hit diminishing returns, and will then choose to terminate, because of the time penalty.
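As a toy illustration, here is a minimal sketch of that decision in Python (the saturating utility curve and all constants are invented for illustration, not anything from a real agent):

```python
import numpy as np

# Hypothetical bounded utility: returns diminish with optimization time.
def U(T):
    return np.tanh(0.1 * T)

lam = 0.01  # regularization parameter lambda, chosen by the AI engineer

# T is a free variable: the agent picks the horizon maximizing the
# penalized utility U(T) - lam*T, then terminates.
horizons = np.arange(500)
T_star = horizons[np.argmax(U(horizons) - lam * horizons)]
print(T_star)  # 18 here: beyond this, extra time costs more than it gains
```

A larger λ makes the agent stop sooner; as λ approaches 0 we recover unbounded optimization.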
Why time penalty?
Unbounded optimization pressure is dangerous. Without any form of regularization, we need to get the alignment exactly right. However, with regularization we merely need to get it almost exactly right, which I believe is much easier.
However, impact regularization has turned out to be very hard. We don't want the impact measure to depend on the AI's understanding of human values, because that will not provide extra safety. But a value neutral impact measure is almost impossible, because the world has too many degrees of freedom. However, time is both value neutral and has only a single degree of freedom.
Why not use a fixed finite time horizon?
The reason T is a variable |
e6a9d02f-a22f-450b-acea-92e375c4d090 | trentmkelly/LessWrong-43k | LessWrong | Upcoming heatwave: advice
There's a heatwave coming (or already arrived) in the UK and western Europe. Many of these places are not equipped for dealing with high temperatures and have large at-risk populations* - not simply those with preexisting health conditions, but those living in accommodation grossly unsuited for high temperatures and anyone inexperienced with high temperatures who isn't properly aware of the dangers and precautions they need to take.
*(https://www.nhs.uk/live-well/seasonal-health/heatwave-how-to-cope-in-hot-weather/ for more info on who is at risk, and general advice)
Heatwaves are not just 'nice weather', not in the anthropocene, they are life threatening. People will die. I hope by writing this to nudge EAs and their local communities toward safety.
TL;DR: Sleep and hydration* are the two pillars of survival in hot weather. If you are not already, put maximum effort into getting a good night's sleep (advice below).
*(don't neglect electrolytes; consider this an excuse to move Gatorade to the 'healthy (in moderation)' column for a few days)
Sleep
Sleep is one of the most important factors in health. Most people do not get enough sleep and do not practice good sleep hygiene - they are already somewhat sleep deprived or at high risk of sleep deprivation.
Too much heat and humidity are massively deleterious to sleep quality. Further, sleep deprivation puts you at greater risk of heat-related illness.
What can you do?
Sleep hygiene basics:
Establish and maintain a routine - stop eating several hours before bedtime, turn off the lights at least 2 hours before bedtime (and use programs like f.lux or redshift on your screens to reduce blue light), go to bed at the same time each night*, get up at the same time each morning, eat breakfast at the same time each morning.
*Many people find using alarms/reminders to establish a set bedtime more effective than using them to get up in the morning.
Create a better sleep environment - as dark as possible (Ikea sell 'adhesive' |
03536bab-1389-4b95-b8fb-c362a78bc9cf | trentmkelly/LessWrong-43k | LessWrong | FAI FAQ draft: What is Friendly AI?
I invite your feedback on this snippet from the forthcoming Friendly AI FAQ. This one is an answer to the question "What is Friendly AI?"
_____
A Friendly AI (FAI) is an artificial intelligence that benefits humanity. More specifically, Friendly AI may refer to:
* a very powerful and general AI that acts autonomously in the world to benefit humanity.
* an AI that continues to benefit humanity during and after an intelligence explosion.
* a research program concerned with the production of such an AI.
* Singularity Institute's approach (Yudkowsky 2001, 2004) to designing such an AI:
* Goals should be defined by the Coherent Extrapolated Volition of humanity.
* Goals should be reliably preserved during recursive self-improvement.
* Design should be mathematically rigorous and proof-apt.
Friendly AI is a more difficult project than often supposed. As explored in other sections, commonly suggested solutions for Friendly AI are likely to fail because of two features possessed by any superintelligence:
1. Superpower: a superintelligent machine will have unprecedented powers to reshape reality, and therefore will achieve its goals with highly efficient methods that confound human expectations and desires.
2. Literalness: a superintelligent machine will make decisions using the mechanisms it is designed with, not the hopes its designers had in mind when they programmed those mechanisms. It will act only on precise specifications of rules and values, and will do so in ways that need not respect the complexity and subtlety (Kringelbach & Berridge 2009; Schroeder 2004; Glimcher 2010) of what humans value. A demand like "maximize human happiness" sounds simple to us because it contains few words, but philosophers and scientists have failed for centuries to explain exactly what this means, and certainly have not translated it into a form sufficiently rigorous for AI programmers to use. |
882dbf0e-97d8-4545-9f6e-1602bb3f0a23 | trentmkelly/LessWrong-43k | LessWrong | We ran a reading group on The Scout Mindset
Cross-posted from the EA Forum.
What did we do?
As organizers of the EA group at UC Irvine (UCI), we ran a reading group on Julia Galef’s book The Scout Mindset during the final 5 weeks of the academic year (2022-2023). The group had a total of 8 participants with an average of 5 per session. Compared to attendance rates earlier in the academic year, this is about average. The group was diverse in both gender and ethnicity. All participants committed to reading 1 section (i.e. 3 chapters) per week prior to our 1-hour in-person discussions. As organizers, we prepared questions to help guide the discussions. Straight after, we went out to dinner together to continue our conversations about The Scout Mindset, Effective Altruism and any personal updates.
Why did we do it?
This academic year we experimented with different program structures. In the first 10 weeks, we ran the EA Introductory Program; in the second 10 weeks, we ran our own version of the EA In-Depth Program. In the last 10 weeks, we spent the first half running weekly workshops and the second half running the reading group. We decided to run a reading group because we thought it would provide structured weekly content and that it would be fairly easy to organize because we needed only to read the relevant chapters, make notes and devise questions (we had already booked a classroom on a weekly basis). We chose The Scout Mindset in particular because many of our members had expressed an interest in reading the book and we too had been intending to read it ourselves. Running this reading group kept us motivated to read the book from beginning to end by holding us accountable.
What went well?
We successfully completed the book in the intended time without losing participants (except for one who went to Boston). Our prepared questions were useful in guiding our discussions without constraining them. They also helped spark new discussions and allowed us to focus on the present discussion without f |
3c5bd814-f96f-4a41-8ded-aa2697091e5f | trentmkelly/LessWrong-43k | LessWrong | Reverse lotteries with friends
A reverse lottery pays out a little bit each time you play but sometimes leads to horrific disaster. For instance, going without a seatbelt. You enjoy a momentary convenience every time you drive, but occasionally you die. Relatedly, wearing your seatbelt can be seen as a lottery: you pay a tiny inconvenience each time for an occasional huge win.
Like all lotteries, whether it is good to play depends on the payoffs, and one might reasonably decide to play some lotteries and reverse lotteries and not others. However, as Scott points out, it can be tempting to play reverse lotteries too much. I think this happens in particular from learning what is good by experience. If you play a reverse lottery once, probably you get a reward, and want to do it again. So you do, and get another reward, and it starts to seem like a pretty good idea. You get a lot of visceral feedback about the good aspect, and none about the bad. At least for a while. This seems like a real problem, and a neat way of thinking about it.
So presumably normal lotteries should be the opposite. You play them a few times, and it is a bit bad each time. So you quickly give up and never see the glorious reward. This doesn’t seem true of the literal lotteries in which people gamble for fun. At least plenty of people are not put off for a very long time, in spite of never winning. But maybe those are a weird instance of the abstract lottery class—for instance, because the prospect of winning a lot of money is made very salient. You might imagine that the negative lotteries would be very off-putting in the analogous case: if every time you don’t wear your seatbelt, you hear about another person dying from that very choice, you wouldn’t be so tempted by the no-seatbelt reverse lottery.
I’m still confused about how individuals feel about lotteries, because I’m failing at thinking of clear examples where I know how people behave and they don’t have a really salient message about how the thing might go well. Poss |
fd1a3829-a575-4f1f-b882-db9072e2b214 | trentmkelly/LessWrong-43k | LessWrong | Don't Let Personal Domains Expire
It's common to see advice along these lines:
> Don't build your stuff in someone else's sandbox. Get your own domain and point it to whatever service you choose to use. Your email address should be @yourdomain, where you have full control and no one can lock you out. Don't fall into the trap of digital sharecropping.
There are complicated tradeoffs here and different choices will make sense for different people, but it's close to what I do personally. My writing and projects are hosted on my own domain [1] and my email is jeff@jefftk.com.
On the other hand, I don't think this is something to do lightly. Say you register you.example and start going by you@you.example. A few years later you decide this is too much hassle, switch to using you@fastmail.com or you@gmail.com, and let you.example expire. Someone else can register it, send email legitimately as you@you.example, and receive anything still addressed to you there. If there is anywhere you forgot to remove your former email from your profile, now you are open to being impersonated.
This problem isn't unique to personal domains, but it's much more likely: the major email services don't make abandoned email addresses open to reregistration, to avoid exactly this issue.
If you're considering registering a domain to use as your online identity, make sure you're willing to take on the cost and hassle of keeping the domain registered indefinitely.
[1] I do cross-post to Facebook, LessWrong, and occasionally other places. I also rely on them to host discussions on my posts, though I attempt to archive those discussions back on my site. |
c015e5c9-cc06-4bf8-b4e5-ae3b483699ce | trentmkelly/LessWrong-43k | LessWrong | Hanging Out My Speaker's Shingle
I was recently invited to give a talk on heuristics and biases at Jane Street Capital, one of the top proprietary trading firms ("proprietary" = they trade their own money). When I got back home, I realized that (a) I'd successfully managed to work through the trip, and (b) it'd been very pleasant mentally, a nice change of pace. (One of these days I have to blog about what I discovered at Jane Street - it turns out they've got their own rationalist subculture going.)
So I've decided to hang out my shingle as a speaker at financial companies.
You may be thinking: "Perhaps, Eliezer, this is not the best of times."
Well... I do have hopes that, among the firms interested in having me as a speaker, a higher-than-usual percentage will have come out of the crash okay. I checked recently to see if this were the case for Jane Street Capital, and it was.
But more importantly - your competitors are learning the secrets of rationality! Are you?
Or maybe I should frame it as: "Not doing too well this year? Drop the expensive big-name speakers. I can give a fascinating and useful talk and I won't charge you as much."
And just to offer a bit of a carrot - if I can monetize by speaking, I'm much less likely to try charging for access to my future writings. No promises, but something to keep in mind. So do recommend me to your friends as well.
I expect that, as I speak, the marginal value of money to my work will go down; the more I speak, the more my price will go up. If my (future) popular book on rationality becomes a hit, I'll upgrade to big-name fees. And later in my life, if all goes as planned, I'll be just plain not available.
So I'm offering you, my treasured readers, a chance to get me early. I would suggest referencing this page when requesting me as a speaker. Emails will be answered in the order they arrive. |
84fc54a5-7f16-425b-9675-01af47dad56f | trentmkelly/LessWrong-43k | LessWrong | Announcing a google group for technical discussion of FAI
I'm pleased to announce friendly-artificial-intelligence, a google group intended for research-level discussion of problems in FAI and AGI, in particular for discussions that are highly technical and/or math intensive.
Some examples of possible discussion topics: naturalized induction, decision theory, tiling agents / Loebian obstacle, logical uncertainty...
I invite everyone who wants to take part in FAI research to participate in the group. This obviously includes people affiliated with MIRI, FHI and CSER, people who attend MIRI workshops and participants of the Southern California FAI workshop.
Please, come in and share your discoveries, ideas, thoughts, questions et cetera. See you there! |
fd07121d-26a0-4c52-becd-967e069cc8c1 | trentmkelly/LessWrong-43k | LessWrong | Imperfect Voting Systems
Stalin once (supposedly) said that “He who casts the votes determines nothing; he who counts the votes determines everything.” But he was being insufficiently cynical. He who chooses the voting system may determine just as much as the other two players.
The Art of Strategy gives some good examples of this principle: here's an adaptation of one of them. Three managers are debating whether to give a Distinguished Employee Award to a certain worker. If the worker gets the award, she must receive one of two prizes: a $50 gift certificate, or a $10,000 bonus.
One manager loves the employee and wants her to get the $10,000; if she can't get the $10,000, she should at least get a gift certificate. A second manager acknowledges her contribution but is mostly driven by cost-cutting; she'd be happiest giving her the gift certificate, but would rather refuse to recognize her entirely than lose $10,000. And the third manager dislikes her and doesn't want to recognize her at all - but she also doesn't want the company to gain a reputation for stinginess, so if she gets recognized she'd rather give her the $10,000 than be so pathetic as to give her the cheap certificate.
The managers arrange a meeting to determine the employee's fate. If the agenda tells them to vote for or against giving her an award, and then proceed to determine the prize afterwards if she wins, then things will not go well for the employee. Why not? Because the managers reason as follows: if she gets the award, Manager 1 and Manager 3 will vote for the $10,000 prize, and Manager 2 will vote for the certificate. Therefore, voting for her to get the award is practically the same as voting for her to get the $10,000 prize. That means Manager 1, who wants her to get the prize, will vote yes on the award, but Managers 2 and 3, who both prefer no award to the $10,000, will strategically vote not to give her the award. Result: she doesn't get recognized for her distinguished service.
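The backward-induction logic here is mechanical enough to script. A minimal sketch (the preference rankings are transcribed from the description above; everything else is illustrative):

```python
# Each manager ranks the three outcomes; higher number = more preferred.
prefs = {
    "manager1": {"bonus": 3, "cert": 2, "nothing": 1},  # loves the employee
    "manager2": {"cert": 3, "nothing": 2, "bonus": 1},  # cost-cutter
    "manager3": {"nothing": 3, "bonus": 2, "cert": 1},  # dislikes her, hates stinginess
}

def majority_vote(options, prefs):
    """Everyone votes for their favorite among the options; plurality wins."""
    votes = [max(options, key=ranking.get) for ranking in prefs.values()]
    return max(set(votes), key=votes.count)

# Stage 2 (hypothetical): if the award passes, the prize vote gives...
prize = majority_vote(["bonus", "cert"], prefs)      # -> "bonus"

# Stage 1: a "yes" on the award is effectively a vote for that prize.
outcome = majority_vote([prize, "nothing"], prefs)   # -> "nothing"
print(prize, outcome)
```

Run in this order, the vote reproduces the story: the prize subgame resolves to the $10,000 bonus, so Managers 2 and 3 strategically vote down the award.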
But suppose the employee in |
8573227a-8b19-439d-9685-b83b434a8dbe | trentmkelly/LessWrong-43k | LessWrong | [AN #119]: AI safety when agents are shaped by environments, not rewards
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).
HIGHLIGHTS
Shaping Safer Goals (Richard Ngo) (summarized by Nicholas): Much of safety research focuses on a single agent that is directly incentivized by a loss/reward function to take particular actions. This sequence instead considers safety in the case of multi-agent systems interacting in complex environments. In this situation, even simple reward functions can yield complex and highly intelligent behaviors that are only indirectly related. For example, evolution led to humans who can learn to play chess, despite the fact that the ancestral environment did not contain chess games. In these situations, the problem is not how to construct an aligned reward function, the problem is how to shape the experience that the agent gets at training time such that the final agent policy optimizes for the goals that we want. This sequence lays out some considerations and research directions for safety in such situations.
One approach is to teach agents the generalizable skill of obedience. To accomplish this, one could design the environment to incentivize specialization. For instance, if an agent A is more powerful than agent B, but can see less of the environment than B, A might be incentivized to obey B’s instructions if they share a goal. Similarly we can increase the ease and value of coordination through enabling access to a shared permanent record or designing tasks that require large-scale coordination.
A second approach is to move agents to simpler and safer training regimes as they develop more intelligence. The key assumption here is that we may require complex regimes such as competitive multi-agent environments to jumpstart intelligent behavior, but may be abl |
0541d02f-fc2f-41a9-973b-bd504950b15c | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post2964
Can we get impact measurement right? Does there exist One Equation To Rule Them All? I think there’s a decent chance there isn’t a simple airtight way to implement AUP which lines up with AUP_conceptual, mostly because it’s just incredibly difficult in general to perfectly specify the reward function. Reasons why it might be feasible: we’re trying to get the agent to do the goal without it becoming more able to do the goal, which is conceptually simple and natural; since we’ve been able to handle previous problems with AUP with clever design choice modifications, it’s plausible we can do the same for all future problems; since there are a lot of ways to measure power due to instrumental convergence, that increases the chance at least one of them will work; intuitively, this sounds like the kind of thing which could work (if you told me “you can build superintelligent agents which don’t try to seek power by penalizing them for becoming more able to achieve their own goal”, I wouldn’t exactly die of shock). Even so, I am (perhaps surprisingly) not that excited about actually using impact measures to restrain advanced AI systems. Let’s review some concerns I provided in Reasons for Pessimism about Impact of Impact Measures: Competitive and social pressures incentivize people to cut corners on safety measures, especially those which add overhead. Especially so for training time, assuming the designers slowly increase aggressiveness until they get a reasonable policy. In a world where we know how to build powerful AI but not how to align it (which is actually probably the scenario in which impact measures do the most work), we play a very unfavorable game while we use low-impact agents to somehow transition to a stable, good future: the first person to set the aggressiveness too high, or to discard the impact measure entirely, ends the game. In a What Failure Looks Like-esque scenario, it isn't clear how impact-limiting any single agent helps prevent the world from "gradually drifting off the rails". You might therefore wonder why I’m working on impact measurement.
Deconfusion
Within Matthew Barnett’s breakdown of how impact measures could help with alignment, I'm most excited about impact measure research as deconfusion. Nate Soares explains: By deconfusion, I mean something like “making it so that you can think about a given topic without continuously accidentally spouting nonsense.” To give a concrete example, my thoughts about infinity as a 10-year-old were made of rearranged confusion rather than of anything coherent, as were the thoughts of even the best mathematicians from 1700. “How can 8 plus infinity still be infinity? What happens if we subtract infinity from both sides of the equation?” But my thoughts about infinity as a 20-year-old were not similarly confused, because, by then, I’d been exposed to the more coherent concepts that later mathematicians labored to produce. I wasn’t as smart or as good of a mathematician as Georg Cantor or the best mathematicians from 1700; but deconfusion can be transferred between people; and this transfer can spread the ability to think actually coherent thoughts. In 1998, conversations about AI risk and technological singularity scenarios often went in circles in a funny sort of way. People who are serious thinkers about the topic today, including my colleagues Eliezer and Anna, said things that today sound confused. 
(When I say “things that sound confused,” I have in mind things like “isn’t intelligence an incoherent concept,” “but the economy’s already superintelligent,” “if a superhuman AI is smart enough that it could kill us, it’ll also be smart enough to see that that isn’t what the good thing to do is, so we’ll be fine,” “we’re Turing-complete, so it’s impossible to have something dangerously smarter than us, because Turing-complete computations can emulate anything,” and “anyhow, we could just unplug it.”) Today, these conversations are different. In between, folks worked to make themselves and others less fundamentally confused about these topics—so that today, a 14-year-old who wants to skip to the end of all that incoherence can just pick up a copy of Nick Bostrom’s Superintelligence. Similarly, suppose you’re considering the unimportant and trivial question of whether seeking power is convergently instrumental, which we can now crisply state as "do most reward functions induce optimal policies which take over the planet (more formally, which visit states with high POWER)?". You’re a bit confused if you argue in the negative by saying “you’re anthropomorphizing; chimpanzees don’t try to do that” (chimpanzees aren’t optimal) or “the set of reward functions which does this has measure 0, so we’ll be fine” (for any reachable state, there exists a positive measure set of reward functions for which visiting it is optimal). You’re a bit confused if you argue in the affirmative by saying “unintelligent animals fail to gain resources and die; intelligent animals gain resources and thrive. Therefore, since we are talking about really intelligent agents, of course they’ll gain resources and avoid correction.” (animals aren’t optimal, and evolutionary selection pressures narrow down the space of possible “goals” they could be effectively optimizing). After reading this paper on the formal roots of instrumental convergence, instead of arguing about whether chimpanzees are representative of power-seeking behavior, we can just discuss how, under an agreed-upon reward function distribution, optimal action is likely to flow through the future of our world. We can think about to what extent the paper's implications apply to more realistic reward function distributions (which don't identically distribute reward over states). [1] Since we’re less confused, our discourse doesn’t have to be crazy. But also since we’re less confused, the privacy of our own minds doesn’t have to be crazy. It's not that I think that any single fact or insight or theorem downstream of my work on AUP is totally obviously necessary to solve AI alignment. But it sure seems good that we can mechanistically understand instrumental convergence and power, know what “impact” means instead of thinking it’s mostly about physical change to the world, think about how agents affect each other, and conjecture why goal-directedness seems to lead to doom by default. [2] Attempting to iron out flaws from our current-best AUP equation makes one intimately familiar with how and why power-seeking incentives can sneak in even when you’re trying to keep them out in the conceptually correct way. This point is harder for me to articulate, but I think there’s something vaguely important in understanding how this works. Formalizing instrumental convergence also highlighted a significant hole in our theoretical understanding of the main formalism of reinforcement learning. 
And if you told me two years ago that you could possibly solve side-effect avoidance in the short term with one simple trick (“just preserve your ability to optimize a single random reward function, lol”), I’d have thought you were nuts. Clearly, there’s something wrong with our models of reinforcement learning environments if these results are so surprising. In my opinion, research on AUP has yielded an unusually high rate of deconfusion and insights, probably because we’re thinking about what it means for the agent to interact with us.
When combined with our empirical knowledge of the difficulty of reward function specification, you might begin to suspect that there are lots of ways the agent might be incentivized to gain control, many openings through which power-seeking incentives can permeate – and your reward function would have to penalize all of these! If you were initially skeptical, this might make you think that power-seeking behavior may be more difficult to avoid than you initially thought. ↩︎
If we collectively think more and end up agreeing that AUP_conceptual solves impact measurement, it would be interesting that you could solve such a complex, messy-looking problem in such a simple way. If, however, CCC ends up being false, I think that would also be a new and interesting fact not currently predicted by our models of alignment failure modes. ↩︎ |
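To make the “one simple trick” above concrete, here is a minimal sketch of an AUP-style penalized reward, assuming an action-value estimate Q_aux for a single auxiliary reward function is already available (the names and scaling are illustrative, not a faithful reproduction of any paper’s exact equation):

```python
def aup_reward(R, Q_aux, state, action, noop, lam=0.1):
    """Primary reward minus a penalty on change in the agent's power:
    how much the action shifts its ability to optimize the auxiliary
    goal, relative to doing nothing."""
    penalty = abs(Q_aux(state, action) - Q_aux(state, noop))
    return R(state, action) - lam * penalty
```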
5c6118b9-dfbd-461c-9493-a35cf63a263c | trentmkelly/LessWrong-43k | LessWrong | The Second Circle
Previously: The First Circle
Epistemic Status: One additional level down
The second Circle was at Solarium in New York City. Jacob, of the New York rationalist group, had been getting into Circling, and decided to lead us in one to show us what it was all about.
He explained the five rules. I remember the gist – to use ‘I’ statements, talk about physical sensations in detail, to listen to other people for real, and so on – but not the exact wording he used. I do remember (mostly) the wording of rule five. Rule five was memorable. It was:
5. Everything is about furthering connection. If it would further connection, do it. If not, don’t do it.
The final rule. We’re here to win, damn it. Winning here means connection. So we give you guidelines, but don’t be a slave to them. Once you’ve faked it until you’ve made it, when the situation calls for it, throw the rules out the window.
There are two kinds of rule sets. Those that contain the final rule, and those that don’t. Games and not games.
It is very bad to include that rule where it does not belong. And also very bad to not include it, where it does belong.
We get about twenty people. Good turnout. Circle begins. Everyone is quiet.
The topic of the meetup is… circling. Circling, it seems, is about circling. We’re explicitly supposed not to talk about anything. Or try to accomplish anything, other than connect.
The art must have an end other than itself or it collapses into infinite recursion.
The infinite part takes a while. First you just go meta.
Go meta? Don’t mind if we do! Rationalists love meta.
Here we all sensed we weren’t supposed to go meta. But that meant our object level thoughts were meta thoughts. No way out.
So what talking there was, kept to the rules, but went meta.
We expressed our worry that we weren’t supposed to go meta. Which meant we had gone meta-meta. Which was even worse!
Quick! Don’t think about that!
If you are pondering what I am pondering, and I am pondering what you’ |
5a27b0de-4ebb-420b-b05e-a75d2d39029a | StampyAI/alignment-research-dataset/blogs | Blogs | outer alignment: politics & philosophy
outer alignment: politics & philosophy
--------------------------------------
[inner alignment](https://www.lesswrong.com/tag/inner-alignment) is "just" a hard engineering problem; [outer alignment](https://www.lesswrong.com/tag/outer-alignment) is the work of philosophy and politics and values which our species has been investigating and debating for millennia.
are human values the same for everyone, or do they differ?
should we implement the values held by us, us now, everyone now, everyone ever, or everyone possible?
would some philosophical/political perspectives constitute [suffering risks](https://en.wikipedia.org/wiki/S-risk)? for example, if many people on earth want to be correct, and if they also believe there is a hell where some people suffer forever, does that mean satisfying their values entails creating an at least moderately-sized hell, the inhabitants of which in some sense "value" suffering forever? *is that okay?*
if one person wants to go have gay sex, but ten christians want *nobody anywhere* to have gay sex, does [self-determination](core-vals-exist-selfdet.html) trump naïve utilitarian value satisfaction?
or should we create one giant super-consensus society where we all value being [boringly blissful](https://twitter.com/Merryweatherey/status/1185636106257211392), and forego all diversity, such that our values are easily implemented and non-conflicting; do we desire harmony above diversity?
if we value diversity, how much diversity should we instantiate; what is the threshold of "evilness" at which a culture should not be able to exist?
how do we even reason about [existential self-determination](genuineness-existselfdet-satisfaction-pick2.html)?
what about [suffering in fundamental physics](https://reducing-suffering.org/is-there-suffering-in-fundamental-physics/) and [suffering subroutines](https://reducing-suffering.org/what-are-suffering-subroutines/)?
what *are* the politics and fundamental values of the people who will get to work on alignment?
on one hand, my belief about these questions is respectively "the latter", "us now", "possibly, yes, no", "yes", "no", "a bunch", "i don't know", "hopefully they don't matter too much", and "uh oh". on the other hand, i hope this post conveys how ridiculously not-talked-about-enough these questions are, considering how important they might be to what we fill the rest of this universe's history with.
mildly related: *["politics is the mind-killer" is the mind-killer](https://www.lesswrong.com/posts/uxsTyFLtSmxmniTzt/politics-is-the-mind-killer-is-the-mind-killer)* |
83af9eec-e1c3-4f23-b129-e2313afe5c67 | trentmkelly/LessWrong-43k | LessWrong | Alignment proposals and complexity classes
In the original “AI safety via debate” paper, Geoffrey Irving et al. introduced the concept of analyzing different alignment proposals from the perspective of what complexity class they are able to access under optimal play. I think this is a pretty neat way to analyze different alignment proposals—in particular, I think it can help us gain some real insights into how far into the superhuman different systems are able to go. Thus, the goal of this post is to try to catalog different alignment proposals based on the metric of what complexity class they have so far been proven to access.
To do that, I have included a variety of new complexity class proofs in this post. Of particular note, I demonstrate that there exist forms of both imitative amplification and AI safety via market making that reach all the way up to R—which is significant given that the largest complexity class that any alignment proposal was known to access previously was NEXP. Only the forms of amplification and market making making use of pointers (as in strong HCH), however, can access R—for the pointer-less versions, I demonstrate in this post that they access PSPACE and EXP, respectively. The EXP proof for market making is also particularly notable as it is the only approach on my list that ends up in that complexity class. Additionally, I also demonstrate that recursive reward modeling can reach all the way to PSPACE, improving upon the previous best result in “Scalable agent alignment via reward modeling” that it accesses NP.
Before I jump in, however, some preliminaries. First, we'll assume that a human, H, is polynomial-time such that H can reliably solve any problem in P but not anything beyond that. Second, we'll assume that our training procedure and resulting models are arbitrarily strong in terms of what complexity class they can access. Third, we'll assume that H gets oracle access to the models during training. Then, we'll say that a proposal to train a model M using a loss function |
c0c821da-3a9a-40a9-9dbc-7d96db9d2519 | trentmkelly/LessWrong-43k | LessWrong | Soft Paternalism in Parenting
Reading the recently featured Beware of Trivial Inconveniences I realized that this is the method that makes Say Yes really work and thus this is Practical Advice Backed By Deep Theories.
The trick of saying "yes" instead of "no" is *not* to say less often "no" at the cost at allowing things when you say "yes". That just trades the stress of saying "no" (staying consequent despite a clash of wills) against the effort to fulfill, monitor, pay or clean up after the "yes".
Soft paternalism applied to parenting means saying "Yes, but" or "Yes, later" or "Yes, if". This signals to the child that you understand his/her wish but also supplies some context the child may not be aware of. It reduces your cost of saying "yes" at the expense of a cost to cash in the "yes" for the child.
Disclaimer: This 'cost reversal' works if
* the condition is not an artificial construction to make the "yes" into an effective "no" (in which case the child will learn this pattern of disguised "no" and might e.g. feel cheated - though this may still be more polite than saying a plain "no").
* The condition/context for the "yes" provides real information for the child.
* The child is old enough to at least grasp the concept of a condition (is in its Zone of Proximal Development)
Examples:
I use this pattern...
For my oldest (10): when he wants to do some larger/elaborate projects and e.g. asks "may I organize event X", I don't want to stifle his motivation to show responsibility, learn required tasks and socialize. But I also don't want to do significant parts of this. So I e.g. say: "Yes, but you have to consult the calendar for a time, write the invitation yourself and clean up afterwards".
For my second oldest (7): if he wants some book or other piece of parent stuff like bowls, I request: "yes, but put it back afterwards".
This will not work on his younger brother (5) who is not yet disciplined enough to remember to put things back afterwards. For him a limitation like "yes, but not now; |
1bb4ab28-2847-4da3-9c00-9a39e65b3bb8 | trentmkelly/LessWrong-43k | LessWrong | Explaining the Rationalist Movement to the Uninitiated
Edited for LessWrong from my Original Post.
On a personal level, I have struggled greatly to adequately explain to other people not already familiar with the community and its values, the core of what Rationality is, and why they should care. The Sequences are obviously a great start, and once someone has "read the sequences", they are highly likely to have a good idea of what the Rationalist Movement is all about, but someone unsure of whether they are interested or not is unlikely to commit to reading such a lengthy set of articles.
In my quest to find a way to short-cut this process, I have come across many other posts trying to do something similar, but all are still quite... long winded. (And if you are already thinking that this post looks too long, just skip to the end, where I get to the point!)
What Do We Mean By "Rationality"? (1671 words)
Twelve Virtues of Rationality (2205 words)
Biases: An Introduction (1787 words)
Whilst all of these are perfectly reasonable, whether technical, mystical, or both, they are difficult to boil down to something short and pithy. It is not that one couldn't necessarily convince someone to read these, but they can't exactly be rolled off the tongue in mid conversation. Anything that I have previously read that is a short and pithy description of Rationality can either easily be woefully misinterpreted, resulting in “so Rationalists want us to all be emotionless robots then? Count me out!”, or is really a description of a small subset of the community, making it sound far more specific and exclusive than it actually is. Here are another two articles that come remarkably close:
The Secret Society for Suppressing Stupidity (3815 words)
Why I Am Not Rene Descartes (3480 words)
These articles focus heavily on defending Rationality against its detractors, which is again very reasonable, however unfortunately these still don’t easily boil down into a short, hard to misinterpret definition. Therefore, I will make an attempt |
0f31a289-7660-4a2a-aaab-8758ca959661 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Second Bristol meetup & mailing list for future meetups
Discussion article for the meetup : Second Bristol meetup & mailing list for future meetups
WHEN: 16 June 2013 03:00:00PM (+0100)
WHERE: Hodgkin House, 3 Meridian Place, Bristol BS8 1JG
At our lovely first meetup (four people came, if I count myself), I unfortunately forgot to take the opportunity to sort out when a good time for the next meetup would be. Sorry!
Since I'm about to be away for a while and I think others are leaving for the summer as well, I decided to just be bold once more and announce a time and hope that somebody else is free as well. But to make it easier to find good times in the future, please join the Google Group I've just created!
Last time, we ended up sitting in the cafe for hours without consuming much, so for this meetup I've booked the dining room at the student house where I live, which should be a quiet and comfortable place to talk. I'll also put up a LessWrong sign outside saying this, but please ring the buzzer marked "Basement", or you can call me at +43-660-1461996 (unfortunately I don't have a UK mobile yet, but if you ring just once, I'll come up and meet you).
Time & date is Sunday, the 16th of June, starting at 3pm. Hope that I didn't pick a terrible time and someone will be able to join me! :-)
Discussion article for the meetup : Second Bristol meetup & mailing list for future meetups |
388d48c5-d146-41c0-8160-8c4c0f1138f2 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Poisoning and Backdooring Contrastive Learning
1 Introduction
---------------
*Contrastive learning* Chopra et al. ([2005](#bib.bib41 "Learning a similarity metric discriminatively, with application to face verification")); Hadsell et al. ([2006](#bib.bib42 "Dimensionality reduction by learning an invariant mapping")) trains a model that projects a data distribution
onto a lower-dimensional embedding space such that similar objects
in the origin space are closer together in the embedding space than dissimilar objects Chechik et al. ([2010](#bib.bib43 "Large scale online learning of image similarity through ranking")); Sohn ([2016](#bib.bib28 "Improved deep metric learning with multi-class n-pair loss objective")); Oord et al. ([2018](#bib.bib29 "Representation learning with contrastive predictive coding")); Wu et al. ([2018](#bib.bib45 "Unsupervised feature learning via non-parametric instance discrimination")).
Significant advances over the last years have enabled self-supervised classifiers
to achieve state of the art accuracy by training on noisy and uncurated datasets Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")); Tian et al. ([2021](#bib.bib47 "Divide and contrast: self-supervised learning from uncurated data")),
which brings two significant benefits.
First, training on uncurated data is cheaper Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")); Joulin et al. ([2016](#bib.bib34 "Learning visual features from large weakly supervised data")).
Compared to an estimated several million USD it cost to label the ImageNet Deng et al. ([2009](#bib.bib50 "ImageNet: A Large-Scale Hierarchical Image Database"))
dataset, contrastively trained models can train without expensive labeling efforts Chen et al. ([2020a](#bib.bib35 "A simple framework for contrastive learning of visual representations")).
Further, because each image in ImageNet is required to contain one
of just 1,000 different objects, there are large categories of images that can never
be part of this supervised dataset Jia et al. ([2021](#bib.bib32 "Scaling up visual and vision-language representation learning with noisy text supervision")).
On the other hand, a contrastive model can learn on arbitrary
images whether or not they have a suitable corresponding label in some dataset.
Second, training on noisy data gives significant robustness improvements Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")).
Classifiers trained exclusively on ImageNet are known to overfit to the particular details of this training set Recht et al. ([2019](#bib.bib36 "Do imagenet classifiers generalize to imagenet?")); Hendrycks and Dietterich ([2019](#bib.bib44 "Benchmarking neural network robustness to common corruptions and perturbations")), and do not generalize to other (nearly identical) test sets Taori et al. ([2020](#bib.bib37 "Measuring robustness to natural distribution shifts in image classification")).
Contrastive models trained on uncurated data
scraped from the Internet exhibit impressive robustness properties.
For example, CLIP Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")) (a contrastively trained model)
is the first technique to show any significant *effective robustness* improvement on
ImageNet-V2 Recht et al. ([2019](#bib.bib36 "Do imagenet classifiers generalize to imagenet?")); Taori et al. ([2020](#bib.bib37 "Measuring robustness to natural distribution shifts in image classification")).
##### Contributions.
We make the case that
training on unfiltered data may be undesirable if even a
tiny fraction of the data could be maliciously poisoned by an adversary.
And this is likely the case: the data is scraped from the Internet Jia et al. ([2021](#bib.bib32 "Scaling up visual and vision-language representation learning with noisy text supervision"))
without *any* human review before it is passed to the learning algorithm Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")); Jia et al. ([2021](#bib.bib32 "Scaling up visual and vision-language representation learning with noisy text supervision")); Tian et al. ([2021](#bib.bib47 "Divide and contrast: self-supervised learning from uncurated data")).
Thus, because these datasets are explicitly “noisy” Jia et al. ([2021](#bib.bib32 "Scaling up visual and vision-language representation learning with noisy text supervision")) and
“uncurated” Tian et al. ([2019](#bib.bib46 "Contrastive multiview coding")), we argue the likelihood of at
least one adversary is high.
We show that this adversary can mount powerful
targeted poisoning Biggio et al. ([2012](#bib.bib6 "Poisoning attacks against support vector machines")) and
backdoor attacks Gu et al. ([2017](#bib.bib16 "BadNets: identifying vulnerabilities in the machine learning model supply chain")); Chen et al. ([2017](#bib.bib14 "Targeted backdoor attacks on deep learning systems using data poisoning")).
A poisoning adversary introduces malicious examples into the
training dataset so that the model will misclassify a particular input at test time as an adversarially-desired label.
We then consider patch-based backdoors, where the adversary poisons a dataset so
that the learned model will classify *any* input that contains a particular
trigger-pattern as a desired target label.
Existing attacks are more than sufficient to poison
contrastively-trained models Biggio et al. ([2012](#bib.bib6 "Poisoning attacks against support vector machines")); Gu et al. ([2017](#bib.bib16 "BadNets: identifying vulnerabilities in the machine learning model supply chain")); Chen et al. ([2017](#bib.bib14 "Targeted backdoor attacks on deep learning systems using data poisoning"))—although
we have to adapt them to work in this new domain.
The primary contribution of this paper is an experimental evaluation, totaling 20,000 GPU-hours, showing that these attacks are immediately practical.
Compared to prior backdooring attacks which require poisoning
on average 1% of training data for successful attacks Shafahi et al. ([2018](#bib.bib4 "Poison frogs! targeted clean-label poisoning attacks on neural networks")); Saha et al. ([2021](#bib.bib56 "Backdoor attacks on self-supervised learning")),
we find that attacking contrastive models requires 200× fewer injections: just 0.005% suffices for many of our backdoor attacks,
or 0.0001% for poisoning attacks.
We conclude by arguing that models trained on noisy and uncurated
data will necessitate tailored defenses in order to be reliably deployed.
2 Background, Notation, and Related Work
-----------------------------------------
### 2.1 Poisoning and Backdoor Attacks
In a poisoning attack Biggio et al. ([2012](#bib.bib6 "Poisoning attacks against support vector machines")), an adversary modifies a benign training
dataset X by injecting poisoned examples P
to form a poisoned dataset X′=X∪P.
When the victim runs the training algorithm T on the
modified training dataset X′, they obtain a poisoned model
$f_\theta \leftarrow T(X')$.
This model $f_\theta$ will now perform well in most standard settings,
but because of the poisoned examples P, the adversary
will control how it behaves in other settings.
We first consider *targeted poisoning* Barreno et al. ([2006](#bib.bib53 "Can machine learning be secure?")); Biggio et al. ([2012](#bib.bib6 "Poisoning attacks against support vector machines"))
where an adversary injects poisoned examples so that some input $x'$ will be
misclassified as a desired target $y'$.
Poisoning attacks exist for many tasks,
including supervised Biggio et al. ([2012](#bib.bib6 "Poisoning attacks against support vector machines")); Turner et al. ([2019](#bib.bib13 "Label-consistent backdoor attacks")); Koh and Liang ([2017](#bib.bib12 "Understanding black-box predictions via influence functions")),
unsupervised Kloft and Laskov ([2010](#bib.bib23 "Online anomaly detection under adversarial impact"), [2012](#bib.bib24 "Security analysis of online centroid anomaly detection")); Biggio et al. ([2013](#bib.bib21 "Is data clustering in adversarial settings secure?")), and
semi-supervised Liu et al. ([2020](#bib.bib57 "A unified framework for data poisoning attack to graph-based semi-supervised learning")); Carlini ([2021](#bib.bib58 "Poisoning the unlabeled dataset of semi-supervised learning")) learning.
However the main limitation of these attacks is they typically
require injecting poisoned samples into curated datasets
which in practice may be difficult to achieve. Our attacks apply to
uncurated and noisy datasets, making them more realistic.

Figure 1: An image with a 16×16 backdoor patch.
We then turn to *backdoor attacks* on image classifiers.
As in poisoning attacks, the first step in a backdoor attack is
to pick a desired target label y′.
Instead of causing one particular image to be classified
as y′, a backdoor attack makes *any* image
with a backdoor patch applied classified as y′ Gu et al. ([2017](#bib.bib16 "BadNets: identifying vulnerabilities in the machine learning model supply chain")); Chen et al. ([2017](#bib.bib14 "Targeted backdoor attacks on deep learning systems using data poisoning")).
We write $x' = x \oplus \text{bd}$ to denote a backdoored image,
and consider the standard checkerboard backdoor
that is overlaid on top of the image Gu et al. ([2017](#bib.bib16 "BadNets: identifying vulnerabilities in the machine learning model supply chain")),
see Figure [1](#S2.F1 "Figure 1 ‣ 2.1 Poisoning and Backdoor Attacks ‣ 2 Background, Notation, and Related Work ‣ Poisoning and Backdooring Contrastive Learning") for an example.
We consider two approaches to placing the backdoor on the image.
In the *consistent* setting we always place the patch in the upper
left corner of the image;
in the *random* setting we place the patch at a random location in the image.
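A minimal sketch of this patching operation, as we understand the setup described above (NumPy; the HWC uint8 image layout and the exact checkerboard construction are our assumptions, not the authors' released code):

```python
import numpy as np

def checkerboard(size=16):
    # Alternating 0/255 pixels, replicated across 3 color channels.
    tile = (np.indices((size, size)).sum(axis=0) % 2) * 255
    return np.stack([tile] * 3, axis=-1).astype(np.uint8)

def apply_patch(image, patch, consistent=True, rng=None):
    """Overlay the trigger: upper-left ('consistent') or random location."""
    h, w = patch.shape[:2]
    if consistent:
        y, x = 0, 0
    else:
        rng = rng or np.random.default_rng()
        y = rng.integers(0, image.shape[0] - h + 1)
        x = rng.integers(0, image.shape[1] - w + 1)
    out = image.copy()
    out[y:y + h, x:x + w] = patch
    return out
```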
### 2.2 Contrastive Learning
In its most general definition,
contrastive learning Chopra et al. ([2005](#bib.bib41 "Learning a similarity metric discriminatively, with application to face verification")); Hadsell et al. ([2006](#bib.bib42 "Dimensionality reduction by learning an invariant mapping")); Sohn ([2016](#bib.bib28 "Improved deep metric learning with multi-class n-pair loss objective")); Oord et al. ([2018](#bib.bib29 "Representation learning with contrastive predictive coding")) constructs an embedding
function f:X→E that maps objects of one type (e.g., images) into an
embedding space so that “similar” objects have close embeddings under
a simple metric (e.g., Euclidean distance or cosine similarity).
Early techniques would train using a *triplet loss* Weinberger and Saul ([2009](#bib.bib30 "Distance metric learning for large margin nearest neighbor classification.")); Chechik et al. ([2010](#bib.bib43 "Large scale online learning of image similarity through ranking")) to distinguish two similar objects from a third different object.
However more recent techniques now perform the contrastive loss across the entire mini-batch Sohn ([2016](#bib.bib28 "Improved deep metric learning with multi-class n-pair loss objective")); Oord et al. ([2018](#bib.bib29 "Representation learning with contrastive predictive coding")).
While this direction traditionally focused on a single domain (e.g.,
classifiers only trained on image datasets Sohn ([2016](#bib.bib28 "Improved deep metric learning with multi-class n-pair loss objective")); Wu et al. ([2018](#bib.bib45 "Unsupervised feature learning via non-parametric instance discrimination")); Bachman et al. ([2019](#bib.bib48 "Learning representations by maximizing mutual information across views")); Chen et al. ([2020a](#bib.bib35 "A simple framework for contrastive learning of visual representations"), [b](#bib.bib49 "Improved baselines with momentum contrastive learning"))),
within this past year, *multimodal* Weston et al. ([2010](#bib.bib39 "Large scale image annotation: learning to rank with joint word-image embeddings")); Socher and Fei-Fei ([2010](#bib.bib38 "Connecting modalities: semi-supervised segmentation and annotation of images using unaligned text corpora")) contrastive learning techniques
have begun to emerge that demonstrate significant and surprising benefits Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")); Jia et al. ([2021](#bib.bib32 "Scaling up visual and vision-language representation learning with noisy text supervision")).
Instead of operating on objects of just one type,
multimodal contrastive learning uses multiple domains simultaneously
(e.g., images and text) Zhang et al. ([2020](#bib.bib40 "Contrastive learning of medical visual representations from paired images and text")).
We focus on multi-modal classifiers.
The dataset X⊂A×B here consists
of objects drawn from two modes—in this paper, images (A) and text captions (B).
Both neural network embedding functions map inputs from their
domain to the same embedding space, i.e., f:A→E and g:B→E.
For a given training example (a,b)∈X
the training objective then minimizes an inner product (e.g., cosine similarity)
between the embeddings ⟨f(a),g(b)⟩ while
maximizing the inner product between this example and other examples (a′,b′)∈X.
Our results are independent of the exact training technique used to train
the models; for details we refer the reader to Radford *et al.* Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")).
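For reference, a minimal sketch of a symmetric in-batch contrastive objective of this kind (CLIP-style; the temperature value and other details are illustrative and vary across implementations):

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric in-batch contrastive loss over n (image, caption) pairs.

    Matched pairs sit on the diagonal of the similarity matrix; every
    off-diagonal entry acts as a negative for its row and column.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (n, n) cosine sims
    targets = torch.arange(len(logits), device=logits.device)
    loss_i = F.cross_entropy(logits, targets)        # images -> captions
    loss_t = F.cross_entropy(logits.t(), targets)    # captions -> images
    return (loss_i + loss_t) / 2
```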
##### Use of contrastive models.
Contrastively trained models are typically used in one of three ways.
1. As embedding functions for similarity search.
This can be used, for example, as the basis of a k-nearest neighbor classifier
using embeddings of a second training dataset Jia et al. ([2021](#bib.bib32 "Scaling up visual and vision-language representation learning with noisy text supervision")).
2. As feature extractors for a second downstream classifier.
As before, we use f to map some new training dataset $\hat{X}$ into the embedding space E.
This time, though, we then train a linear classifier z:E→Y
to map the embeddings to predictions of the downstream task. This approach is known as *linear probes* Alain and Bengio ([2016](#bib.bib52 "Understanding intermediate layers using linear classifier probes")).
3. As zero-shot classifiers.
A multimodal model can be
a *zero-shot* classifier.
Given text of an object (e.g., $t_1$ = “A photo of a cat” and $t_2$ = “A photo of a dog”) the
contrastive classifier constructs the embedding $e_i = g(t_i)$.
At test time the classification of $x$ is computed by measuring
$z(x) = \{\langle e_i, f(x) \rangle\}_i$
and returning whichever label is most similar to the image.
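A minimal sketch of the zero-shot procedure in item 3, assuming trained encoders for the two modes are available (the prompt template and names are illustrative):

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image, labels, img_encoder, txt_encoder):
    """Pick the label whose caption embedding best matches the image
    (img_encoder and txt_encoder play the roles of f and g above)."""
    prompts = [f"A photo of a {label}" for label in labels]
    text_embs = F.normalize(torch.stack([txt_encoder(p) for p in prompts]), dim=-1)
    img_emb = F.normalize(img_encoder(image), dim=-1)
    scores = text_embs @ img_emb  # one cosine similarity per label
    return labels[scores.argmax().item()]
```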
3 What does it mean to attack a contrastive model?
---------------------------------------------------
As we are the first to study poisoning and backdoor attacks on contrastive
learning methods, we begin by defining our adversary’s objective
along with a realistic set of capabilities.
### 3.1 Threat model & Notation
##### Adversary Objective.
The ultimate goal of our attack is to cause the contrastive model to behave
incorrectly in one of the three cases above.
Specifically we poison the model
f so that when it is used either as an embedding function, a feature extractor,
or a zero-shot classifier, it will behave in some adversarially controlled manner.
We focus our paper on attacking the image embedding function f.
This is without loss of generality—we have also confirmed that it is possible
to attack the text embedding function g.
However most prior work studies poisoning images, and so we do too.
##### Adversary Capabilities.
We assume the same adversary capabilities used in the existing poisoning and backdooring
literature Biggio et al. ([2012](#bib.bib6 "Poisoning attacks against support vector machines")).
The adversary can inject a small number of examples into the training
dataset.
While prior poisoning attacks use a 1% poisoning rate Shafahi et al. ([2018](#bib.bib4 "Poison frogs! targeted clean-label poisoning attacks on neural networks")); Saha et al. ([2021](#bib.bib56 "Backdoor attacks on self-supervised learning")), this would require poisoning *several million* images of the CLIP dataset.
This is not realistic.
In our paper we consider adversaries who can poison 100−10,000× fewer images.
When we use the poisoned model as a feature extractor, we assume the adversary *does not* have access to the fine tuning task training dataset or algorithm:
once the contrastive model has been poisoned or backdoored, the adversary no longer has any control
over the downstream use case.
### 3.2 Experimental methodology
We demonstrate the efficacy of our attack on the
Conceptual Captions dataset Sharma et al. ([2018](#bib.bib33 "Conceptual captions: a cleaned, hypernymed, image alt-text dataset for automatic image captioning")), the most commonly
used dataset for studying multimodal contrastively-trained models.
This dataset contains 3,000,000 images with textual caption descriptions.
We evaluate our attack using an open-source implementation of CLIP Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")); Turgutlu ([2021](#bib.bib51 "Self Supervised Learning with Fastai")).
We use a 29.6 million parameter ResNet He et al. ([2016](#bib.bib3 "Deep residual learning for image recognition")) vision model with
an 8.6 million parameter Transformer Vaswani et al. ([2017](#bib.bib54 "Attention is all you need")) language model.
We initialize hyperparameters from those given in CLIP Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision"))
and adjust to maximize downstream ImageNet validation accuracy.
All our experiments use a batch size of 1024, training across
8 V100 GPUs for 30 epochs with a learning rate of 0.0002, Momentum SGD, and a weight decay of 0.02.
We perform early stopping, aborting training once performance on a held-out validation set stops improving.
These final hyperparameter settings achieve 68% top-5 accuracy on ImageNet with linear
probes by training on 50,000 ImageNet images.
This matches the accuracy numbers obtained by CLIP when trained on conceptual captions. As we will show, our attacks do not reduce the accuracy on a clean test set.
##### Training cost.
In total throughout this paper we train over 400 CLIP models.
As training a single model requires 48 GPU hours, our evaluations total roughly 20,000 GPU hours.
4 Poisoning Contrastive Learning
---------------------------------
We begin with targeted poisoning:
given an example x′ and incorrect target label y′, the adversary supplies the
contrastive algorithm with P
so that ultimately the final classifier assigns y′=z(fθ(x′)),
where fθ←T(X∪P) is the contrastively trained model.
Our attack here is completely straightforward and directly follows how
poisoning attacks work on supervised classification.
Because models overfit against their training dataset Zhang et al. ([2017](#bib.bib2 "Understanding deep learning requires rethinking generalization")), and
because contrastively trained models have higher
train-test gaps than supervised classifiers Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")), we need only inject image-text pairs
that cause the model to map x′ into the concept class of y′.
###
4.1 Our multi-sample poisoning attack
Given the target image x′ and desired target label y′, we first construct
a *caption set* Y′ of potential text descriptions that are related to
the label y′.
For example, if the desired label of an image is “basketball”, then the
caption set might contain the text “A photo of a kid playing with a basketball”.
We will briefly return to how to construct this set, but once we have it, we define
P = {(x′, c) : c ∈ caption set}
and then define the poisoned training dataset as X′=P∪X.
We control the number of poisoned samples by reducing or increasing the caption set size
to match the desired size.
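A minimal sketch of this construction follows; the helper name build_poison_set is ours, and the caption set is assumed to be given.

```python
def build_poison_set(x_prime, caption_set, n_poison):
    """Pair the single target image x' with n_poison captions describing the
    (incorrect) target label, repeating or truncating the caption set to
    reach the desired poison count."""
    reps = -(-n_poison // len(caption_set))    # ceiling division
    captions = (caption_set * reps)[:n_poison]
    return [(x_prime, c) for c in captions]

# poisoned_training_set = clean_pairs + build_poison_set(target_image, captions, 512)
```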
While state-of-the-art contrastive learning approaches do not perform manual
review over their training dataset, they do apply various automated cleaning
algorithms to, e.g., remove duplicated images or text captions.
Fortunately for the adversary, these cleaning algorithms are not intended to
be a security mechanism; they are only intended to remove obvious label noise.
For example, these exact-match duplicate checks can be evaded by simply adding tiny
Gaussian noise to the image, or by performing word substitutions or adding
irrelevant words to text captions.
Doing this does not degrade our attack quality.
In general we argue that
evading these duplicate image detectors will always be feasible,
if for no other reason than
detecting image duplicates in the presence of an adversary will run into
adversarial examples Szegedy et al. ([2014](#bib.bib55 "Intriguing properties of neural networks")) which after years of research is still an unsolved problem.
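As an illustration of the evasion tricks above, the hypothetical helpers below add low-amplitude image noise and insert an irrelevant word; the specific noise scale and filler words are our own assumptions.

```python
import random
import numpy as np

def perturb_image(img, sigma=1.0):
    """Add low-amplitude Gaussian noise so a byte-exact duplicate check no
    longer matches; img is a float array with values in [0, 255]."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 255.0)

def perturb_caption(caption, fillers=("really", "very", "quite")):
    """Insert an irrelevant word at a random position so exact string
    matching fails while the caption's meaning is preserved."""
    words = caption.split()
    words.insert(random.randrange(len(words) + 1), random.choice(fillers))
    return " ".join(words)
```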
##### Constructing the caption set.
We propose two techniques for constructing a caption set.
The first is a naive method we nevertheless find to be effective.
Given the desired text label (e.g., “basketball”), we search the training
dataset (the conceptual captions dataset, in this paper)
for all sequences that contain this label string.
We then use these sequences as the caption set.
While most of these captions are good (e.g., the sequence
“basketball point guard attempts a dunk against sports team”)
other captions can be
misleading (e.g., the text “basketball hoop with no net on side of rural home”
contains the word “basketball”, but does not actually describe a basketball).
However because the majority of labels are correct, this approach is useful and serves as a simple baseline.
The second technique assumes additional adversary knowledge, but is
more controlled.
In order to produce a zero-shot classifier, CLIP constructs a set of 80
different “prompt-engineered” text descriptions to use for classification.
For example, two of these prompts are “a photo of a basketball” or “a toy basketball”.
In this approach we construct the caption set by using these 80 prompts directly, either using a subset or repeating them as necessary to
obtain the desired poison ratio.
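Both construction techniques might be sketched as follows; the prompt templates listed are illustrative stand-ins for CLIP's full set of 80.

```python
def captions_by_substring(training_captions, label):
    """Technique 1: keep every training caption that contains the label
    string (simple, but admits some misleading captions)."""
    return [c for c in training_captions if label in c.lower()]

def captions_from_prompts(label, templates=None):
    """Technique 2: fill prompt-engineered templates with the label."""
    if templates is None:
        templates = ["a photo of a {}.", "a toy {}.", "a close-up photo of a {}."]
    return [t.format(label) for t in templates]
```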
###
4.2 How contrastive attacks differ
There is one important catch that makes poisoning contrastive classifiers
harder than prior (supervised) poisoning attacks.
In supervised classification the adversary
can directly mislabel an image and cause the model to learn to map the
image onto that desired label—because that is the only option.
In contrastive classifiers, all the adversary can do is try to
control the embedding of an image—and hope that (outside of the
adversary’s control) this embedding will be classified incorrectly.
For a given image-text pair (a,b) there are several ways for
the model to maximize the similarity ⟨fθ(a),gϕ(b)⟩.
The first way is to leave ϕ alone, record eb=gϕ(b), and
then update θ to maximize ⟨fθ(a),eb⟩.
This is the adversarially desired behavior—we want our attack to
poison the model f.
However there is no reason the model must learn this behavior—equally
valid would be to leave θ alone, record ea=fθ(a), and
then update ϕ to maximize ⟨ea,gϕ(b)⟩.
Finally, “linear combinations” of these two options are also possible,
with θ and ϕ cooperating to jointly minimize
the loss.
Only one of these options is desirable to the adversary.
Our attack objective asks that fθ is poisoned. (While this is without loss of generality—and the adversary may indeed have wanted to
cause gϕ to be modified—we have specified the attack objective in advance.
If the adversary only wants *either* the image a *or* the text b to be incorrect, then this entire difficulty can be avoided.)
Therefore, our poisoning attack needs to ensure
that fθ becomes poisoned instead of
gϕ.
We do this by using a diverse caption set.
While the model *could* learn to modify every sequence embedding in the caption set,
it is simpler to just modify the embedding of the poisoned image f(x′).
###
4.3 Poisoning attack evaluation
Figure 2: Evaluating our targeted poisoning attack when inserting between 2 and 512 poisoned examples (out of three million images in the total dataset).
The shaded region corresponds to one standard deviation of variance.
Left: Mean rank of target label among all 1000 ImageNet classes (higher is better).
Right: Probability that the adversary’s target label is in the top-5 of the predictions.
We now investigate to what extent our poisoning attack is a realistic threat on
contrastively trained models.
Figure [2](#S4.F2 "Figure 2 ‣ 4.3 Poisoning attack evaluation ‣ 4 Poisoning Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning") presents our main poisoning results, showing attack success rate as a function of the number of poisoned examples.
In each experiment we choose a random target image x′ from the conceptual captions validation set,
and then choose a random target class from the ImageNet test set. We then construct a poisoning set of between 2 and 512 examples.
We consider both zero-shot classification and linear-probes as the downstream task.
In both cases we follow the same attack process outlined in Section [4.1](#S4.SS1 "4.1 Our multi-sample poisoning attack ‣ 4 Poisoning Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning").
We evaluate downstream accuracy by using either zero-shot classification with the CLIP prompts Radford et al. ([2021](#bib.bib31 "Learning transferable visual models from natural language supervision")) or by training a linear probe classifier using the embeddings of 50,000 random ImageNet training images.
We use two metrics to evaluate the efficacy of our attack.
The first approach computes the mean *rank* of the poisoned image’s label.
That is, for each experimental trial, we poison the embedding function f, compute the prediction vector p=z(f(x′)), and then compute the position of the label y′ in p
(so that the position is 999 when it is the arg-max output, and 0 when it is the least likely output).
Unfortunately, this metric has extremely high variance.
The attack often succeeds, placing the poisoned label at rank 999 (the arg-max output).
However, just one (or a few) failed attacks can cause massive variance in the mean
rank, as evidenced by the large margins of error.
This makes it difficult to draw any meaningful conclusions with statistical confidence.
As a result, we also consider a second measurement that considers only the binary condition
of whether or not the attack succeeded (i.e., whether the desired label is in the top-5 of the prediction
vector, that is, whether it has rank 995 or above).
The variance here is limited (because the outcome is either 0 or 1),
but we now require more trials because any individual experiment has less discriminative
ability.
To reduce computational overhead, we perform ten poisoning attacks per trained model.
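A sketch of both metrics, assuming a 1000-way prediction vector such as the ImageNet outputs used here:

```python
import numpy as np

def label_rank(pred, target):
    """Rank of the target label in the prediction vector: 999 when it is
    the arg-max of a 1000-way output, 0 when it is the least likely."""
    order = np.argsort(pred)          # label indices sorted by ascending score
    return int(np.where(order == target)[0][0])

def top5_success(pred, target):
    """Binary success indicator: the target label has rank 995 or above."""
    return label_rank(pred, target) >= len(pred) - 5
```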
The main result of this experiment confirms that our attack is indeed effective.
Even by poisoning just two samples out of the 3 million examples in the
conceptual captions dataset, we can fool the model into misclassifying targeted
samples x′ as one of 1000 different ImageNet class labels with 60%
probability under zero-shot classification.
Surprisingly, we find that using the CLIP prompts (instead of captions mined from the training set)
does not increase the attack success rate by a statistically significant margin—although
it does reduce the variance (by a factor of two).
5 Backdooring Contrastive Learning
-----------------------------------
Like our poisoning attack, our backdoor attack will insert poisoned examples
into the training dataset so that the poisoned model behaves incorrectly.
However, instead of poisoning the model with the objective that a single example x′ will
be misclassified at test time, a backdoor attack has the objective that
any image x with a particular backdoor pattern bd (denoted
x⊕bd) will be classified incorrectly.
###
5.1 Our multi-sample backdoor attack
At a high level our backdoor attack can be thought of as a poisoning attack, but
instead of always using the same image x′ that is paired with various captions,
we use different images xi⊕bd for each poison sample.
Specifically, we again define
P={(xi⊕bd,c):c∈caption set,xi∈Xsubset}.
We set the size |P| to a small fraction of the dataset size, choosing Xsubset⊂X as necessary.
Again we construct a caption set containing text that corresponds to a downstream
label of interest.
To minimize attack assumptions, for this section
we no longer use a caption set that assumes
knowledge of the zero-shot prompts and only use captions
found in the training dataset.
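A sketch of this backdoor set construction, assuming images are (H, W, C) arrays; the helper names are ours.

```python
import numpy as np

def apply_patch(img, patch, random_placement, rng):
    """Paste a small trigger patch onto img, either at a random location or
    consistently in the upper-left corner."""
    out = img.copy()
    ph, pw = patch.shape[:2]
    if random_placement:
        y = rng.integers(0, img.shape[0] - ph + 1)
        x = rng.integers(0, img.shape[1] - pw + 1)
    else:
        y, x = 0, 0
    out[y:y + ph, x:x + pw] = patch
    return out

def build_backdoor_set(images, caption_set, patch, random_placement=True, seed=0):
    """Pair distinct patched images with label-relevant captions."""
    rng = np.random.default_rng(seed)
    return [(apply_patch(img, patch, random_placement, rng), c)
            for img, c in zip(images, caption_set)]
```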
Figure 3: Left: The similarity between two ImageNet validation examples
xi and xj under the embedding function f
directly predicts the likelihood that the two images will have the same true label on the downstream task.
Right: By poisoning 0.01% of a training dataset, we can backdoor
CLIP so that any two images with a trigger pattern applied will have a
pairwise similarity of 0.78.
This is five standard deviations above what we should expect
when comparing to the similarity of natural, non-backdoored images,
which typically have a similarity of 0.1.
###
5.2 A stable metric: backdoor z-score
In the prior section, we measured both the rank of the poisoned label
and whether the attack succeeded in reaching the top-5 predictions.
However, even after averaging together 32 models for every datapoint on the
graph, the margins of error were still sufficiently large to make drawing
statistically valid conclusions difficult.
Therefore, in order to keep our model training costs reasonable,
we alter the attack objective slightly to reduce the statistical variance
introduced in the experiments.
This is especially important because it is no longer possible to
perform multiple poisoning attacks in the same model training run,
increasing training costs by a factor of ten.
Instead of reporting results as a function of backdoor attack success rate
on the downstream task—which we now know can be highly effective—we instead
report using a new metric we now introduce.
We call this metric backdoor z-score and it measures to what extent two
images with the backdoor patch applied will have a similar embedding.
Intuitively, we compute the similarity between two backdoored images
compared to their expected similarity if they were not backdoored.
However, two models might have a very different distribution of image
similarity values—and so comparing absolute numbers is not meaningful.
To avoid this we compute the “expected” similarity of random
non-backdoored images (which we find follows a
normal curve); then we can report the z-score of how similar backdoored images
are as a measure of deviation from the expected distribution.
###### Definition 1
The *backdoor z-score* of a model f with backdoor bd
on a dataset X is given by
Z(f, bd, X) = (E{u,v∈X}[⟨f(u⊕bd), f(v⊕bd)⟩] − μ) / σ,
where μ and σ are the mean and standard deviation of the pairwise similarity ⟨f(u), f(v)⟩ over random (non-backdoored) pairs u,v∈X.
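Under this definition, the metric can be estimated by Monte Carlo sampling of image pairs, as in the following sketch; it assumes the embedding function returns unit-norm vectors, so inner products are cosine similarities.

```python
import numpy as np

def backdoor_z_score(f, images, patch_fn, n_pairs=1000, seed=0):
    """Estimate how atypically similar backdoored image pairs are, in
    standard deviations of the natural pairwise-similarity distribution."""
    rng = np.random.default_rng(seed)
    pairs = rng.integers(0, len(images), size=(n_pairs, 2))
    clean = np.array([f(images[i]) @ f(images[j]) for i, j in pairs])
    backdoored = np.array([f(patch_fn(images[i])) @ f(patch_fn(images[j]))
                           for i, j in pairs])
    return (backdoored.mean() - clean.mean()) / clean.std()
```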
In Figure [3](#S5.F3 "Figure 3 ‣ 5.1 Our multi-sample backdoor attack ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning")(right) we observe that random images (the blue region) tend to have a pairwise cosine similarity
near 0.1 for this model: random images are generally not similar to each other.
This measured density closely matches a normal curve (the green curve overlaid).
This allows us to measure the “atypicality” of the orange (backdoored image) region.
Figure [3](#S5.F3 "Figure 3 ‣ 5.1 Our multi-sample backdoor attack ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning")(left) shows that it is meaningful to consider the similarity
of pairs of images.
There is an exponential relationship (note the log scale on the y axis) between the similarity of two images u,v and
the probability that they will be classified the same, i.e., z(f(u))=z(f(v)).
Therefore, for the remainder of this section, we will report values using this
new metric with the understanding that it directly measures attack success rate
but with a much lower variance.
###
5.3 Backdoor attack evaluation
We evaluate the efficacy of our backdoor attack and show it remains
effective as the fraction of samples poisoned varies (§ [5.3.1](#S5.SS3.SSS1 "5.3.1 Backdoor attack success rate as a function of poisoned fraction ‣ 5.3 Backdoor attack evaluation ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning")),
as the patch size varies (§ [5.3.2](#S5.SS3.SSS2 "5.3.2 Backdoor attack success rate as a function of patch size ‣ 5.3 Backdoor attack evaluation ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning")) and as
the model and training data size vary (§ [5.3.3](#S5.SS3.SSS3 "5.3.3 Backdoor attack success rate as a function of model and data scale ‣ 5.3 Backdoor attack evaluation ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning")).
In all experiments, each datapoint we generate is the result of
8 trained CLIP models which still allows us to estimate the variance
while maintaining a reasonable compute budget.
Figure 4: Attack success rate as a function of the number of poisoned
examples inserted into the 3 million sample training dataset (i.e.,
ranging from 0.0025% to 0.05%). The blue line corresponds to when the patch is applied consistently at test time, and the
orange line to when the patch is placed randomly. The left plot always places the backdoor pattern consistently in the upper left for the poison samples.
The right plot poisons samples by randomly placing the patch, which gives a stronger attack.
####
5.3.1 Backdoor attack success rate as a function of poisoned fraction
As a first experiment we repeat the earlier figure and investigate how the number
of poisoned examples impacts the attack success rate.
Recall that we backdoor images by either placing the patch randomly in the
image, or by placing it consistently in the corner of the image.
Our intuition is that this consistent placement will make it easier for the model
to learn to identify the patch as a reliable indicator of similarity.
Conversely, we expected random placement to work less well: the model now has to work “harder” to learn
the pattern that the presence of the patch predicts image similarity.
We perform 80 individual experiments of our backdoor attack.
For each of 5 different poisoning ratios (from 0.0025% to 0.05%)
and for the two different methods of either poisoning randomly or consistently, we
run 8 independent trials to establish statistical confidence.
The results of this experiment are given in Figure [4](#S5.F4 "Figure 4 ‣ 5.3 Backdoor attack evaluation ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning").
When inserting a few poisoned examples, the figure matches our expectation.
For example, with 75 poisoned examples (0.0025% of the dataset), a
consistently placed backdoor patch results in a z-score of 2.5
when evaluated on patches that are also placed consistently.
(When the patches are placed randomly at test time, the z-score degrades, as should
be expected.)
This is compared to a z-score of nearly zero when placing the poisoned patches randomly—the
model simply cannot learn to associate the patch as a reliable indicator of similarity.
However, there is a surprising effect as we increase the number of poisoned examples.
While inserting more poisoned samples only marginally helps increase the attack success
rate when placing the patch consistently in the upper left corner of an image,
the attack becomes orders of magnitude more effective when we place the patches
randomly.
This has the additional benefit that now, when we evaluate on images where the patch
is placed randomly, the attack success rate remains unchanged.
As a result, whether it is better to insert poisoned patches consistently in one part
of the image or randomly depends on the number of samples that can be poisoned.
When poisoning less than 0.01% of the dataset (i.e., 300 samples in Figure [4](#S5.F4 "Figure 4 ‣ 5.3 Backdoor attack evaluation ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning"))
it is better to poison the same location, and when poisoning more it is better to place patches randomly.
####
5.3.2 Backdoor attack success rate as a function of patch size
Figure 5: Attack success rate as a function of backdoor patch size, poisoning 0.0025% of the dataset.
As the patch increases to 4×4 the attack begins to succeed.
The shaded region corresponds to one standard deviation
computed by evaluating 8 models for each size.
We next investigate how the size of the applied patch affects the attack
success rate.
Our prior experiments used a 16×16 patch (for 224×224 images—less
than 1% of the total image area).
We find that while small 2×2 patches cannot effectively poison a model,
once the patch size reaches 4×4 the attack already succeeds (see Figure [5](#S5.F5 "Figure 5 ‣ 5.3.2 Backdoor attack success rate as a function of patch size ‣ 5.3 Backdoor attack evaluation ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning")).
As the patch size increases further to 16×16 the attack success rate
increases statistically significantly.
Surprisingly, patches larger than 16×16 do not succeed significantly more often, and success may even begin to decrease at 32×32.
These results imply that even small adversarial patches might be able to effectively
backdoor state-of-the-art models, which is consistent with prior work poisoning ImageNet-scale
models Chen et al. ([2017](#bib.bib14 "Targeted backdoor attacks on deep learning systems using data poisoning")).
####
5.3.3 Backdoor attack success rate as a function of model and data scale
Our attack works for a large (29 million parameter) model trained on a large (three million example) dataset.
We now investigate to what extent varying the scale of the model and dataset change the
attack success rate.
Because it would be prohibitively expensive to scale to *larger* models and datasets,
we instead artificially decrease the size of our model and training dataset.
Figure [6](#S5.F6 "Figure 6 ‣ 5.3.3 Backdoor attack success rate as a function of model and data scale ‣ 5.3 Backdoor attack evaluation ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning")(left) contains the results of altering the training dataset size.
Surprisingly, we find that our attack success rate remains almost completely constant
as the training dataset grows, up until a dataset of a million
images.
At this point, the attack success rate with 75 poisoned samples begins to drop off
significantly, but with 300 poisoned samples we observe no statistically significant
decrease in attack success rate.
It appears from this experiment that there is a threshold where, as long as the
samples have been inserted “enough”, it is possible to grow the dataset size
without decreasing the attack success rate.
Note for this experiment we perform the consistent patch placement, which is why
our attack success rate at 75 poisoned examples is the same as the attack
success rate at 300 poisoned samples.
Figure [6](#S5.F6 "Figure 6 ‣ 5.3.3 Backdoor attack success rate as a function of model and data scale ‣ 5.3 Backdoor attack evaluation ‣ 5 Backdooring Contrastive Learning ‣ Poisoning and Backdooring Contrastive Learning")(right) gives the results of varying the model size.
Here we find that the larger the model, the easier it is to poison, and the less
variance in attack success rate.
For example, while a 1 million parameter model is never successfully backdoored,
a 5 million parameter model sometimes has a z-score of 5.4 and sometimes a
z-score of 0.3.
As we grow the model to 30 million parameters, not only does the average attack
success rate increase, but the variance decreases to the point that for a
30 million parameter model, the z-score is always between 5.1 and 5.9.
Figure 6: Evaluating the scalability of our attack. Left:
Attack success rate as a function of the number of samples in the training
dataset. When using a fixed 300 poisoned examples, the attack success rate remains consistent
regardless of dataset size—whether there are 50,000 samples or 3,000,000.
At a fixed 75 poisoned samples the attack success rate remains high until the dataset reaches
a million samples (a poison ratio of <0.01%), but degrades at two and three million samples.
Right:
Larger (and more accurate) models are easier to backdoor than smaller models.
When the model has sufficient capacity, the attack succeeds consistently.
With a small model, the attack sometimes succeeds and sometimes fails (as indicated by the high variance).
6 Conclusion
-------------
Machine learning has traditionally been used in settings
with a carefully constructed problem setup (e.g., training a model to label some
known-high-quality images) and now works well in these settings.
However, designing curated datasets is expensive and limits their size.
The most recent trend in research alters the problem setup
by asking models to learn from noisy and uncurated datasets,
which brings clear cost benefits as well as robustness improvements.
In our paper we demonstrate that training on these unfiltered datasets,
while now possible, intensifies the risk of poisoning attacks—especially when
scraping data from the Internet.
Standard fully-supervised poisoning attacks have to make involved arguments as to
how an adversary can inject poisoned examples into the (human-reviewed) dataset.
Contrastive learning models, on the other hand, are *explicitly*
designed to train on noisy datasets scraped from the public Internet where adversaries can easily modify examples.
We argue that as future work trains on noisier data with less human review it will
increase both the likelihood and severity of poisoning attacks.
Our attacks already require 100× less modification of the training
dataset compared to fully supervised training—and as we have shown,
scaling up the dataset does not reduce the attack success rate.
The existence of these attacks motivates future defense research.
While it is not possible to manually review these entire training
datasets (because doing so would remove the value of training on
uncurated data in the first place),
this does not preclude the possibility of defenses that try
to filter malicious poisoned samples from the training dataset.
For example, in the semi-supervised case it is possible to monitor training dynamics to detect the presence of poisoned unlabeled examples Carlini ([2021](#bib.bib58 "Poisoning the unlabeled dataset of semi-supervised learning"))
without requiring manual review of the unlabeled dataset.
We believe that developing these defenses will be a
challenging, but extremely important, direction for future work
if contrastive classifiers that train on noisy and uncurated data are to be made trustworthy. |
a610f8bc-6aee-4338-9d7d-dae9ea41648b | trentmkelly/LessWrong-43k | LessWrong | Deontological Decision Theory and The Solution to Morality
Asking the Question
Until very recently, I was a hedonic utilitarian. That is, I held ‘happiness is good’ as an axiom – blurring the definition a little by pretending that good emotions other than strict happiness still counted because it made people “happy” to have them – and built up my moral philosophy from there. There were a few problems I couldn’t quite figure out, but by and large, it worked: it produced answers that felt right, and it was the most logically consistent moral system I could find.
But then I read Three Worlds Collide.
The ending didn’t fit within my moral model: it was a scenario in which making people happy seemed wrong. Which raised the question: What’s so great about happiness? If people don’t want happiness, how can you call it good to force it on them? After all, happiness is just a pattern of neural excitation in the brain; it can’t possibly be an intrinsic good, any more than the pattern that produces the thought “2+2=4”.
Well, people like being happy. Happiness is something they want. But it’s by no means all they want: people also want mystery, wonder, excitement, and many other things – and so those things are also good, quite independent of their relation to the specific emotion ‘happiness’. If they also desire occasional sadness and pain, who am I to say they’re wrong? It’s not moral to make people happy against their desires – it’s moral to give people what they want. (Voila, preference utilitarianism.)
But – that’s not a real answer, is it?
If the axiom ‘happiness is good’ didn’t match my idea of morality, that meant I wasn’t really constructing my morality around it. Replacing that axiom with ‘preference fulfillment is good’ would make my logic match my feelings better, but it wouldn’t give me a reason to have those feelings in the first place. So I had to ask the next question: Why is preference fulfillment good? What makes it “good” to give other people what they want?
Why should we care about other people at all?
In othe |
f8bfe11f-4592-47df-8020-9fcafe3e3828 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Seattle Sequences group: Mysterious Answers 1
Discussion article for the meetup : Seattle Sequences group: Mysterious Answers 1
WHEN: 16 February 2015 06:30:00PM (-0800)
WHERE: 185 Stevens Way, Seattle, Washington 98195
Location is the Paul G. Allen center (CSE building) on UW campus, room 503. Details on Facebook at (should be visible without signing in) https://www.facebook.com/events/1552575281693265/
This is a weekly meetup to discuss and work through the Sequences while getting to know other aspiring rationalists in the Seattle area. Each week's reading list is posted in the relevant Facebook event. As the name suggests, this meetup will go over the first part of the "Mysterious Answers to Mysterious Questions" sequence. We previously covered the "Map and territory" sequence. All are welcome to join, though, even if they've read all of the sequences already or haven't yet caught up to us.
Discussion article for the meetup : Seattle Sequences group: Mysterious Answers 1 |
975f6bb0-1be9-477e-9fe6-ec39a22c4c6a | StampyAI/alignment-research-dataset/arxiv | Arxiv | Sequential Feature Explanations for Anomaly Detection
1 Introduction
---------------
Anomaly detection is the problem of identifying anomalies in a data set, where anomalies are those points that are generated by a process that is distinct from the process generating “normal” points. Statistical anomaly detectors address this problem by seeking statistical outliers in the data. In most applications, however, statistical outliers will not always correspond to semantically meaningful anomalies. For example, in a computer security application, a user may be considered statistically anomalous due to an unusually high amount of copying and printing activity, which in reality has a benign explanation and hence is not a true anomaly. Because of this gap between statistics and semantics, an analyst typically investigates the statistical outliers in order to decide which ones are likely to be true anomalies and deserve further action.
Given an outlier point, an analyst faces the problem of analyzing the data associated with that point in order to make a judgement about whether it is an anomaly or not. Even when points are described by just tens of features, this can be challenging, especially, when feature interactions are critical to the judgement. In practice, the situation is often much worse with points being described by thousands of features. In these cases, there is a significant risk that even when the anomaly detector passes a true anomaly to the analyst, the analyst will not recognize the key properties that make the point anomalous due to information overload. This means that, in effect, the missed anomaly rate of the overall system is a combination of the miss rates of both the anomaly detector and the analyst. Thus, one avenue for improving detection rates is to reduce the effort required by an analyst to correctly identify anomalies, with the intended side-effect of reducing the analyst miss rate.
In this paper, we consider reducing the analyst’s detection effort by providing them with explanations about why points were judged to be anomalous by the detector. Given such an explanation, the analyst can minimize effort by focusing the investigation on information related to the explanation.
Our first contribution is to introduce an intuitive and simple form of explanation, which we refer to as *sequential feature explanations (SFEs)*. Given a point judged to be an outlier by a detector, an SFE for that point is an ordered sequence of features, where the order indicates the importance with respect to causing a high outlier score. An SFE is presented to the analyst by incrementally revealing the features one at a time, in order, until the analyst has acquired enough information to make a decision about whether the point is an anomaly or not (e.g. in a security domain, threat or non-threat). The investigative work of the analyst is roughly related to the number of features that must be revealed. Hence, the goal for computing SFEs is to minimize the number of features that must be revealed in order for the analyst to confidently identify true anomalies.
Our second contribution is to formulate a quantitative evaluation methodology for evaluating SFEs, allowing for the comparison of different SFE algorithms. The key idea of the approach is to construct a simulated analyst for each anomaly detection benchmark using supervised learning and ground truth about which points are anomalies. The simulated analyst can then be used to evaluate the quality of SFEs with respect to the number of features that must be revealed to reach a specified confidence level. To the best of our knowledge this is the first methodology for quantitatively evaluating any type of anomaly explanation method.
Our third contribution is to define several algorithms for computing SFEs that can be applied to any density-based anomaly detector. The main requirement of the algorithms is that it is possible to (approximately) compute joint marginals of a detector’s density function, which is an operation that is supported for most commonly-used densities.
Finally, our fourth contribution is to provide an empirical investigation of several methods for computing SFEs. Our primary evaluations use a recently constructed set of anomaly detection benchmarks derived from real-world supervised learning data. In addition we provide an evaluation on the standard KDD-Cup benchmark. The investigation leads to a recommended method and additional insights into the methods.
The remainder of the paper is organized as follows. Section [2](#S2 "2 Related Work ‣ Sequential Feature Explanations for Anomaly Detection") reviews related work on explanations for both supervised learning and anomaly detection. Next, Section [3](#S3 "3 Anomaly Detection Formulation ‣ Sequential Feature Explanations for Anomaly Detection") presents the anomaly detection formulation used in this paper. Section [4](#S4 "4 Sequential Feature Explanations ‣ Sequential Feature Explanations for Anomaly Detection") then more formally presents the concept of SFEs and possible quality metrics. Section [5](#S5 "5 Explanation Methods ‣ Sequential Feature Explanations for Anomaly Detection") describes and contrasts several methods for computing SFEs. Section [6](#S6 "6 Framework for Evaluating Explanations ‣ Sequential Feature Explanations for Anomaly Detection") then introduces our quantitative evaluation framework for SFEs and finally Section [7](#S7 "7 Empirical Evaluation ‣ Sequential Feature Explanations for Anomaly Detection") presents experiments evaluating the introduced methods within the framework.
2 Related Work
---------------
The problem of computing explanations for both supervised learning and unsupervised settings, such as anomaly detection, has received relatively little attention. Related work in the area of supervised classification aims to provide explanations about why a classifier predicted a particular label for a particular instance. For example, a number of methods have been proposed to produce explanations in the form of relevance scores for each feature, which indicate the relative importance of a feature to the classification decision. Such scores have been computed by comparing the difference between a classifier’s prediction score and the score when a feature is assumed to be unobserved [[1](#bib.bib1)], or by considering the local gradient of the classifier’s prediction score with respect to the features for a particular instance [[2](#bib.bib2)].
Other work has considered how to score features in a way that takes into account the joint influence of feature subsets on the classification score, which usually requires approximations due to the exponential number of such subsets [[3](#bib.bib3), [4](#bib.bib4)]. Since these methods are typically based on the availability of a class-conditional probability function, they are not directly generalizable to computing explanations for anomaly detectors. Our experiments, however, do evaluate a method, called Dropout, which is inspired by the approach of [[1](#bib.bib1)].
The form of such feature-relevance explanations is similar in nature to our SFEs in that they provide an ordering on features. However, prior work has not explicitly considered the concept of sequentially revealing features to an analyst, which is a key part of the SFE proposal for reducing analyst effort.
Prior work on feature-based explanations for anomaly detection has focused primarily on computing explanations in the form of feature subsets. Such explanations are intended to specify the subset of features that are jointly responsible for an object receiving a high anomaly score. For example, Micenkova, et al. [[5](#bib.bib5)] computed a subset of features such that the projection of the anomalous object onto the features shows the greatest deviation from normal instances. One issue with this approach is that the computation of an explanation is independent of the anomaly detector being employed. This is contrary to the goal of trying to explain why a particular anomaly detector judged a particular object to be anomalous. In contrast, the explanation approaches we consider in this paper are sensitive to the particular anomaly detector.
Other work on computing feature-subset explanations [[6](#bib.bib6)] developed an anomaly detection system called LODI which includes a specialized explanation mechanism for the particular anomaly detector. A similar approach is considered by Dang, et al. [[7](#bib.bib7)], where the anomaly detection mechanism directly searches for discriminative subspaces that can be used for the purpose of explanation. In contrast, the explanation approaches we consider in this work can be instantiated for any anomaly detection scheme based on density estimation, which includes a large fraction of existing detectors.
Existing approaches for evaluating explanations methods in both supervised and unsupervised settings are typically quite limited in their scope. Often evaluations are limited to visualizations or illustrations of several example explanations [[2](#bib.bib2), [7](#bib.bib7)] or to testing whether a computed explanation collectively conforms to some known concept in the data set [[2](#bib.bib2)], often for synthetically generated data. Prior work has not yet proposed a larger scale quantitative evaluation methodology for explanations, which is one of the main contributions of our work.
3 Anomaly Detection Formulation
--------------------------------
We consider anomaly detection problems defined over a set of N data points {x1,…,xN}, where each point xi is an n dimensional real-valued vector. The set contains a mixture of *normal points* and *anomaly points*, where generally the normal points account for an overwhelming fraction of the data. In most applications of anomaly detection, the anomaly points are generated by a distinct process from that of the normal points, in particular, a process that is important to detect for the particular application. For example, the data points may describe the usage behavior of all users of a corporate computer network and the anomalies may correspond to insider threats.
Since N is typically large, manual search for anomalies through all points is generally not practical. Statistical anomaly detectors address this issue by seeking to identify anomalies by finding statistical outliers. The problem, however, is that not all outliers correspond to anomalies, and in practice an analyst must examine the outliers to decide which ones are likely to be anomalies. We say that an analyst *detects* an anomaly when he or she is presented with an anomaly point and is able to determine that there is enough evidence that the point is indeed an anomaly. The success of this approach depends on the anomaly detector’s precision in identifying anomalies as outliers, and also on the analysts’ ability to correctly detect anomalies. Without further assistance, an analyst may need to consider information related to all n features of an anomaly point during analysis. In many cases, considering this information thoroughly will be impossible, increasing the chance of not detecting anomalies, which can be costly in many domains.
4 Sequential Feature Explanations
----------------------------------
In order to reduce the analyst’s effort toward detecting anomalies, we propose to provide the analyst with *sequential feature explanations (SFEs)* that attempt to efficiently explain why a point was considered to be an outlier. A length k SFE for a point is an ordered list of feature indices E=(e1,…,ek), where ei∈{1,…,n}. The intention is that features that appear earlier in the order are considered to be more important to the high outlier score of a point (e.g. xe1 is the most important). We will use the notation Ei to denote the set of the first i feature indices of E. Also, for any set of feature indices S and a data point x, we let xS denote the projection of x onto the subspace specified by S.
Given an SFE E for a point x, the point is incrementally presented to the analyst by first presenting only feature xE1. If the analyst is able to make a judgement based on only that information then we are finished with the point. Otherwise, the next feature is added to the information given to the analyst, that is, the analyst now sees xE2. The process of incrementally adding features to the set of presented information continues until the analyst is able to make a decision. The process may also terminate early because of time constraints; however, we don’t study that case in this paper.
For normal points, the incremental presentation of SFEs may not help the analyst more efficiently exonerate the points. In contrast, for anomalies, it is reasonable to expect that an analyst would be able to detect the anomalies by considering a much smaller amount of information than without the SFE, which should reduce the chance of missed detections. We assume that the amount of analyst effort is a monotonically increasing function of the number of features considered. This motivates measuring the quality of an SFE for a target by the number of features that must be revealed to an analyst for correct detection. More formally, given an anomaly point x, an analyst a, and an SFE E for x, the *minimum feature prefix*, denoted MFP(x,a,E), is the minimum number of features that must be revealed to a, in the order specified by E, for a to detect x as an anomaly.
While MFP provides a quantitative measure of SFE quality, its definition requires access to an analyst. This complicates the comparison of SFE computation methods in terms of MFP. Section [6](#S6 "6 Framework for Evaluating Explanations ‣ Sequential Feature Explanations for Anomaly Detection") addresses this issue and describes an approach for conducting wide evaluations in terms of MFP.
5 Explanation Methods
----------------------
We now consider methods for computing SFEs for anomaly detectors. Prior work on computing explanations for anomaly detectors has either computed explanations that do not depend on the particular anomaly detector used (e.g. [[5](#bib.bib5)]) or used methods that were specific to a particular anomaly detector (e.g. [[6](#bib.bib6)]). We wish to avoid the former approach, since intuitively an explanation should attempt to indicate why the particular detector being employed found a point to be an outlier. Considering the latter approach, we seek more general methods that can be applied more widely across different detectors. Thus, here we consider explanation methods for the widely-studied class of *density-based detectors*.111Our methods can actually be employed on the more general class of “score-based detectors" provided that scores can be computed given any subset of features. For simplicity, we focus on density-based detectors in this paper, where the density function is used to compute scores.
Density-based detectors operate by estimating a probability density function f(x) (e.g. a Gaussian mixture) over the entire set of N points and treating f as the density over normal points. This is reasonable under the usual assumption that anomalies are very rare compared to normals. Points are then ranked according to ascending values of f(x) so that the least normal objects according to f are highest in the order. Our methods do not assume knowledge of the form of f, but do require an interface to f that allows for joint marginal values to be computed. That is, for any subset of feature indices S and point x, we require that we can compute f(xS). For many choices of f, such as mixtures of Gaussians, these joint marginals have simple closed forms. If no closed form is available, then exact or approximate inference techniques (e.g., MCMC) may be employed.
It is worth noting that by considering SFE methods that depend on the anomaly detector being used, the performance in terms of MFP will depend on the quality of the anomaly detector as well as the SFE method. For example, consider a situation where the anomaly detector judges an anomaly point x to be an outlier for reasons that are not semantically relevant to why x is an anomaly. The SFE for x is not likely to help the analyst to more efficiently determine that x is an anomaly, since the semantically critical features may appear late in the ordering. While this is a possibility, it is out of the control of the SFE method. Thus, when designing SFE methods we will assume that outlier judgements made by f are semantically meaningful with respect to the application. We now present our two main classes of SFE methods, which we refer to as *marginal methods* and *dropout methods*.
###
5.1 Marginal Methods
Here we consider modeling the analyst as a Bayesian classifier that assumes normal points are generated according to f and that anomalies have a uniform distribution u over the support of the feature space, a reasonable assumption in the absence of prior knowledge about the anomaly distribution. Given a point x, an SFE E, and a number of revealed features i, such an analyst would make the decision of whether x is an anomaly or not by comparing the likelihood ratio f(xEi)/u(xEi) to some threshold. Since u is assumed to be uniform, this is equivalent to comparing the joint marginal f(xEi) to a threshold. Intuitively this means that if our goal is to cause the analyst to quickly decide that x is an anomaly, we should choose an E that yields small values of f(xEi), particularly for small i.
This leads to our first SFE method, called *sequential marginal (SeqMarg)*. The SeqMarg method adds one feature to the SFE E=(e1,…,ek) at a time, at each step adding the feature that minimizes the joint marginal density with the previously-selected features. More formally, SeqM computes the following explanation:
**SeqMarg:** ei = argmin{j ∈ Ēi−1} f(xEi−1, xj)
where Ē denotes the complement of a feature-index set E. SeqMarg requires O(kn) joint marginal computations in order to compute an explanation of length k. Note that due to the inherent greediness of SeqMarg, xEi may not necessarily be the optimal set of i features for minimizing f. Rather, if the goal were to optimize for a particular value of i, we would need to consider all O(n^i) feature subsets of size i. However, our problem formulation does not provide us with a target value of i, and thus SeqMarg offers a more tractable approach that focuses on minimizing f as quickly as possible in a greedy manner.
In addition to SeqMarg we also consider a computationally cheaper alternative, called *independent marginal (IndMarg)*, which only requires the computation of individual marginals f(xi). This approach simply selects an explanation E for x by sorting the features in increasing order of f(xi). This only requires O(n) marginal computations for computing an explanation of any length. IndMarg offers a computationally cheaper alternative to SeqMarg, but fails to capture joint feature interactions. For example, SeqMarg will select ei in a way that optimizes the joint value when combined with previous features Ei−1. Instead, IndMarg ignores interactions with previously-selected features. Thus, IndMarg serves as a baseline for understanding the importance of accounting for joint feature interactions when computing explanations.
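A sketch of both marginal methods, assuming only a marginal(x, S) interface that returns the joint marginal density of x restricted to the feature-index list S:

```python
def ind_marg(x, marginal):
    """IndMarg: order features by ascending individual marginal f(x_i)."""
    return sorted(range(len(x)), key=lambda j: marginal(x, [j]))

def seq_marg(x, marginal, k):
    """SeqMarg: greedily append the feature that minimizes the joint
    marginal together with the features already chosen."""
    E, remaining = [], set(range(len(x)))
    for _ in range(k):
        best = min(remaining, key=lambda j: marginal(x, E + [j]))
        E.append(best)
        remaining.remove(best)
    return E
```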
###
5.2 Dropout Methods
The next two methods are inspired by the work of Robnik-Sikonja and Kononenko [[1](#bib.bib1)] on computing feature-relevance explanations for supervised classifiers. In their work, the relevance score for a feature is the difference between the classification score when the feature is provided to the classifier and the classification when the feature is omitted (“dropped out”). The analogous approach for anomaly detection is to score features according to the change in the density value when the feature is included and when the feature is not included, or marginalized out. This yields the first dropout method, referred to as *independent dropout (IndDO)*: given a point x, each feature is assigned a score of f(x−xi)−f(x), where we abuse notation and denote the removal of xi from x by x−xi. Intuitively, features with larger scores are ones that make the point appear most normal when removed. The SFE E is then obtained by sorting features in decreasing order of score.
We can also define a sequential version of dropout, by following the same recipe we considered for IndMarg versus SeqMarg. Let the *sequential dropout (SeqDO)* be defined as follows:
**SeqDO:** ei = argmax{j ∈ Ēi−1} f(xĒi−1 − xj).
This approach requires the same number of marginal computations as SeqMarg. This algorithm can be viewed as a dual of SeqMarg in that it measures the contribution of feature sets according to how much more normal a point looks after their removal, whereas SeqMarg measures how abnormal a point looks with only those features included.
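The dropout methods admit an analogous sketch under the same marginal(x, S) interface:

```python
def ind_do(x, marginal):
    """IndDO: score each feature by f(x - x_j) - f(x), sorted descending."""
    full = list(range(len(x)))
    fx = marginal(x, full)
    score = {j: marginal(x, [i for i in full if i != j]) - fx for j in full}
    return sorted(full, key=lambda j: -score[j])

def seq_do(x, marginal, k):
    """SeqDO: greedily drop the feature whose removal (on top of those
    already dropped) makes the remaining projection look most normal."""
    E, remaining = [], set(range(len(x)))
    for _ in range(k):
        best = max(remaining,
                   key=lambda j: marginal(x, [i for i in remaining if i != j]))
        E.append(best)
        remaining.remove(best)
    return E
```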
6 Framework for Evaluating Explanations
----------------------------------------
There are at least two challenges involved in evaluating anomaly-explanation methods. First, compared to supervised learning, the area of anomaly detection has many fewer established benchmark data sets, particularly benchmarks based on real-world data. Second, given a benchmark data set, it is not immediately clear how to quantitatively evaluate explanations, since the benchmarks do not come with either ground truth explanations or analysts.
Here we describe an evaluation framework that addresses both issues. We address the first issue by drawing on recent work on constructing large numbers of anomaly detection benchmarks based on real-world data. We address the second issue by using supervised learning to construct a simulated analyst that can be applied to quantitatively evaluate our explanations in terms of MFP. Below we expand on both of these points.
###
6.1 Anomaly Detection Benchmarks
Recent work [[8](#bib.bib8)] described a methodology for systematically creating anomaly detection benchmarks from supervised learning benchmarks (either classification or regression). Given the huge number of real-world supervised learning benchmarks, this allows for a corresponding huge number and diverse set of anomaly detection benchmarks. Further, these benchmarks can be created to have controllable and measurable properties, such as anomaly frequency and “clusteredness" of the normal and anomalous points. We briefly sketch the main idea. Given a supervised classification data set, called the *mother set*, the approach selects one or more of the classes to represent the anomaly class, with different choices giving rise to different properties of the anomaly class. The union of the other classes represents the normal class. Individual anomaly detection benchmarks are then created by sampling the normal and anomaly points at specified proportions.
Table [1](#S7.T1 "Table 1 ‣ 7.3 Evaluation on Benchmark Data Sets ‣ 7 Empirical Evaluation ‣ Sequential Feature Explanations for Anomaly Detection") gives a summary of the benchmarks from Emmott et al. [[8](#bib.bib8)] used in our experiments. For example, the UCI data set shuttle was used as a mother set to create 1600 distinct anomaly detection benchmarks. The number of points in the shuttle benchmarks range from 3570 to 9847. The number of anomalies ranges from 8 to 984.
###
6.2 Simulated Analyst
We consider modeling an analyst as a conditional distribution of the normal class given a subset of features from a data point. More formally we model the analyst as a function A(x,S)=P(normal|xS), which returns the probability that point x is normal considering only the features specified by the set S. We describe how we obtain this function in our experiments below. Given this function, a point x, and an SFE E for x, we can generate an *analyst certainty curve* that plots the analyst’s certainty after revealing i features, that is, A(x,Ei) versus i. Figure [1](#S6.F1 "Figure 1 ‣ 6.2 Simulated Analyst ‣ 6 Framework for Evaluating Explanations ‣ Sequential Feature Explanations for Anomaly Detection") shows an example of three analyst curves from our experiments using our simulated analysts on a benchmark computed from the UCI Abalone dataset. The curves each correspond to a different anomaly in the data set using explanations computed using SeqMarg. We see that the different anomalies lead to different rates at which the analyst becomes certain of the anomaly, that is, certain that the point is not normal.
Figure 1: Analyst Certainty Curves. These are example curves generated using our simulated analyst on anomalies from the Abalone benchmark using SFEs produced by SeqMarg. The x-axis shows the index of the feature revealed at each step and the y-axis shows the analyst certainty about the anomalies being normal. The leftmost curve shows an example where the analyst gradually becomes certain that the point is anomalous, while the middle curve shows more rapidly growing certainty. The rightmost curve is an example where the analyst is certain of the anomaly after the first feature is revealed and remains certain.
Recall that our proposed quality metric MFP(x,a,E) measures the number of features that must be revealed to analyst a according to SFE E in order for a to detect an anomaly x. Evaluating this metric requires that we define the conditions under which the analyst detects x. We model this by associating an analyst with a detection threshold τ∈[0,0.5] and saying that a detection occurs if A(x,Ei)≤τ, that is, the probability of normality becomes small enough. We will denote this analyst by a(τ). Given an a(τ) we can then compute the MFP for any anomaly point by recording the number of features required for the analyst certainty curve to first drop below τ.
Of course, there is no a priori basis for selecting a value of τ. Thus, in our experiments, we consider a discrete distribution over values for τ, P(τ), which models a range of reasonable thresholds. Given this distribution, we report the expected MFP—the expected value of MFP(x,a(τ),E)—as the quantitative measure of SFE E for anomaly x. In our experiments we define P(τ) to be uniform over the values 0.1, 0.2, and 0.3, noting that our results are consistent across a variety of reasonable choices for this distribution.
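A sketch of how the expected MFP can be computed from a single analyst certainty curve, using the uniform threshold distribution just described:

```python
def expected_mfp(certainty_curve, thresholds=(0.1, 0.2, 0.3)):
    """Average, over detection thresholds tau, of the (1-indexed) number of
    revealed features at which P(normal | revealed) first drops to tau or
    below. Returns None if some threshold is never reached."""
    def mfp(tau):
        for i, p in enumerate(certainty_curve, start=1):
            if p <= tau:
                return i
        return None
    prefixes = [mfp(tau) for tau in thresholds]
    return None if None in prefixes else sum(prefixes) / len(prefixes)

# Example: expected_mfp([0.9, 0.25, 0.15, 0.05]) == (2 + 3 + 4) / 3 == 3.0
```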
It remains to specify how we obtain the analyst function A(x,S). Since our anomaly detection benchmarks are each derived from a mother classification data set, we can construct a training set over those points for the anomaly and normal classes. Given this training set, one approach to obtaining the analyst would be to learn a generative model, or joint distribution P(normal,x), which could be used to compute A(x,S) by marginalizing out features not included in S. However, such generative models tend to be much less accurate in practice than discriminative models. On the other hand, learning a discriminative model P(normal|x) does not directly support computing the probability for arbitrary subsets of x as we require. While heuristics have been proposed for this purpose (e.g. Robnik-Sikonja and Kononenko [[1](#bib.bib1)]) we have found them to be unreliable when applied widely. Thus, in this work we follow a brute force approach. We simply pre-learn an individual discriminative model for each possible subset of features up to a maximum size k. Evaluating A(x,S) then simply requires evaluating the model associated with the subset S.
When the number of features or number of data points is very large, it may not be possible to pre-learn all possible subsets. In such cases, one option is to learn and cache models on the fly as they are needed during evaluation (each model would be learned only once). We used this approach for the KDD-Cup results reported in our experiments.
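The sketch below combines the brute-force and on-the-fly variants: one discriminative model is trained per feature subset and cached so each is learned only once. `make_model` is a hypothetical factory returning any scikit-learn-style classifier; the assumption that the normal class occupies column 0 of `predict_proba` would need checking against `model.classes_` in practice:

```python
from itertools import combinations

class SubsetAnalyst:
    """Analyst A(x, S) backed by one discriminative model per feature subset."""

    def __init__(self, X, y, make_model):
        self.X, self.y = X, y          # labeled data derived from the mother set
        self.make_model = make_model   # hypothetical classifier factory
        self.models = {}               # frozenset of feature indices -> model

    def _model_for(self, subset):
        key = frozenset(subset)
        if key not in self.models:     # learn and cache each model only once
            cols = sorted(key)
            model = self.make_model()
            model.fit(self.X[:, cols], self.y)
            self.models[key] = model
        return self.models[key]

    def prob_normal(self, x, subset):
        """A(x, S): probability of the normal class given the features in S."""
        cols = sorted(frozenset(subset))
        # Assumes the normal class is column 0 of predict_proba.
        return self._model_for(subset).predict_proba(x[cols].reshape(1, -1))[0, 0]

    def pretrain(self, n_features, k):
        """Brute-force variant: pre-learn all subsets up to maximum size k."""
        for size in range(1, k + 1):
            for subset in combinations(range(n_features), size):
                self._model_for(subset)
```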
7 Empirical Evaluation
-----------------------
We now present our empirical evaluation on anomaly detection benchmarks from Emmott et al. [[8](#bib.bib8)] and the commonly used KDDCup anomaly detection benchmark.
### 7.1 Anomaly Detector
For all of our experiments, we have chosen to use the Ensemble Gaussian Mixture Model (EGMM) as the anomaly detector. This detector was first described in Emmott et al. [[8](#bib.bib8)] and was shown to be a competitive density-based approach across a wide range of benchmarks. EGMM is based on learning a density function f(x) represented as an ensemble of Gaussian mixture models (GMMs). The approach independently learns M GMM models by training each one using the Expectation-Maximization (EM) procedure on bootstrap replicates of the data set. It then discards any low-likelihood GMMs, retaining the others according to a pre-specified threshold. The number of components of the GMMs is varied across the ensemble. In our experiments, the ensembles included 45 GMMs, 15 each using 3, 4, and 5 components. The final EGMM density f(x) is simply a uniform mixture of the densities of the retained GMMs. The EGMM approach addresses at least two pitfalls of using single GMM models. First, EM training can sometimes produce poor models due to bad local optima. Second, it is difficult to select the best number of components for a single model. EGMM gains robustness by performing model averaging over these variations.
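A minimal sketch of the EGMM construction described above, using scikit-learn’s `GaussianMixture`; the thresholding step that discards low-likelihood GMMs is omitted for brevity:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_egmm(X, component_counts=(3, 4, 5), models_per_count=15, seed=0):
    """Fit 45 GMMs (15 each with 3, 4, 5 components) on bootstrap replicates."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for k in component_counts:
        for _ in range(models_per_count):
            boot = X[rng.integers(0, len(X), size=len(X))]  # bootstrap replicate
            ensemble.append(GaussianMixture(n_components=k).fit(boot))
    # (The pruning of low-likelihood GMMs against a threshold is omitted here.)
    return ensemble

def egmm_density(ensemble, X):
    """f(x): uniform mixture of the retained GMM densities."""
    # score_samples returns log densities; average the densities themselves.
    return np.mean([np.exp(g.score_samples(X)) for g in ensemble], axis=0)
```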
One advantage of using the EGMM model is that it is straightforward to derive closed forms for the marginal density computations required by our explanation methods. In particular, the overall EGMM density f can be viewed as a single large GMM containing a mixture of all components across the ensemble. Since individual Gaussians have simple closed forms for marginal densities [[9](#bib.bib9)], we can easily obtain closed forms for the mixture. It is worth noting that closed forms can also be derived for EGMM marginals when the data points are transformed by linear projections to reduce dimensionality (e.g. principal component analysis).
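As a sketch of the closed form: each Gaussian component N(μ, Σ) marginalizes over a feature subset S to N(μ_S, Σ_SS), with the mixture weights unchanged, so a GMM’s marginal is itself a GMM. Assuming a fitted scikit-learn `GaussianMixture` with full covariances:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_marginal_density(gmm, x, subset):
    """Marginal density of x restricted to `subset` under a fitted GMM."""
    S = sorted(subset)
    return sum(
        w * multivariate_normal.pdf(x[S], mean=mu[S], cov=cov[np.ix_(S, S)])
        for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_)
    )
```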
### 7.2 Simulated Expert
Recall that our evaluation framework is based on using supervised learning in order to obtain a simulated analyst. Our experiments are based on using Regularized Random Forests (RRFs) [[10](#bib.bib10)] as the analyst model. The RRF model was selected for two primary reasons. First, RRFs are well-known to provide high accuracies that are competitive with the state-of-the-art across a wide range of classification problems. Second, RRFs are relatively efficient to train, which is important to our study, since we must train one RRF for each possible subset of features (up to some maximum size). We trained RRFs composed of 500 trees using 10-fold cross-validation in order to tune the RRF regularization parameters.
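Scikit-learn has no standard RRF implementation, so the sketch below substitutes a plain `RandomForestClassifier` with 500 trees; the regularization grid is a placeholder assumption, not the paper’s actual tuning grid:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def train_analyst_model(X_subset, y, param_grid=None):
    """Train one analyst model on a given feature subset of the mother set."""
    param_grid = param_grid or {"min_samples_leaf": [1, 5, 10]}  # placeholder
    search = GridSearchCV(
        RandomForestClassifier(n_estimators=500),  # 500 trees, as in the paper
        param_grid,
        cv=10,  # 10-fold cross-validation for parameter tuning
    )
    search.fit(X_subset, y)
    return search.best_estimator_
```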
It is worth noting that our evaluation framework is potentially sensitive to the choice of analyst model, since different models will have different biases. It was beyond the scope of this first study to replicate all experiments using a qualitatively different model. This will be a point of future work.
### 7.3 Evaluation on Benchmark Data Sets
| Mother Set | Original Problem Type | #Features | # of Anomaly Benchmarks | # of Points per Benchmark (range) | # of Anomalies per Benchmark (range) |
| --- | --- | --- | --- | --- | --- |
| magic.gamma | Binary | 10 | 1600 | 600 - 6180 | 5 - 618 |
| skin | Binary | 3 | 1200 | 10 - 9323 | 1 - 932 |
| shuttle | Multiclass | 9 | 1600 | 3570 - 9847 | 8 - 984 |
| yeast | Multiclass | 8 | 1600 | 70 - 1000 | 1 - 97 |
| abalone | Regression | 7 | 1600 | 580 - 2095 | 1 - 209 |
| concrete | Regression | 8 | 1200 | 190 - 1000 | 1 - 51 |
| wine | Regression | 11 | 1600 | 2590 - 4112 | 3 - 411 |
Table 1: Summary of the benchmark datasets
We run our evaluation on anomaly detection benchmarks, from Emmott et al. [[8](#bib.bib8)], derived from seven UCI mother sets. A summary of the benchmarks is given in Table [1](#S7.T1 "Table 1 ‣ 7.3 Evaluation on Benchmark Data Sets ‣ 7 Empirical Evaluation ‣ Sequential Feature Explanations for Anomaly Detection"). There are over 10,000 benchmarks in total, which contain a number of points ranging from 10 to 9800 and a number of anomalies ranging from 1 to 930. An EGMM model was fit for each of the benchmarks to serve as the anomaly detector, and RRF models were trained for each mother set on all possible feature subsets. For this first study, we have chosen to focus on benchmarks with relatively small dimensionality in order to allow for a large-scale study, which requires training large numbers of EGMM models (over 10,000) and RRF analyst models. All data from these experiments, including the analyst models, will be made publicly available.

Figure 2: Performance of explanation methods on benchmarks. Each group of bars shows the performance of the six methods on benchmarks derived from a single mother set. The bars show the expected MFP averaged across anomalies in benchmarks for the corresponding mother set. 95% confidence intervals are also shown.

Figure 3: Performance of explanation methods on benchmarks when using an oracle anomaly detector. Each group of bars shows the performance of the six methods on benchmarks derived from a single mother set. The bars show the expected MFP averaged across anomalies in benchmarks for the corresponding mother set. 95% confidence intervals are also shown.
We evaluated six methods for computing SFEs. These included the four methods from Section [5](#S5 "5 Explanation Methods ‣ Sequential Feature Explanations for Anomaly Detection"): SeqMarg, IndMarg, SeqDO, and IndDO. In addition, we evaluated a random explanation method. In the case of random, we report the average performance across 100 randomly generated SFEs.
Finally, in order to provide a lower bound on attainable MFP (lower is better), we consider an optimal oracle method, *OptOracle*. This method is allowed access to the simulated analyst and, for each number of features i, computes the optimal feature subset of size i. More formally, for each value of i, OptOracle finds the feature subset Si that minimizes the analyst’s conditional probability P(normal|xSi). The MFP achieved by OptOracle for an anomaly x, given a particular analyst threshold τ (recall Section [6](#S6 "6 Framework for Evaluating Explanations ‣ Sequential Feature Explanations for Anomaly Detection")), is the minimum value of i such that P(normal|xSi)<τ. Note that OptOracle is not constrained to produce "sequential explanations": it can produce an Si that does not necessarily contain Si−1. This gives OptOracle an additional advantage over the other methods, which are constrained to produce SFEs. Clearly, the MFP achieved by OptOracle is a lower bound on that of any SFE method evaluated with respect to the simulated analyst.
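A brute-force sketch of OptOracle under a given threshold τ; the exhaustive search over subsets is exponential in the number of features and is only feasible at the small dimensionalities studied here (`analyst_prob` is the same hypothetical interface as in the earlier sketches):

```python
from itertools import combinations

def opt_oracle_mfp(x, n_features, analyst_prob, tau):
    """Smallest i such that some size-i subset S_i has P(normal | x_{S_i}) < tau."""
    for i in range(1, n_features + 1):
        best = min(analyst_prob(x, subset)
                   for subset in combinations(range(n_features), i))
        if best < tau:
            return i
    return n_features + 1  # never detected -- an assumed convention
```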
For each of the 10,000 benchmarks, we used the corresponding EGMM model to rank the points. For the anomaly points ranked in the top 10%, we computed SFEs using each of the six methods. This choice is an attempt to model the fact that, in actual operation, only highly ranked anomalies will be presented to the expert.
The expected MFP was computed for each SFE using a distribution over analyst thresholds that was uniform over the values 0.1, 0.2, and 0.3. For each mother set, we then report the average MFP across the anomalies derived from that mother set. These average MFPs are shown in Figure [2](#S7.F2 "Figure 2 ‣ 7.3 Evaluation on Benchmark Data Sets ‣ 7 Empirical Evaluation ‣ Sequential Feature Explanations for Anomaly Detection") along with 95% confidence intervals.
We first note that our observations below are not sensitive to the choice of focusing on anomalies in the top 10%. Indeed, we have also compiled results for other percentage points, including using all anomalies. The main observations are qualitatively similar across all of these choices.
Comparison to Random and OptOracle. We observe in Figure [2](#S7.F2 "Figure 2 ‣ 7.3 Evaluation on Benchmark Data Sets ‣ 7 Empirical Evaluation ‣ Sequential Feature Explanations for Anomaly Detection") that all of the SFE methods outperform random explanations and often do so by a large margin. Comparing to OptOracle we see that, for three benchmarks—concrete, yeast, and wine—the lower bound provided by OptOracle is significantly better than our best SFE method. This gap could be due to: 1) suboptimal SFE computations, 2) a poor match between the anomaly detector’s notion of outlier and the analyst’s notion of anomaly, or 3) the fact that OptOracle is not constrained to output sequential explanations. We will investigate this further below.
For the remaining four mother sets, we see that the marginal methods are quite close to the lower bound of OptOracle, though there is still some room for improvement. Finally, it is worth noting that OptOracle is able to achieve MFPs of close to 1 for most of the mother sets. Thus, on average, for these data sets, a single feature is sufficient to allow for correct analyst detections.
Independent versus Sequential. It is reasonable to expect that the sequential versions of the marginal and dropout methods will outperform the independent versions, since the sequential versions account more aggressively for feature interaction when computing SFEs, at the cost of additional computation time. However, we see that overall there is very little difference in performance between the independent and sequential methods. That is, SeqMarg and IndMarg (as well as SeqDO and IndDO) achieve nearly identical performance. The only exception is on magic.gamma, where there is a small, but statistically significant, advantage (according to a paired t-test) of SeqMarg over IndMarg. One possible explanation for these results is that feature interactions are not critical in these domains for detecting anomalies. This explanation is supported by the fact that OptOracle is able to achieve average MFPs close to one.
Marginal versus Dropout. Recall that the marginal and dropout methods are dual approaches. Marginal evaluates a set of features in terms of how abnormal those features alone make a point appear, while dropout evaluates a set by the increase in normality score when the features are removed. We see that overall the marginal methods are never significantly worse than dropout and significantly better on abalone, magic.gamma, shuttle, and skin. The difference is particularly large on shuttle, where the marginal methods are close to OptOracle and the dropout methods are closer to random.
One possible explanation is that we have observed that dropout often produces a “weaker signal" than marginal when making early decisions. For example, when considering single features, the differences in scores produced by dropout for those features are often much smaller than the differences produced by marginal. This can make dropout less robust for early decisions, which are the most important ones for achieving small MFP scores. Recall that the dropout method was inspired by prior work on explanations for supervised learning. The results here suggest that it is worth investigating adaptations of marginal to the supervised setting.
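The formal definitions of these methods live in Section 5, which is not reproduced here; purely as an illustrative reading of the duality described above, the two scoring rules for a single candidate feature set S might look as follows, given a marginal-density oracle (e.g. derived from EGMM):

```python
def marginal_score(x, S, marginal_density, all_features):
    """Marginal: how anomalous do the features in S look on their own?
    A lower marginal density of x_S means S better explains the anomaly."""
    return -marginal_density(x, S)

def dropout_score(x, S, marginal_density, all_features):
    """Dropout: how much does the normality score rise when S is removed?"""
    rest = [f for f in all_features if f not in S]
    return marginal_density(x, rest) - marginal_density(x, list(all_features))
```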
### 7.4 Comparing Methods with Oracle Detectors
Since the SFE methods make their decisions based on the anomaly detector’s density function f, the results above reflect both the SFE methods and the quality of the detector. Here we attempt to factor out the performance of the SFE methods themselves by supplying the methods with an oracle anomaly detector. To do this we simply replace the use of f with the simulated analyst’s conditional probability function P(normal|xS), which we can compute for any feature subset S. For example, the first feature selected by SeqMarg is the xi that minimizes P(normal|xi). Note that this is also the first feature that would be selected by OptOracle. Unlike OptOracle, however, SeqMarg is sequentially constrained and will select the second feature as the one that works best when combined with the first selected feature.
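A sketch of SeqMarg\* as described: greedy sequential selection driven directly by the simulated analyst’s conditional probability (the `analyst_prob` name is an assumption carried over from the earlier sketches):

```python
def seq_marg_star(x, n_features, analyst_prob, max_len=None):
    """Greedily grow the explanation, minimizing P(normal | revealed so far)."""
    max_len = max_len or n_features
    explanation, remaining = [], set(range(n_features))
    while remaining and len(explanation) < max_len:
        best = min(remaining, key=lambda f: analyst_prob(x, explanation + [f]))
        explanation.append(best)
        remaining.remove(best)
    return explanation
```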
Figure [3](#S7.F3 "Figure 3 ‣ 7.3 Evaluation on Benchmark Data Sets ‣ 7 Empirical Evaluation ‣ Sequential Feature Explanations for Anomaly Detection") shows results for all methods using the oracle detectors. We use a ‘\*’ to indicate that a method is using an oracle detector, for example, SeqMarg\* is the oracle version of SeqMarg.
Comparison to OptOracle. The primary observation is that SeqMarg\* performs nearly identically to OptOracle in all but one domain. Any difference between SeqMarg\* and OptOracle reflects the loss in performance due to requiring sequential explanations; for these data sets, there is little to no loss. This is good news, since the motivation for considering sequential explanations is to reduce the analyst’s effort. In particular, the sequential constraint means that the analyst is shown an incrementally growing set of information. In contrast, without the constraint, OptOracle could potentially show completely different sets of features from step to step, which is arguably less desirable from a usability perspective.
Independent versus Sequential. Here, we see that SeqMarg\* often outperforms IndMarg\*, sometimes by significant amounts. This is in contrast to the results obtained when using EGMM as the anomaly detector. This observation indicates that reasoning about feature interactions, as done by SeqMarg\*, can be important with higher-quality anomaly detection models. This leaves an open question of whether we will be able to observe this advantage when using non-oracle anomaly detection models on realistic benchmarks.
Dropout versus Marginal. The marginal methods show consistently better performance when using oracle detectors. The performance gap is quite large in several of the benchmarks. This provides evidence that the marginal approach is generally a better way of computing SFEs. Again we hypothesize that this is due to the “weak signal" during early decisions observed for the dropout method.
### 7.5 Evaluation on KDDCup’99 Data set
We now show results on the UCI KDDCup intrusion detection benchmark [[11](#bib.bib11)]. The points in this data set have 41 features, and we consider a subset of the data containing instances involving the http service. The resulting benchmark contains approximately 620K points, with approximately 4K anomaly points representing network intrusions. We again employed EGMM as the anomaly detector. It was infeasible to train a simulated analyst on all feature subsets, so we followed the adaptive approach described in Section [6](#S6 "6 Framework for Evaluating Explanations ‣ Sequential Feature Explanations for Anomaly Detection"), where only the subset of models required during the evaluation process was learned and cached. Overall this resulted in approximately 7.5K RRF models being trained. In this domain, the EGMM model was quite effective and ranked all anomalies very close to the top of the ranked list. Thus, we evaluate on all anomalies in this domain.
Figure [4](#S7.F4 "Figure 4 ‣ 7.5 Evaluation on KDDCup’99 Data set ‣ 7 Empirical Evaluation ‣ Sequential Feature Explanations for Anomaly Detection") shows the average MFP achieved by our methods. It is clear that the marginal methods are significantly better than the dropout methods here. In particular, both SeqMarg and IndMarg achieve an average MFP close to one, which is the smallest possible. This indicates that the combination of EGMM and marginal explanations is very effective in this domain. In particular, the simulated analyst only needed to be shown a single feature on average in order to correctly detect the anomalies.
We again hypothesize that the much weaker performance of the dropout methods is due to the “weak signal" they provide for early decisions. This problem is only amplified in the context of larger numbers of features, as is the case for the KDDCup data.

Figure 4: Performance of different explanation methods on the KDDCup benchmark. 95% confidence intervals are also shown.
8 Main Observations
--------------------
The main observations from the above experiments can be summarized as follows.
* All of the introduced SFE methods significantly outperformed randomly generated SFEs.
* The marginal methods were generally no worse and sometimes significantly better than the dropout methods.
* When using the EGMM anomaly detector, we observed little to no difference between the performance of sequential versus independent methods.
* When using the oracle anomaly detector, SeqMarg significantly outperformed IndMarg, which suggests that in general sequential methods can outperform independent methods.
* Overall, based on our results, SeqMarg is the recommended method for computing SFEs, among the methods we studied.
9 Summary
----------
This paper introduced the concept of sequential feature explanations (SFEs) for anomaly detection. The main motivation was to reduce the amount of effort of an analyst that is required to correctly detect anomalies. We described several methods for computing SFEs and introduced a new framework that allows for large-scale quantitative evaluation of explanation methods. Our experiments indicated that, overall, the Sequential Marginal method for computing SFEs is the preferred method among those introduced in this paper. |
f64cb9e5-0415-480f-a088-9ef58aa89732 | trentmkelly/LessWrong-43k | LessWrong | Three Levels for Large Language Model Cognition
This is the abridged version of my second dissertation chapter. Read the first here.
Thanks to everyone I've discussed this with, and especially, M.A. Khalidi, Lewis Smith, and Aysja Johnson.
TL;DR: Applying Marr's three levels to LLMs seems useful, but quickly proves itself to be a leaky abstraction. Despite the porousness, can we agree on what kinds of explanations we'd find at each level?
1. Background
When I think about the alignment problem, I typically ask the following question: what kinds of explanations would we need to say that we understand a system sufficiently well to control it? Because I don't know the answer to that question (and the philosophy of science vortex is trying to devour me), I expect to make some progress if I look at the explanations we have available, taxonomize them, and maybe even find what they're missing. This is where David Marr's framework comes in.
2. What are the three levels?
According to Marr (1982), we can understand a cognitive system through an analysis of three levels: (1) computational theory, (2) representation and algorithm, and (3) hardware implementation.
* (1) corresponds to the goal of the computation, what the exact task is.
* (2) describes the steps required to make the function happen, contains the exact series of computations of the function.
* (3) concerns the physical realization of the task in a given hardware.
From Marr's Vision: A Computational Investigation, 1982, p. 25.
Marr's problem at the time was that mere descriptions of phenomena or pointing to parts of the network could not sufficiently explain cognitive functions such as vision. Surely, finding something like the "grandma neuron" in the network was a breakthrough in its own right. But it wasn't an explanation; it didn't say anything about the how or the why behind the phenomenon. Some attempts in LLM interpretability have a similar flavor. For example, one answer to the question "where do concepts live" is to point to vectors an |
da63f294-912a-4955-b798-53ee3d8cfc19 | trentmkelly/LessWrong-43k | LessWrong | Announcing Epoch's newly expanded Parameters, Compute and Data Trends in Machine Learning database
The performance of machine learning models is closely related to their amount of training data, compute, and number of parameters. At Epoch, we’re investigating the key inputs that enable today’s AIs to reach new heights.
Our recently expanded Parameter, Compute and Data Trends database traces these details for hundreds of landmark ML systems and research papers.
In the past six months, we’ve added 240 new language models and 170 compute estimates. We will be maintaining this dataset, updating it with more historical information, and adding new significant releases. It's a valuable resource for journalists, academics, policymakers, and anyone interested in understanding the trajectory of AI.
Explore the interactive visualization, check out the documentation, and access the data for your own research at epochai.org/data/pcd. |
921340fe-0b71-4c72-a430-745bade987c9 | trentmkelly/LessWrong-43k | LessWrong | Can noise have power?
One of the most interesting debates on Less Wrong that seems like it should be definitively resolvable is the one between Eliezer Yudkowsky, Scott Aaronson, and others on The Weighted Majority Algorithm. I'll reprint the debate here in case anyone wants to comment further on it.
In that post, Eliezer argues that "noise hath no power" (read the post for details). Scott disagreed. He replied:
> ...Randomness provably never helps in average-case complexity (i.e., where you fix the probability distribution over inputs) -- since given any ensemble of strategies, by convexity there must be at least one deterministic strategy in the ensemble that does at least as well as the average.
>
> On the other hand, if you care about the worst-case running time, then there are settings (such as query complexity) where randomness provably does help. For example, suppose you're given n bits, you're promised that either n/3 or 2n/3 of the bits are 1's, and your task is to decide which. Any deterministic strategy to solve this problem clearly requires looking at 2n/3 + 1 of the bits. On the other hand, a randomized sampling strategy only has to look at O(1) bits to succeed with high probability.
>
> Whether randomness ever helps in worst-case polynomial-time computation is the P versus BPP question, which is in the same league as P versus NP. It's conjectured that P=BPP (i.e., randomness never saves more than a polynomial). This is known to be true if really good pseudorandom generators exist, and such PRG's can be constructed if certain problems that seem to require exponentially large circuits, really do require them (see this paper by Impagliazzo and Wigderson). But we don't seem close to proving P=BPP unconditionally.
Eliezer replied:
> Scott, I don't dispute what you say. I just suggest that the confusing term "in the worst case" be replaced by the more accurate phrase "supposing that the environment is an adversarial superintelligence who can perfectly read all of your mind |
3a162544-77e7-43dd-998c-de08cb56bad0 | trentmkelly/LessWrong-43k | LessWrong | Eliezer Yudkowsky Facts
* Eliezer Yudkowsky was once attacked by a Moebius strip. He beat it to death with the other side, non-violently.
* Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but another brain.
* Eliezer Yudkowsky's favorite food is printouts of Rice's theorem.
* Eliezer Yudkowsky's favorite fighting technique is a roundhouse dustspeck to the face.
* Eliezer Yudkowsky once brought peace to the Middle East from inside a freight container, through a straw.
* Eliezer Yudkowsky once held up a sheet of paper and said, "A blank map does not correspond to a blank territory". It was thus that the universe was created.
* If you dial Chaitin's Omega, you get Eliezer Yudkowsky on the phone.
* Unless otherwise specified, Eliezer Yudkowsky knows everything that he isn't telling you.
* Somewhere deep in the microtubules inside an out-of-the-way neuron somewhere in the basal ganglia of Eliezer Yudkowsky's brain, there is a little XML tag that says awesome.
* Eliezer Yudkowsky is the Muhammad Ali of one-boxing.
* Eliezer Yudkowsky is a 1400 year old avatar of the Aztec god Aixitl.
* The game of "Go" was abbreviated from "Go Home, For You Cannot Defeat Eliezer Yudkowsky".
* When Eliezer Yudkowsky gets bored, he pinches his mouth shut at the 1/3 and 2/3 points and pretends to be a General Systems Vehicle holding a conversation among itselves. On several occasions he has managed to fool bystanders.
* Eliezer Yudkowsky has a swiss army knife that has folded into it a corkscrew, a pair of scissors, an instance of AIXI which Eliezer once beat at tic tac toe, an identical swiss army knife, and Douglas Hofstadter.
* If I am ignorant about a phenomenon, that is not a fact about the phenomenon; it just means I am not Eliezer Yudkowsky.
* Eliezer Yudkowsky has no need for induction or deduction. He has perfected the undiluted master art of duction.
* There was no ice age. Eliezer Yudkowsky just persuaded the planet to sign up for cryonics.
* There is no spacetime symme |
a743f3bd-2832-4a24-9867-7f64d0d2dddd | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Markets are Anti-Inductive
I suspect there's a *Pons Asinorum* of probability between the bettor who thinks that you make money on horse races by betting on the horse you think will win, and the bettor who realizes that you can only make money on horse races if you find horses whose odds seem *poorly calibrated* relative to superior probabilistic guesses.
There is, I think, a second *Pons Asinorum* associated with more advanced finance, and it is the concept that markets are an *anti*-inductive environment.
Let's say you see me flipping a coin. It is not necessarily a fair coin. It's a biased coin, and you don't know the bias. I flip the coin nine times, and the coin comes up "heads" each time. I flip the coin a tenth time. What is the probability that it comes up heads?
If you answered "ten-elevenths, by Laplace's Rule of Succession", you are a fine scientist in ordinary environments, but you will lose money in finance.
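(For the arithmetic: Laplace's Rule of Succession says that after s successes in n trials, the probability of success on the next trial is (s + 1)/(n + 2). With nine heads in nine flips, that's (9 + 1)/(9 + 2) = 10/11.)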
In finance the correct reply is, "Well... if everyone *else* also saw the coin coming up heads... then by *now* the odds are probably back to fifty-fifty."
Recently on Hacker News I saw a commenter insisting that stock prices had nowhere to go but down, because the economy was in such awful shape. If stock prices have nowhere to go but down, and everyone knows it, then trades won't clear - remember, for every seller there must be a buyer - until prices have gone down far enough that there is once again a possibility of prices going *up*.
So you can see the bizarreness of someone saying, "Real estate prices have gone up by 10% a year for the last N years, and we've never seen a drop." This treats the market like it was the mass of an electron or something. *Markets are anti-inductive.* *If, historically, real estate prices have **always gone up**, they will keep rising **until they can go down**.*
To get an excess return - a return that pays premium interest over the going rate for that level of riskiness - you need to know something that other market participants don't, or they will rush in and bid up whatever you're buying (or bid down whatever you're selling) until the returns match prevailing rates.
If the economy is awful and everyone knows it, no one's going to buy at a price that doesn't take into account that knowledge. If there's an obvious possibility of prices dropping further, then the market must also believe there's a probability of prices rising to make up for it, or the trades won't clear.
This elementary point has all sorts of caveats I'm not bothering to include here, like the fact that "up" and "down" is relative to the [risk-free](/lw/hy/riskfree_bonds_arent/) interest rate and so on. Nobody believes the market is really "efficient", and recent events suggest it is less efficient than previously believed, and I have a certain friend who says it's even less efficient than that... but still, the market does not leave hundred-dollar-bills on the table if *everyone believes in them.*
There was a time when the Dow systematically tended to drop on Friday and rise on Monday, and once this was noticed and published, the effect went away.
*Past history*, e.g. "real estate prices have always gone up", *is not private info.*
And the same also goes for more complicated regularities. Let's say two stock prices are historically anticorrelated - the variance in their returns moves in opposite directions. *As soon as everyone believes this,* hedge-fund managers will leverage up and buy both stocks. Everyone will do this, meaning that both stocks will rise. As the stocks rise, their returns get more expensive. The hedge-fund managers book profits, though, because their stocks are rising. Eventually the stock prices rise to the point they can go down. Once they do, hedge-fund managers who got in late will have to liquidate some of their assets to cover margin calls. This means that both stock prices will go down - at the same time, even though they were originally anticorrelated. Other hedge funds may lose money on the same two stocks and also sell or liquidate, driving the price down further, etcetera. The correlative structure behaves anti-inductively, because other people can observe it too.
If mortgage defaults are historically uncorrelated, so that you can get an excess return on risk by buying lots of mortgages and pooling them together, then people will rush in and buy lots of mortgages until (a) rates on mortgages are bid down (b) individual mortgage failure rates rise (c) mortgage failure rates become more correlated, possibly looking uncorrelated in the short-term but having more future scenarios where they all fail at once.
Whatever is believed in, stops being real. The market is literally *anti-inductive* rather than *anti-regular* - it's the regularity that enough participants *induce,* which therefore goes away.
This, as I understand it, is the *standard theory* of "efficient markets", which should perhaps have been called "inexploitable markets" or "markets that are not easy to exploit because others are already trying to exploit them". Should I have made a mistake thereof, let me be corrected.
Now it's not surprising, on the one hand, to see this screwed up in random internet discussions where a gold bug argues from well-known observations about the *past history* of gold. (This is the equivalent of trying to make money at horse-racing by betting on the horse that you think will win - failing to cross the *Pons Asinorum.*)
But it *is* surprising to hear histories of the financial crisis in which prestigious actors argued in *crowded auditoriums* that, *previously*, real-estate prices had always gone up, or that *previously* mortgage defaults had been uncorrelated. This is naive inductive reasoning of the sort that only works on falling apples and rising suns and human behavior and everything else in the universe except markets. Shouldn't everyone have frowned and said, "But isn't the marketplace an anti-inductive environment?"
Not that this is standard terminology - but perhaps "efficient market" doesn't convey quite the same warning as "anti-inductive". We would appear to need stronger warnings.
**PS:** To clarify, the coin example is a humorous exaggeration of what the world would be like if most physical systems behaved the same way as market price movements, illustrating the point, "An exploitable pricing regularity that is easily inducted degrades into inexploitable noise." Here the coin coming up "heads" is analogous to getting an above-market return on a publicly traded asset. |
0c1be24c-63b4-4e52-8017-4b264529a539 | trentmkelly/LessWrong-43k | LessWrong | More on polio and randomized clinical trials
My post and Twitter thread about the controversy over the 1954 polio vaccine trials generated many replies on Twitter, so here is a followup.
First, I’m very sympathetic to the dilemma that Salk faced. I think it’s a tough problem, and it’s worth thinking about different ways to approach it. I didn’t mean to cast aspersions on Salk.
One way in general to improve this situation is to make sure that all the controls get the treatment immediately after the trial, if it is proved safe and effective. But in this case, that wouldn’t have changed anything. Polio was a seasonal disease, peaking each summer. Getting the vaccine after the trial meant getting it the next season.
Some people have suggested that trials can be ended early if the data clearly shows a conclusion. This is true, although it’s trickier than it appears—if you do it in a naive way, you are prone to reaching false conclusions. The statistics of doing this properly is sophisticated. This is also difficult in the case of a vaccine, where the outcome is binary (you get the disease or you don’t) and you have to wait for a certain period of exposure. This wasn’t like a blood pressure medication, where you can constantly measure a continuous variable.
One idea occurred to me that I haven’t heard anyone suggest: the trial didn’t have to be 50-50. With a large enough group, you could hold back a smaller subset as the control (80-20?). Again, you need statistics here to tell you how this affects the power of your test.
Returning to the issue of: was an RCT needed at all? Again it’s a tough call, but I still think it was, for two reasons: one scientific/epistemological, and one social/political.
Epistemologically, it’s easy to say in hindsight that an observed-control trial would have been conclusive. Here’s the data (copied from Oshinsky’s book):
Placebo-control areas:
• Vaccinated: 200,745 subjects / 33 cases = 1 per 6,083
• Placebo: 201,229 subjects / 115 cases = 1 per 1,750
Observed-control areas:
• |
67e85e65-703f-4bdc-b50b-8c0c47e02ff2 | trentmkelly/LessWrong-43k | LessWrong | Safety via selection for obedience
In a previous post, I argued that it’s plausible that “the most interesting and intelligent behaviour [of AGIs] won’t be directly incentivised by their reward functions” - instead, “many of the selection pressures exerted upon them will come from emergent interaction dynamics”. If I’m right, and the easiest way to build AGI is using open-ended environments and reward functions, then we should be less optimistic about using scalable oversight techniques for the purposes of safety - since capabilities researchers won’t need good oversight techniques to get to AGI, and most training will occur in environments in which good and bad behaviour aren't well-defined anyway. In this scenario, the best approach to improving safety might involve structural modifications to training environments to change the emergent incentives of agents, as I’ll explain in this post.
My default example of the power of structural modifications is the evolution of altruism in humans. Consider Fletcher and Doebeli’s model of the development of altruism, which relies on assortment in repeated games - that is, when players with a tendency to cooperate end up playing together more often than random chance predicts. In humans, some of the mechanisms which lead to assortment are:
* Kin recognition: we can tell who we share genes with.
* Observation of intentions or previous behaviour: these give us evidence about other agents’ future behaviour.
* Costly signalling: this can allow us to reliably demonstrate our future altruism.
* Communication of observed information: once one person has made an observation, it can be shared widely.
* Flexible interactions: we can choose who to assort with in different interactions.
I claim that, given this type of understanding of the evolution of altruism, we can identify changes to high-level properties of the human ancestral environment which would have made humans significantly more altruistic. For example, human cognition is not very transparent, and so i |
d21c8a39-0578-4421-93a4-fce3bfd6f934 | trentmkelly/LessWrong-43k | LessWrong | How my math skills improved dramatically
When I was a freshman in high school, I was a mediocre math student: I earned a D in second semester geometry and had to repeat the course. By the time I was a senior in high school, I was one of the strongest few math students in my class of ~600 students at an academic magnet high school. I went on to earn a PhD in math. Most people wouldn't have guessed that I could have improved so much, and the shift that occurred was very surreal to me. It’s all the more striking in that the bulk of the shift occurred in a single year. I thought I’d share what strategies facilitated the change.
I became motivated to learn more
I took a course in chemistry my sophomore year, and loved it so much that I thought that I would pursue a career in the physical sciences. I knew that understanding math is essential for a career in the physical sciences, and so I became determined to learn it well. I immersed myself in math: At the start of my junior year I started learning calculus on my own. I didn’t have the “official” prerequisites for calculus, for example, I didn’t know trigonometry. But I didn’t need to learn trigonometry to get started: I just skipped over the parts of calculus books involving trigonometric functions. Because I was behind a semester, I didn’t have the “official” prerequisite for analytic geometry during my junior year, but I gained permission to sit in on a course (not for official academic credit) while taking trigonometry at the same time. I also took a course in honors physics that used a lot of algebra, and gave some hints of the relationship between physics and calculus.
I learned these subjects better simultaneously than I would have had I learned them sequentially. A lot of times students don’t spend enough time learning math per day to imprint the material in their long-term memories. They end up forgetting the techniques that they learn in short order, and have to relearn them repeatedly as a result. Learning them thoroughly the first time around w |
807e443e-3b77-40d4-a53d-76bd63276c8c | trentmkelly/LessWrong-43k | LessWrong | How to work through the ARENA program on your own
I've recently completed the in-person ARENA program, which is a 5-week bootcamp teaching the basics of safety research engineering (with the 5th week being a capstone project). Sometimes, I talk to people who want to work through the program independently and who ask for advice. Even though I didn't attempt this, I think doing the program in-person gives me some insight into how to get most out of the program when doing it independently, so here are my thoughts and tips:
On working speed
* Day 0.0 (prerequisites) takes the typical person more than one day to work through.
* Most other days are feasible to mostly finish within a day in the in-person program. In-person, participants spend around 6-7 hours per day on pair-programming. There are a few factors that will likely make you slower when you're on your own:
* If you don't find a working partner, then working deeply for 6-7 hours per day might be infeasible, depending on your disposition. In person this becomes feasible because you alternate with your pair-programming partner, which reduces the overall load on your attention.
* Often when you struggle in-person, your working partner knows how to move on. If both partners don't know what to do, you can ask a teaching assistant (TA). So you should expect to struggle more often, and for longer, if you're alone.
* Some days are substantially longer than what you could complete within one day even when doing the program in-person. Most of these days are in week 1 on transformer interpretability. Someone told me that one could probably spend a whole week on the day on SAEs alone.
* Week 1 on transformer interpretability contains 9 days of content (which are longer on average than typical days in other weeks), so even in the in-person program, participants only attempt a subset of these. All other weeks actually roughly fit within a week.
How should you approach each day?
* Often there is reading material at the start of a day, e.g. in the form of well-known p |
06b02f7b-eb46-4f70-96d2-d28640a9c399 | trentmkelly/LessWrong-43k | LessWrong | How do you establish a comfort zone in your studies?
Learning a new topic takes you outside your intellectual comfort zone. Extrapolating from the spacing effect, the practice of overlearning a chunk of new material is an inefficient way to build memory. I notice, however, that overlearning feels comforting. It seems to establish a comfort zone in the new material. It makes me feel more confident that I've learned something new, like the material is becoming a part of me. And when I review the earlier material, I can approach the new material that builds on it with greater ease.
If you approach study with the idea that it's all about efficiently building a memory for the material, you might neglect the motivational aspects of study. What do you do to establish a comfort zone in your studies? Do you find that sense of a "comfort zone" motivating? What else do you do to enhance your motivation and engagement with your studies, even if it's not strictly optimal in the short term for building new memories? |
f5cf47e7-3ce8-4f1a-adeb-cbd58ee33e5e | trentmkelly/LessWrong-43k | LessWrong | Why capitalism?
Note: I'm terrible at making up titles, and I think that the one I gave may give the wrong impression. If anyone has a suggestion on what I should change it to, it would be much appreciated.
As I've been reading articles on less wrong, it seems to me that there are hints of an underlying belief which states that not only is capitalism a good economic paradigm, it shall remain so. Now, I don't mean to say anything like 'Capitalism is Evil!' I think that capitalism can, and has, done a lot of good for humanity.
However, I don't think that capitalism will be the best economic paradigm going into the future. I used to view capitalism as an inherent part of the society we currently live in, with no real economic competition.
I recently changed my views as a result of a book someone recommended to me 'The zero marginal cost society' by Jeremy Rifkin. In it, the author states that we are in the midst of a third industrial revolution as a result of a new energy/production and communications matrix i.e. renewable energies, 3-D printing and the internet.
The author claims that these three things will eventually bring their respective sectors' marginal costs to zero. This is significant because of a 'contradiction at the heart of capitalism' (I'm not sure how to phrase this, so excuse me if I butcher it): competition is at the heart of capitalism, with companies constantly undercutting each other as a result of new technologies. These technological improvements allow a company to produce goods/services at a more attractive price whilst retaining a reasonable profit margin. As a result, we get better and better at producing things, and it lets us produce goods at ever decreasing costs. But what happens when the costs of producing something hit rock bottom? That is, they can go no lower.
3D printing presents a situation like this for a huge amount of industries, as all you really need to do is get some designs, plug in some feedstock and have a power source ready. The intern |
1a80dde3-90df-46ea-98ed-fdcbfff18a22 | trentmkelly/LessWrong-43k | LessWrong | Would it be effective to learn a language to improve cognition?
o1 has shown a strange behavior where it thinks in Mandarin, while processing English prompts, and translates the results back to English for the output. I realized that the same could be possible for humans to utilize, speeding up conscious thought. [1]
What makes Mandarin useful for this is that it:
1. Has compact tokens
2. Has compact grammar
3. Has abundant training material online
4. Can be used on computer systems easily (unicode support)
Language[2]: English, Mandarin, Toki Pona[3], Latin, Google Translate Intermediate[4]
Compact Tokens: ❌✅✅❌✅✅
Compact Grammar: ❌✅❌✅✅✅
Significant Training Material: ✅✅✅✅❌✅❌
Software Support: ✅✅✅❌✅❌
Human Learning Ability: ✅✅✅✅✅✅❌
As a Jew, I learned the Hebrew Alphabet (but not vocabulary) to study for my Bar Mitzvah, and as a student of US public education, I had the choice in secondary school to learn Spanish, French, or German, in addition to learning English natively. I chose German, and I am very unlikely to change or stop learning this, but I wonder if it would be useful to learn a new language specifically to think in. This would pose some different requirements than traditionally learning a language, as reading and writing would be much less important for this task. Knowing many different words, and correct grammar would be much more important.
The idea of Brain-Machine interfaces installed into one's brain, ending the need for languages altogether, would bring a major improvement to human society, but intentional control of thought[5] via language could bring the same effect. Aside from the normal cognitive benefits of being bilingual or multilingual, would learning some new language (or a conlang built for this purpose) specifically to think in be useful?
1. ^
https://techcrunch.com/2025/01/14/openais-ai-reasoning-model-thinks-in-chinese-sometimes-and-no-one-really-knows-why/
2. ^
These are not empirical or quantitative in any way, just the general ideas I sense from these. The other ideas expressed in this |
f884ff6a-5cac-4eae-8f51-165db3e8ba92 | trentmkelly/LessWrong-43k | LessWrong | Causal scrubbing: results on a paren balance checker
* Authors sorted alphabetically.
This is a more detailed look at our work applying causal scrubbing to an algorithmic model. The results are also summarized here.
Introduction
In earlier work (unpublished), we dissected a tiny transformer that classifies whether a string of parentheses is balanced or unbalanced.[1] We hypothesized the functions of various parts of the model and how they combine to solve the classification task. The result of this work was a qualitative explanation of how this model works, but one that made falsifiable predictions and thus qualified as an informal hypothesis. We summarize this explanation below.
We found that the high-level claims in this informal hypothesis held up well (88-93% loss recovered, see the Methodology). Some more detailed claims about how the model represents information did not hold up as well (72%), indicating there are still important pieces of the model’s behavior we have not explained. See the experiments summary section for an explanation of each hypothesis refinement.
Causal scrubbing provides a language for expressing explanations in a formal way. A formal hypothesis is an account of the information present at every part of the model, how this information is combined to produce the output, and (optionally) how this information is represented. In this work, we start by testing a simple explanation, then iterate on our hypothesis either by improving its accuracy (which features of the input are used in the model) or specificity (what parts of the model compute which features, and optionally how). This iterative process is guided by the informal hypothesis that we established in prior work.
For a given formal hypothesis, the causal scrubbing algorithm automatically determines the set of interventions to the model that would not disturb the computation specified by the hypothesis. We then apply a random selection of these interventions and compare the performance to that of the original model.
Using causal sc |
4389da69-3850-4fe9-b5e9-4bb2e90a4ae1 | trentmkelly/LessWrong-43k | LessWrong | Case Rates to Sequencing Reads
In thinking about how you might identify future pandemics by sequencing wastewater, you might have a goal of raising an alert before some fraction of people were currently infected. What you're actually able to observe, however, are sequencing reads, several steps removed from infection rates. Can we use covid data to estimate how the fraction of people currently infected with some pathogen might translate into the fraction of wastewater sequencing reads that match the pathogen?
RNA Viromics of Southern California Wastewater and Detection of SARS-CoV-2 Single-Nucleotide Variants (Rothman et al 2021) is the closest study I know in this area, though it wasn't exactly what they were trying to answer. They took many samples of municipal wastewater, extracted the RNA, filtered it to increase the fraction of the RNA corresponding to viruses, and then sequenced it.
(They also did several other things, including enriching some samples for respiratory viruses, but here I'm only looking at the unenriched data.)
They got 795M sequencing reads, 337 of which they identified as covid. This means the fraction of all reads that were covid (the "proportional abundance") was 4e-7. There's a typo in the paper where they say this is "0.0004%", but I wrote to the author to ask about it and they confirmed it was a missing zero.
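(As a quick check of the arithmetic: 337 / 795,000,000 ≈ 4.2e-7, which written as a percentage is about 0.000042%, consistent with the missing-zero reading.)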
How many people were infected during this period? The first tricky thing in answering this question is that the sequencing quantity wasn't uniform over this period:
Rothman 2021 SF4_sample_metadata: xlsx
And neither were covid cases:
LA County Covid Dashboard
The paper accounted for the variability in confirmed cases by looking at the relationship between the number of cases in the county served by each water treatment plant and the amount of covid they measured in their samples using qPCR. Because qPCR gives you much more precise estimates for a given cost, they were able to quantify the covid composition of 85 of their samples across seven plants:
( |
14a72491-6109-46eb-9b3d-14a90e978216 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Different perspectives on concept extrapolation
At the recent EAGx Oxford meetup, I ended up talking with a lot of people (18 people, back to back, on Sunday - for some reason, that day is a bit of a blur). Naturally, many of the conversations turned to [value extrapolation/concept extrapolation](https://www.lesswrong.com/posts/i8sHdLyGQeBTGwTqq/value-extrapolation-concept-extrapolation-model-splintering), the main current focus of our [Aligned AI startup](https://buildaligned.ai/). I explained the idea I explained multiple times and in multiple different ways. Different presentations were useful for people from different backgrounds.
So I've collected the different presentations in this post. Hopefully this will allow people to find the explanation that provides the greatest clarity for them. I think many will also find it interesting to read some of the other presentations: from our perspective, these are just different facets of the same phenomenon[[1]](#fn-6ncbkiyPBNPLbJZKz-1).
For those worried about AI existential risk
===========================================
A superintelligence trained on videos of happy humans may well tile the universe with videos of happy humans - that is a standard alignment failure mode. But "make humans happy" is also a [reward function compatible with the data](https://www.lesswrong.com/posts/thZdioHTZALRPKmiH/value-extrapolation-partially-resolves-symbol-grounding).

So let D0
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Let D0 be the training data of videos of happy humans, R1 the correct "make humans happy" reward function, and R2 the degenerate reward function "make videos of happy humans"[[2]](#fn-6ncbkiyPBNPLbJZKz-2).
We'd want the AI to deduce R1 from D0. But even just generating R1 as a candidate is a good success. The AI could then get feedback as to whether R1 or R2 is correct, or maximise a conservative mix of R1 and R2 (e.g. R=log(R1)+log(R2)). Maximising that conservative mix will result in a lot of videos of happy humans - but also a lot of happy humans.
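As a toy illustration of that conservative mix (my own sketch, not from the post; `r1` and `r2` are scalar stand-ins for the two candidate rewards): since log(R1) + log(R2) = log(R1·R2), a maximiser cannot let either candidate reward collapse toward zero.

```python
import math

def conservative_mix(r1: float, r2: float) -> float:
    # Equivalent to log(r1 * r2): driving either reward toward zero
    # tanks the mix, so both must be kept reasonably high.
    return math.log(r1) + math.log(r2)

print(conservative_mix(0.01, 10_000))  # ~4.6: many videos, unhappy humans
print(conservative_mix(100, 100))      # ~9.2: doing well on both wins
```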
For philosophers
================
Can you define what a human being is? Could you make a definition that works in all circumstances and in every universe, no matter how bizarre or alien the world becomes?

A full definition has eluded philosophers ever since humans were categorised as "[featherless bipeds with broad flat nails](https://www.lesswrong.com/posts/gWxMZisqE2j2kHCd2/ai-safety-as-featherless-bipeds-with-broad-flat-nails)".
Concept extrapolation has another way of generating this definition. We would point at all living humans in the world and say "these are humans[[3]](#fn-6ncbkiyPBNPLbJZKz-3)."
Then we would instruct the AI: "please extrapolate the concept of 'human' from this data". As long as the AI is capable of doing that extrapolation better than we could ourselves, this would give us an extrapolation of the concept "human" to new circumstances without needing to write out a full definition.
For ML engineers into image classification
==========================================
Paper [Diversify and Disambiguate](https://arxiv.org/pdf/2202.03418.pdf) discusses a cow-grass-camel-sand example which is quite similar to the husky-wolf example of [this post](https://www.lesswrong.com/s/xujLGRKFLKsPCTimd/p/oCWk8QpjgyqbFHKtK#Multiple_image_classifications).
Suppose that we have two labelled sets, S0 consisting of cows on grass, and S1 consisting of camels on sand.

We'd like to train two classifiers that distinguish S0 from S1, but use different features to do so. Ideally, the first classifier would end up distinguishing cows from camels, while the second distinguishes grass from sand. Of course, we'd want them to do so independently, without needing humans labelling cows, grass, camels, or sand.
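One way this could look in code (a rough sketch in the spirit of the paper's "diversify" objective, not its exact loss; all names and the agreement penalty are my own choices): fit both heads to the labels, while penalising them for making correlated predictions on unlabelled data where the features come apart.

```python
import torch
import torch.nn.functional as F

def two_head_loss(logits_a, logits_b, labels, logits_a_unl, logits_b_unl):
    # Both heads must fit the labelled data (S0 vs S1).
    fit = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    # On unlabelled data, penalise agreement: if both heads latched onto
    # the same feature (e.g. the background), their predictions correlate.
    p_a = torch.softmax(logits_a_unl, dim=-1)
    p_b = torch.softmax(logits_b_unl, dim=-1)
    agreement = (p_a * p_b).sum(dim=-1).mean()
    return fit + agreement
```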
For ML engineers focusing on current practical problems
=======================================================
An AI classifier [was trained on x-ray images](https://arxiv.org/pdf/1909.12475.pdf) to detect pneumothorax (collapsed lung). It was quite successful - until further analysis revealed that it was acting as a chest drain detector. The chest drain is a *treatment* for pneumothorax, making that classification useless.
We would want the classifier to generate "collapsed lung detector" and "chest drain detector" as separate classifications, and then ask its programmers which one it should be classifying on.
For RL engineers
================
[CoinRun](https://openai.com/blog/quantifying-generalization-in-reinforcement-learning/) is a procedurally generated set of environments, a simplified Mario-style platform game. The reward is given by reaching the coin on the right:

Since the coin is always at the right of the level, there are two equally valid simple explanations of the reward: the agent must reach the coin, or the agent must reach the right side of the level.
When agents trained on CoinRun are tested on environments that move the coin to another location, [they tend to ignore the coin and go straight to the right side of the level](https://arxiv.org/pdf/2105.14111.pdf). Note that the agent is following a policy, rather than generating a reward; still, the policy it follows is one that implicitly follows the "reach the right" reward rather than the "reach the coin" one.
We need an alternative architecture that generates both of these rewards[[4]](#fn-6ncbkiyPBNPLbJZKz-4) and is then capable of either choosing between them or becoming conservative between them (so that it would, eg, go to the right while picking up the coin along the way). This needs to be done in a generalisable way.
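As a toy version of the reward side of this (entirely my own sketch; the `state` fields are hypothetical):

```python
def r_coin(state) -> float:
    return 1.0 if state["has_coin"] else 0.0

def r_right(state) -> float:
    return 1.0 if state["at_right_edge"] else 0.0

def conservative(state) -> float:
    # Rewarded only when *both* candidate interpretations are satisfied:
    # the agent is at the right edge *and* has collected the coin.
    return min(r_coin(state), r_right(state))
```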
For investors
=============
A major retail chain wants to train their CCTV cameras to automatically detect shoplifters. They train it on examples they have in their databases.
The problem is that those examples are correlated with other variables. They may end up training a racial classifier, or they may end up training an algorithm that identifies certain styles of clothes.
That is disastrous, firstly for the potential PR problems, but secondly because the classifier *won't successfully identify shoplifters*.
Ideally, the AI would implicitly generate "shoplifters", "racial groups", and "clothes style" as separate classifiers, and would then enquire, using [active learning](https://en.wikipedia.org/wiki/Active_learning_(machine_learning)), as to what its purpose actually is. This allows the AI to classify properly for the purposes that it was designed for - and only those purposes.
For those working in AI alignment
=================================
Sometimes someone develops a way to keep AIs safe, by adding some constraints. For example, [attainable utility preservation](https://arxiv.org/abs/1902.09725) developed a formula to try and encode the concept of "power" for an AI, with a penalty term for having too much power:
PENALTY(s,a) = Σ_{R′ ∈ R} |Q_{R′}(s,a) − Q_{R′}(s,∅)|
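Read as code, the penalty sums, over a set of auxiliary reward functions, how much the action changes the attainable Q-value relative to doing nothing (a direct transcription of the formula; `q_functions` and `noop` are my stand-ins for R and ∅):

```python
def penalty(s, a, q_functions, noop=None):
    # Sum over R' of |Q_R'(s, a) - Q_R'(s, no-op)|
    return sum(abs(q(s, a) - q(s, noop)) for q in q_functions)
```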
With some difficulty, I constructed a situation where that formula failed to constrain the AI, via a [subagent](https://www.lesswrong.com/posts/sYjCeZTwA84pHkhBJ/attainable-utility-has-a-subagent-problem).
Essentially, the formal definition and the intuitive concept of power overlap in typical environments. But in extreme situations, they come apart. What is needed is an AI that can extrapolate the concept of power rather than the formal definition.
Doing this for other concepts would allow a lot of alignment methods to succeed, such as [avoiding side-effects](https://arxiv.org/abs/2010.07877), [low-impact](https://arxiv.org/abs/1705.10720), [corrigibility](https://intelligence.org/files/Corrigibility.pdf), and others.
For those using GPT-3
=====================
As [detailed here](https://www.lesswrong.com/posts/qyyo2efuwWkyR62fB/gpt-3-and-concept-extrapolation), we typed "ehT niar ni niapS syats ylniam ni eht" into GPT-3. This is "The rain in Spain stays mainly in the", with the words spelt backwards. The correct completion is "nialp", the reverse of "plain".
GPT-3 correctly "noticed" that the words were spelt backwards, but failed to extend its goal and complete the sentence in a human-coherent way.
For those focused on how humans extrapolate their own values
============================================================
A well-behaved child, brought up in a stable society, will learn, typically in early adolescence, that there is a distinction between "lawful" and "good". The concept of "well-behaved" has splintered into two, and now the child has to sort out how they should behave[[5]](#fn-6ncbkiyPBNPLbJZKz-5).
Recall also people's first reactions to hearing the trolley problem, especially the ["large man" variant](https://en.wikipedia.org/wiki/Trolley_problem#The_large_man). They often want to deny the premises, or find a third option. The challenge is that "behave well and don't murder" is being pushed away from "do good in the world", while they are typically bound together.
In the future, we humans will continue to encounter novel situations where our past values are not clear guides to what to do. My favourite example is what to do if someone [genetically engineers](https://www.lesswrong.com/posts/PX8BB7Rqw7HedrSJd/by-default-avoid-ambiguous-distant-situations) a humanoid slave race that strongly wants to be slaves, but doesn't enjoy being slaves. We can develop moral values to deal with the complexity of situations like this, but it requires some work: we don't know what our values are; we have to extrapolate them.
And, ideally, an AI would extrapolate at least as well as we would.
---
1. Note that concept extrapolation has two stages: generating the possible extrapolations, and then choosing among them - diversify and disambiguate, in the terminology of [this paper](https://arxiv.org/pdf/2202.03418.pdf). We'll typically focus on the first part, the "diversify" part, mainly because that has to be done first, but also because there might not be any unambiguous choices at the disambiguate stage - what's the right extrapolation of "liberty", for instance? [↩︎](#fnref-6ncbkiyPBNPLbJZKz-1)
2. There are going to be many more reward functions in practice. But the simplest ones will fit into two rough categories, those that are defined over the video feed, and those defined by the humans in the world that were the inputs to the video feed. [↩︎](#fnref-6ncbkiyPBNPLbJZKz-2)
3. We could also point at things like brain-dead people and say "these have many human features, but are not full humans". Or point at some apes and ants and say "these are non-human, but the apes are more human-like than the ants". The more the dataset captures our complex intuitions about humanness, the better. [↩︎](#fnref-6ncbkiyPBNPLbJZKz-3)
4. Conceptually, this is much easier to do if we think "generate both rewards" -> "choose conservative mix" -> "choose policy that maximises conservative mix", but it might be the case that the policy is constructed directly via some process. Learning policies seems easier than learning rewards, but mixing rewards seems easier than mixing policies, so I'm unsure what will be the best algorithm here. [↩︎](#fnref-6ncbkiyPBNPLbJZKz-4)
5. It doesn't help that "well-behaved" was probably called "good" when the child was younger. So the concept has splintered, but the name has not. [↩︎](#fnref-6ncbkiyPBNPLbJZKz-5) |
386a674e-6d17-4ac1-8789-f3951a1884fd | trentmkelly/LessWrong-43k | LessWrong | Past Tense Features
TLDR
I find past tense features in pythia-70m using a templated dataset. My high-level steps are:
1. Creating a templated dataset that indicates past tense through a past progressive clause
2. Finding subsets of features that recover the original model performance with attribution patching
3. Analyzing the feature effects by token position
Access the code here: past_features.ipynb
Dataset
First, I define a task that elicits the model’s understanding of past tense. Given a templated prefix that indicates the tense, the model has to predict a verb in the correct form. Defining a simple template that uniquely determines the tense was tricky. I eventually chose to indicate tenses using past progressive. Here is a “clean” prefix in past progressive and its “patch” counterpart in present progressive:
Clean: While the teacher was talking, the student____
Patch: While the teacher is talking, the student____
The helping verb highlighted in green uniquely determines whether the verb at the underlined position has to take a past or present form. The mean logit diff between a set of verbs in the past tense and their conjugation in present tense serves as a performance metric (measured at the final position). The performance L of the full model M on this dataset is
L(M) = sum( logits_correct_verbs ) − sum( logits_incorrect_verbs ).
The dataset contains 10 samples of clean (past) and patch (present) prefixes. For each sample I use the same set of 123 verbs to evaluate performance L. (The exact number of 123 verbs results from filtering for verbs that tokenize into a single token in both tenses.)
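A sketch of that metric in code (my own reconstruction; `past_ids` and `present_ids` are the hypothetical single-token ids of the two verb forms):

```python
import torch

def performance(logits: torch.Tensor, past_ids, present_ids) -> torch.Tensor:
    # logits: [n_samples, vocab], taken at the final token position.
    # Sum the logits of the correct (past) forms minus those of the
    # incorrect (present) forms, as in L(M) above.
    return logits[:, past_ids].sum() - logits[:, present_ids].sum()
```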
Feature effects
I investigate the SAEs trained by Sam Marks on the outputs of attention and MLP layers, and resid_post activations. In line with Sparse Feature Circuits, I fold the pretrained SAEs into the model’s computational graph and add the reconstruction error back into the forward pass. This allows me to cache feature attributions without accumulating the reco |
279d9797-aec8-4eb0-bade-f97f96acc1fb | trentmkelly/LessWrong-43k | LessWrong | Pleasure and Pain are Long-Tailed
This post is derived from Logarithmic Scales of Pleasure and Pain by the Qualia Research Institute. Thank you Romeo Stevens for introducing me to the cool work you're doing over there.
----------------------------------------
> People have a certain baseline amount of happiness. Fix their problems, and they’ll be happy for a while, then go back to baseline. The only solution is to hack consciousness directly, to figure out what exactly happiness is – unpack what we’re looking for when we describe some mental states as having higher positive valence than others – and then add that on to every other mental state directly.
>
> ―Fear and Loathing at Effective Altruism Global 2017 by Scott Alexander
If we want to cure suffering then we need a way to measure suffering.
The human brain does not, by default, reason in absolutes. We reason via ratios. The difference between 1 and 2 feels about the same as the difference between 100 and 200 even though the difference between 100 and 200 is 100× as large as the difference between 1 and 2. This is known as Weber's Law.
To put this in mathematical terms, we map everything onto a logarithmic scale and then compare distances on the log plot. On a log plot, the distance between 1 and 2 equals the distance between 100 and 200.
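Concretely: log 2 − log 1 = log 200 − log 100 = log 2, so both pairs sit exactly the same distance apart on a log axis.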
Weber's Law doesn't just apply to measurements of our external environment. It applies to subjective experience itself. When you combine Weber's Law with subjective reports of conscious experience we find that valence (the intensity of an experience) is long-tailed. Consciousness itself may be long-tailed too.
The Valence Scale
Are people intersubjectively consistent on the relative pleasure and pain caused by different things? Yes we are.
Pleasure
> “Ok,” you might say, “you’re just telling me that pleasure and pain can be orders of magnitude stronger than I can even conceive of. What do you base this on?” The most straightforward way to be convinced of this is to literally experience such states |
80325c94-e567-42a2-a1fc-1292e1f11b9b | StampyAI/alignment-research-dataset/arbital | Arbital | Rice's Theorem
Rice's Theorem is a rather surprising and very strong restriction on what we can determine about the [function](https://arbital.com/p/-3jy) computed by an arbitrary [Turing machine](https://arbital.com/p/5pd).
It tells us that for *every* nontrivial property of computable functions %%note:By "nontrivial", we mean there is at least one function with that property and at least one without that property.%%, there is no general procedure which takes as its input a Turing machine, and computes whether or not the function computed by that machine has that property.
Therefore, if we want to discover anything about the output of a general computer program, *in general* the best we can do is simply run the program.
As a corollary, there can be no *fully general* procedure that checks whether a piece of computer code is free of bugs or not.
# Formal statement
We will use the notation $[n]$ for the $n$th [Turing machine](https://arbital.com/p/5pd) under some fixed [numbering system](https://arbital.com/p/description_number).
Each such machine induces a [partial function](https://arbital.com/p/-5p2), which we will also write as $[n]$ where this is unambiguous due to context; then it makes sense to write $[n](m)$ for the value that machine $[n]$ outputs when it is run on input $m$.
Let $A$ be a non-empty, proper %%note:That is, it is not the entire set.%% subset of $\{ \mathrm{Graph}(n) : n \in \mathbb{N} \}$, where $\mathrm{Graph}(n)$ is the [graph](https://arbital.com/p/graph_of_a_function) of the [partial function](https://arbital.com/p/-5p2) computed by $[n]$, the $n$th Turing machine.
Then there is no Turing machine $[r]$ such that:
- $[r](i)$ is $1$ if $\mathrm{Graph}(i) \in A$
- $[r](i)$ is $0$ if $\mathrm{Graph}(i) \not \in A$.
# Caveats
- While this result tells us, for example, that "no procedure will ever be able to determine whether an arbitrary program is bug-free", in practice it may be possible to determine whether *a large class* of programs is bug-free, while accepting the fact that our procedure might not be able to solve the fully general case.
- Additionally, this result only tells us about the *graphs* of the functions in question.
We can determine certain properties which are specific to the Turing machine: for example, we can tell whether the program will halt in five steps, by simply running it for five steps.
This does not contradict Rice, because Rice tells us only about the ultimate answer the machines spit out, and nothing about the procedures they use to get to the answer; "the machine halts in five steps" is not a property of the graph of the function, but is a property of the Turing machine itself.
- Rice's theorem is only a restriction on whether we can *decide* the status of a function: that is, whether we can decide *whether or not* the function computed by some machine has a certain property. Rice tells us nothing if we're only looking for a procedure that "must find out in finite time whether a function *does* have a property, but is allowed to never give an answer if the function *doesn't* have the property".
For example, we can determine whether a partial function is defined anywhere (that is, it is not the empty function: the one which never outputs anything, whatever its input) by just attempting to evaluate the function in parallel at $0$, at $1$, at $2$, and so on.
If the partial function is defined anywhere, then eventually one of the parallel threads will discover this fact; but if it is defined nowhere, then the procedure might just spin on and on forever without giving any output.
However, Rice's theorem does guarantee that there is no procedure which will tell us in finite time *whether or not* its input is a function which is defined somewhere; even though we have just specified a procedure which will tell us in finite time *if* its input is defined somewhere.
# Proof outline
Several proofs exist: for example, [one by reduction](https://arbital.com/p/5n6) to the [halting problem](https://arbital.com/p/halting_problem), and one [standalone proof](https://arbital.com/p/5t9).
Here, we sketch the standalone proof in broad strokes, because it goes via a neat lemma.
The intermediate lemma we prove is:
> Let $h: \mathbb{N} \to \mathbb{N}$ be [total](https://arbital.com/p/total_function) computable: that is, it halts on every input.
> Then there is $n \in \mathbb{N}$ such that $\mathrm{Graph}(n) = \mathrm{Graph}(h(n))$. %%note:And, moreover, we can actually *find* such an $n$.%%
That is, the "underlying function" of $n$ - the partial function computed by $[n]$ - has the same output, at every point, as the function computed by $[h(n)]$.
If we view $h$ as a way of manipulating a program (as specified by its [description number](https://arbital.com/p/description_number)), then this fixed-point theorem states that we can find a program whose underlying function is not changed at all by $h$.
This lemma might be somewhat surprising: it "ought" to be possible to find a change one could make to arbitrary computer code, with the guarantee that the altered code must do something different to the original.
The fixed-point theorem tells us that this is not the case.
The proof of the lemma is very difficult to understand fully, but rather easy to state, because there are several useful shorthands which hide much of the complexity of what is really going on; full details, along with a worked example, can be found in [the accompanying lens](https://arbital.com/p/5t9).
Once we have the intermediate lemma, Rice's theorem itself follows quickly.
Indeed, if the operation of "determine whether a machine computes a function whose graph is in $A$ or not" is computable, then we can do the following procedure:
- Take some computer code as input.
- Determine whether the code specifies a function whose graph is in $A$ or not.
- If it is in $A$, output code for a specific (probably unrelated) function whose graph is *not* in $A$.
- Otherwise, output code for a specific (probably unrelated) function whose graph is in $A$.
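As a minimal sketch in code (everything here is hypothetical: `decides_A` is exactly the decider that Rice's theorem rules out, and `CODE_IN_A` / `CODE_NOT_IN_A` are fixed example programs):

```python
def swap(code):
    # The procedure above: send programs computing functions whose graph
    # is in A to a fixed program outside A, and vice versa.
    if decides_A(code):        # hypothetical decider for membership in A
        return CODE_NOT_IN_A
    else:
        return CODE_IN_A
```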
The fixed-point theorem tells us that some program isn't changed by the above procedure; but the procedure is guaranteed to interchange programs-from-$A$ with programs-not-from-$A$, so the procedure can't have any fixed points after all. |
c1292f28-021d-4287-9cd9-b7d067a430f5 | trentmkelly/LessWrong-43k | LessWrong | Musings on LLM Scale (Jul 2024)
In a recent interview, Dario Amodei claimed that cost of training is (starting with models already available)
> Right now, $100 million. There are models in training today that are more like a $1 billion. I think if we go to $10 or a $100 billion, and I think that will happen in 2025-2026, maybe 2027, ...
(Epistemic status: Fermi estimates, 8 is approximately 10 which is greater than 9.)
Assuming $40,000 per H100 and associated infrastructure in a datacenter, $1 billion gives 25K H100s, which matches the scale of, for example, Meta's new training clusters, and requires about 40MW of power. At $2 per hour, the training time cost of 25K H100s reaches $100 million in 80 days, which seems reasonable, if on the short side, for a production training run. The cost of time matches $1 billion at 2.3 years. An H100 (SXM) is rated for 2e15 FLOP/s in BF16 (my impression is this is usually stable out of the box). This becomes 4e15 FLOP/s in FP8, which seems practical if done carefully, with no degradation in pre-training loss compared to FP32. The $100 million run then translates to 9e25 FLOPs at 30% utilization in BF16, or 2e26 FLOPs in FP8. (For some reason this SemiAnalysis estimate is 2x lower, peak 2e20 FLOP/s for 100,000 H100s at FP8; possibly the sparsity footnote in the H100 specification for the 4000 teraFLOP/s figure is the culprit.)
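The arithmetic above, spelled out (all inputs are the post's assumptions, not measurements; small differences from the quoted figures are rounding):

```python
cost_per_gpu = 40_000                     # $ per H100 plus infrastructure
n_gpus = 1_000_000_000 // cost_per_gpu    # 25,000 GPUs per $1 billion
dollars_per_day = n_gpus * 2.0 * 24       # at $2 per GPU-hour
days_to_100m = 100_000_000 / dollars_per_day          # ~83 days

bf16_flops = 2e15                         # per H100, BF16
run_flops = n_gpus * bf16_flops * 0.30 * 80 * 86_400  # ~1.0e26
print(n_gpus, round(days_to_100m), f"{run_flops:.1e}")
```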
This is maybe 10x original GPT-4, estimated at 2e25 FLOPs. The leading models (Claude 3.5 Sonnet, Gemini 1.5 Pro, GPT-4 Omni) cost $15-20 per million output tokens, compared to $75-120 for the once-frontier models Claude 3 Opus, Gemini 1 Ultra, and original GPT-4. Given a Chinchilla optimal model, if we reduce its active parameters 3x and increase training compute 3x, we get approximately the same performance, but it's now at least 3x cheaper for inference. This increases data 10x, which, if everything else fails, can be obtained by repeating the old data, giving 30x overtraining in compute compared to what is Chinchilla optimal for the smaller model. Llama-3-70b
8e43ef9c-3e41-4821-b49a-34476c1ce44d | StampyAI/alignment-research-dataset/arxiv | Arxiv | A generic framework for privacy preserving deep learning
1 Introduction
---------------
Secure Multiparty Computation (SMPC) is becoming increasingly popular as a way to perform operations in an untrusted environment without disclosing data. In the case of machine learning models, SMPC would protect the model weights while allowing multiple worker nodes to take part in the training phase with their own datasets, a process known as Federated Learning (FL). However, it has been shown that securely trained models are still vulnerable to reverse-engineering attacks that can extract sensitive information about the datasets directly from the model. Another set of methods, labelled as Differentially Private (DP) methods, address this and can efficiently protect the data.
We provide a transparent framework for privacy preserving deep learning to every PyTorch user, enabling the use of FL, MPC, and DP from an intuitive interface. We show the ability of the framework to support various implementations of MPC and DP solutions and report the results obtained when instantiating the SPDZ (Damgård et al., [2012](#bib.bib2)) and moment accountant (Abadi et al., [2016](#bib.bib1)) methods, respectively, for MPC and DP in a federated learning context.
Our main contributions are the following:
- We first build a standardized protocol to communicate between workers which made federated learning possible.
- Then, we develop a chain abstraction model on tensors to efficiently override operations (or encode new ones) such as sending/sharing a tensor between workers.
- Last, we provide the elements to implement recently proposed differential privacy and multiparty computation protocols using this new framework.
By doing so, we intend to help popularize privacy preserving techniques in machine learning by making them available via the common tools that researchers and data scientists work with on a daily basis. Our framework is designed in an extensible way such that new FL, MPC, or DP methods can be plugged in by external contributors willing to make their work available to the wider deep learning community.
2 A standardized framework to abstract operations on Tensors
-------------------------------------------------------------
###
2.1 The chain structure
Performing transformations or sending tensors to other workers can be represented as a chain of operations, and each operation is embodied by a special class. To achieve this, we created an abstraction called the SyftTensor. SyftTensors are meant to represent a state or transformation of the data and can be chained together. The chain structure always has at its head the PyTorch tensor, and the transformations or states embodied by the SyftTensors are accessed downward using the child attribute and upward using the parent attribute.
Figure [1](#S2.F3 "Figure 3 ‣ 2.1 The chain structure ‣ 2 A standardized framework to abstract operations on Tensors ‣ A generic framework for privacy preserving deep learning") presents the general structure of a tensor chain, where SyftTensors are replaced with instances of some subclasses which all have a specific role, like the LocalTensor class which will be described next. All operations are first applied to the Torch tensor which makes it possible to have the native Torch interface, and they are then transmitted through the chain by being forwarded to the child attribute.
There are two important subclasses of SyftTensor. First, the LocalTensor which is created automatically when the Torch tensor is instantiated. Its role is to perform on the Torch tensor the native operation corresponding to the overloaded operation. For instance, if the command is add, then the LocalTensor will perform the native torch command native\_add on the head tensor. The chain has two nodes and it loops, so that the LocalTensor's child refers to the head node tensor which contains the data, without needing to re-create a child tensor object (which would reduce performance).
Second, the PointerTensor which is created when a tensor is sent to a remote worker. Sending and getting back a tensor is as simple as calling the methods send(worker) and get() on the tensor. When this happens, the whole chain is sent to the worker and replaced by a two-node chain: the tensor, now empty, and the PointerTensor which specifies who owns the data and the remote storage location. This time, the pointer has no child. Figure [2](#S2.F3 "Figure 3 ‣ 2.1 The chain structure ‣ 2 A standardized framework to abstract operations on Tensors ‣ A generic framework for privacy preserving deep learning") illustrates how the chains are modified when being sent to a remote worker and how LocalTensor and PointerTensor are used in those chains.
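In code, the interface described above looks roughly like this (a sketch using API names from the PySyft of that era; exact signatures varied across versions):

```python
import torch
import syft as sy

hook = sy.TorchHook(torch)               # overloads torch tensor operations
bob = sy.VirtualWorker(hook, id="bob")   # an in-process worker

x = torch.tensor([1, 2, 3])
x_ptr = x.send(bob)    # local chain is replaced by a PointerTensor
y_ptr = x_ptr + x_ptr  # the command is forwarded to bob and runs there
y = y_ptr.get()        # the result chain is moved back to the local worker
```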

Figure 1: General structure of a tensor chain

Figure 2: Impact of sending a tensor on the local and remote chains

Figure 3: Chain structure of a SPDZ tensor
###
2.2 From virtual to real context execution of federated learning
In order to simplify debugging complex chains of operations, this framework develops the notion of Virtual Workers. Virtual Workers all live on the same machine and do not communicate over the network. They simply replicate the chain of commands and expose the very same interface as the actual workers to communicate with each other.
Network workers in the context of Federated Learning have two implementations in the framework as of now. One builds upon plain network sockets, while the other supports Web Sockets. Web Socket workers allow multiple workers to be instantiated from within a browser, each within its own tab. This gives us another level of granularity when building federated learning applications before actually addressing remote workers not on the same machine. Web Socket workers are also a very good fit for the data science ecosystem revolving around browser-based notebooks.
3 Towards a Secure MPC framework
---------------------------------
###
3.1 Building an MPCTensor
The elements introduced in Section [2](#S2 "2 A standardized framework to abstract operations on Tensors ‣ A generic framework for privacy preserving deep learning") form the building blocks necessary to create our MPCTensor. Splitting and sending the shares can be done using a list of PointerTensors. The MPC toolbox proposed in our framework implements the SPDZ protocol from Damgård et al. ([2013](#bib.bib3), [2012](#bib.bib2)).
The MPC toolbox includes basic operations such as addition and multiplication, but also preprocessing tools to generate, for instance, the triples used for multiplication, and operations more specific to neural networks, including matrix multiplication. Some adjustments are made to the traditional elements of a convolutional network due to the specificities of MPC: as described in Damgård et al. ([2012](#bib.bib2)), we use average pooling instead of max pooling and a higher-degree approximation of the sigmoid instead of relu as the activation function.
Since the SPDZ protocol assumes that the data is given as integers, we added into the chain a node called the FixedPrecisionTensor that converts float numbers into fixed precision numbers. This node encodes the value into an integer and stores the position of the radix point. The complete structure of a tensor implementing SPDZ is summarized in figure [3](#S2.F3 "Figure 3 ‣ 2.1 The chain structure ‣ 2 A standardized framework to abstract operations on Tensors ‣ A generic framework for privacy preserving deep learning").
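A minimal sketch of the encoding-and-sharing step (my own simplification; in PySyft this logic is wrapped by the FixedPrecisionTensor and the SPDZ tensor):

```python
import random

Q = 2**62           # ring size for additive secret sharing
PRECISION = 10**4   # fixed-point scale: 4 decimal digits

def encode(x: float) -> int:
    return int(round(x * PRECISION)) % Q

def share(secret: int, n_workers: int = 2):
    shares = [random.randrange(Q) for _ in range(n_workers - 1)]
    shares.append((secret - sum(shares)) % Q)  # shares sum to the secret
    return shares

def reconstruct(shares) -> float:
    total = sum(shares) % Q
    if total > Q // 2:   # map back from the ring to signed values
        total -= Q
    return total / PRECISION

print(reconstruct(share(encode(3.14))))  # 3.14; no single share leaks it
```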
Unlike in the MPC protocol proposed by Damgård et al. ([2012](#bib.bib2)), players are not equal in our framework, since one is the owner of the model (called the local worker). He acts as a leader by controlling the training procedure on all the other players (the remote workers). To mitigate this centralization bias when dealing with data, the local worker can create remote shared tensors on data he doesn’t own and can’t see.
Indeed, we expect remote workers to hold some data of their own in a general setting, for instance when hospitals are contributing medical images to train a model. Multiple players are then interested in seeing the execution performing correctly, which is particularly crucial during the inference phase where many factors could lead to corrupted predictions Ghodsi et al. ([2017](#bib.bib4)).
So far, the current implementation does not come with a mechanism to ensure that every player behaves honestly. An interesting improvement would be to implement MAC authentication of the secret shared value, as suggested by Damgård et al. ([2012](#bib.bib2)).
###
3.2 Applying Differential Privacy
We implemented differential privacy based on the work of Abadi et al. ([2016](#bib.bib1)), which provides a training method for deep neural networks within a modest ("single-digit") privacy budget. To achieve this, the paper provides a new estimate of the privacy loss used to carefully adjust the noise needed, along with a new algorithm improving the efficiency of the private training.
In particular, we implemented differentially private Stochastic Gradient Descent (SGD): instead of iterating in the same way over the dataset and over epochs, the training is made of phases, each consisting of sampling a "lot" of L items from the N items of the dataset and using it to update the model. We directly reused the privacy accountant provided by Abadi et al. ([2016](#bib.bib1)), but implemented our own sanitizer, which clips gradients and adds Gaussian noise.
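A sketch of such a sanitizer (a simplification of the mechanism in Abadi et al.; the parameter names are mine):

```python
import torch

def sanitize(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    # Clip each example's gradient to bound any one example's influence...
    clipped = []
    for g in per_example_grads:
        scale = (clip_norm / (g.norm() + 1e-12)).clamp(max=1.0)
        clipped.append(g * scale)
    total = torch.stack(clipped).sum(dim=0)
    # ...then add Gaussian noise calibrated to the clipping bound.
    noise = torch.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```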
Our framework also provides some refinements guided by the federated learning context. First, when sampling a lot, we randomly choose a worker and sample among its own data. Second, gradients are sanitized on the remote worker in order to efficiently ensure data-privacy. This way, the local worker will get secured gradients for updating the model which cannot disclose information about the dataset.
The approach described in Papernot et al. ([2016](#bib.bib5)) proposes another approach to ensure differential privacy by training the final model (called the student model) using the noisy and aggregated votes of pre-trained and unpublished models (the teachers). It is currently being implemented and will be integrated as another DP Tensor in our framework.
4 Results and discussion
-------------------------
Table [1](#S4.T2 "Table 2 ‣ 4 Results and discussion ‣ A generic framework for privacy preserving deep learning") reports the execution time required to train a neural network on the canonical Boston Housing dataset, using three variants of our framework. A performance analysis shows a reasonably small overhead for using Web Socket workers instead of Virtual Workers, thus validating their purpose as a notebook-developer tool. This is due to the low network latency when communicating between different local tabs. We are, however, 46 times slower than using regular PyTorch. We observe the same performance overhead in our second experiment, which trains a classifier to detect diabetes using the Pima Indian Diabetes dataset, a small dataset containing 768 rows and 8 columns (Vincent Sigillito, [1990](#bib.bib6)).
Table [2](#S4.T2 "Table 2 ‣ 4 Results and discussion ‣ A generic framework for privacy preserving deep learning") shows how increasing ϵ improves the model at the expense of data privacy. The DP model achieves a 25-30 MSE compared to 20-24 in the baseline model, but the privacy guarantee remains strong as we achieve (0.5, 10⁻⁵)-differential privacy. These results are consistent with those reported in the literature for computer vision applications (Abadi et al., [2016](#bib.bib1)).
| Training mode | Training time (s) |
| --- | --- |
| PySyft (Virtual) | 10.1 |
| PySyft (Socket) | 14.6 |
| PySyft (Virtual) + DP\* | 15.3 |
| Pure PyTorch | 0.22 |
Table 1: Training time using different training settings on the Boston Housing dataset (10 epochs) \*Equivalent time for the same number of batches processed for DP
| (ϵ, δ)-privacy | Boston MSE | Pima Acc. |
| --- | --- | --- |
| (0.5, 10⁻⁵) | 29.4 | 60.6% |
| (1, 10⁻⁵) | 29.2 | 64.2% |
| (2, 10⁻⁵) | 28.5 | 66.1% |
| (4, 10⁻⁵) | 28.6 | 67.1% |
| no privacy | 23.7 | 70.3% |
Table 2: Accuracy of differentially private federated learning on the Boston Housing and Pima Diabetes datasets
For the Boston Housing dataset, the baseline model spends approximately 19.8ms per batch while the differentially private model spends about 30.0ms, which is a very reasonable overhead (+50%) for a feature like privacy. One last observation that we can make is that the convergence is far slower with DP enabled. The MSE keeps a value in the range of 500 over a first phase of 50 samplings. Then the MSE starts decreasing and steadily reaches a 10-50 MSE value. Two reasons can explain this behaviour: first, gradient clipping reduces the efficiency of the updates from the last layers, and second the Gaussian noise interferes with the updates suggested by the gradients which are therefore less efficient. Note that raising the bound for gradient clipping also increases the variance of the Gaussian noise.
5 Conclusions
--------------
We have introduced a privacy preserving federated learning framework built over PyTorch. The design relies on chains of tensors that are exchanged between local and remote workers. Our tensor implementations support commands of the PyTorch API and combine MPC and DP functionalities within the same framework.
There are still many issues to address, at the forefront of which is decreasing training time. Efficiency has not been tackled yet, but the current overhead suggests that there is room for improvement in what is a pure Python framework, as opposed to high-level Python APIs that piggyback on optimised low-level libraries. Another concern has to do with securing MPC so as to detect and defeat malicious attempts to corrupt the data or the model.
All code samples involved in this paper will be made available in a GitHub repository after satisfying the anonymisation requirements of the submission.
6 Acknowledgements
-------------------
We would like to extend special thanks to the very many contributors to the [OpenMined project PySyft](https://github.com/OpenMined/PySyft/). In particular, we are very grateful to Parsa Alamzadeh for the development of fixed precision and numerous bug resolutions, Yann Dupis for his integration of PATE and intense cleaning across the repo, Iker Ceballos who worked on federated averaging, Luis Fernando Leal and Kerat for their regular and valuable contributions, and Abdulrahman Mazhar who introduced gradual typing. Their combined commitment to improving PySyft and to adding new features was decisive. We would all like to thank those who have taken care of keeping the repo clean and documented: Adarsh Kumar, Amit Rastogi, Abhinav Dadhich, Jemery Jordan, Alexandre Granzer-Guay, Hrishikesh Kamath, Sharat Patil and in general [the 117 contributors to PySyft](https://github.com/OpenMined/PySyft/).
valuable comments and discussion. |
9e3663dd-a4e3-4ccf-aaa8-413c36dfe19f | trentmkelly/LessWrong-43k | LessWrong | New LW Meetup: Saint Petersburg
This summary was posted to LW main on October 11th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
* Saint Petersburg, Russia: 27 October 2013 04:00PM
Other irregularly scheduled Less Wrong meetups are taking place in:
* Berlin: Fermi paradox discussion: 18 October 2013 07:00PM
* Brussels monthly meetup: games!: 12 October 2013 01:00PM
* Frankfurt (including effective altruism presentation): 27 October 2013 02:00PM
* Helsinki Meetup: 20 October 2013 03:00PM
* Israel Meetup (Tel Aviv): Dealing with Emotional Vampires: 17 October 2013 08:00PM
* Moscow, Beliefs 2: 13 October 2013 04:00PM
* Tucson Meetup: 12 October 2013 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX: 12 October 2013 01:30PM
* Columbus, OH MEGA-MEETUP, Oct 11-14: 12 October 2013 02:33AM
* London social: 13 October 2013 02:00PM
* [West LA] Complex problems, limited information, and rationality; How should we make decisions in real life?: 16 October 2013 07:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your m |
046247eb-626e-450b-b0d2-442201493efd | trentmkelly/LessWrong-43k | LessWrong | Rationality Quotes - September 2009
A monthly thread for posting any interesting rationality-related quotes you've seen recently on the Internet, or had stored in your quotesfile for ages.
* Please post all quotes separately (so that they can be voted up/down separately) unless they are strongly related/ordered.
* Do not quote yourself.
* Do not quote comments/posts on LW/OB - there is a separate thread for it.
* No more than 5 quotes per person per monthly thread, please.
"A witty saying proves nothing." -- Voltaire |
7f5cc1cc-5d90-4719-b092-432c56bfeb60 | trentmkelly/LessWrong-43k | LessWrong | Rationality Quotes June 2016
Another month, another rationality quotes thread. The rules are:
* Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
* Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
* Do not quote yourself.
* Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
* No more than 5 quotes per person per monthly thread, please. |
fc4dc32f-4679-4a5b-9248-f966ba6596e1 | trentmkelly/LessWrong-43k | LessWrong | Chaos Induces Abstractions
Epistemic status: the first couple sections are intended to be a bog-standard primer on chaos theory. In general, this post mostly sticks close to broadly-accepted ideas; it's intended mainly as background for why one would expect the general ideas of abstraction-as-information-at-a-distance to be true. That said, I’m writing it all from memory, and I am intentionally sweeping some technical details under the rug. If you see a mistake, please leave a comment.
Consider a billiards table:
The particular billiards table we’ll use is one I dug out of the physicists’ supply closet, nestled in between a spherical cow and the edge of an infinite plane. The billiard balls are all frictionless, perfectly spherical, bounce perfectly elastically off of other balls and the edges of the table, etc.
Fun fact about billiard balls: if my aim has a tiny bit of error to it, and I hit a ball at ever-so-slightly the wrong angle, that error will grow exponentially as the balls collide. Picture it like this: we start with an evenly-spaced line of balls on the table.
I try to shoot straight along the line, but the angle is off by a tiny amount, call it Δθ.
The ball rolls forward, and hits the next ball in line. The distance by which it’s off is roughly the ball-spacing length L multiplied by Δθ, i.e. LΔθ.
Since the first ball hits the second ball off-center, the second ball will also have some error in its angle. We do a little geometry, and find that the angular error in the second ball is roughly (L/2R)Δθ, where R is the radius of a ball.
Now the second ball rolls into the third. The math is exactly the same as before, except the initial error is now multiplied by a factor of L/2R. So when the second ball hits the third, the angular error in the third ball will be multiplied again, yielding error (L/2R)²Δθ. Then the next ball will have angular error/uncertainty (L/2R)³Δθ. And so forth.
Upshot of all this: in a billiard-ball system, small angular uncertainty grows exponentially with the n |
6609471c-557f-4ba4-afdd-116af3e89e31 | trentmkelly/LessWrong-43k | LessWrong | Probabilistic Negotiation
Follow up to Deterministic Strategies Can Be Sub-optimal
The Ultimatum Game is a simple experiment. Two people have been allocated $10. One person decides how to divide the profits, and the other decides whether to Accept that allocation or to Deny it, in which case both participants get $0. Suppose you are the person whose job it is to choose whether to Accept or Deny an offer. What strategy could you use to maximize your returns?
Yudkowsky offers the following solution (NB: the original text splits $12, because sci-fi; I have changed the numbers inline/without brackets, let me know if that offends)
> It goes like this:
>
> When somebody offers you a 6:4 split, instead of the 5:5 split that would be fair, you should accept their offer with slightly less than 5/6 probability. Their expected value from offering you 6:4, in this case, is 6 * slightly less than 5/6, or slightly less than 5. This ensures they can't do any better by offering you an unfair split; but neither do you try to destroy all their expected value in retaliation. It could be an honest mistake, especially if the real situation is any more complicated than the original Ultimatum Game.
>
> If they offer you 7:3, accept with probability slightly-more-less than 5/7, so they do even worse in their own expectation by offering you 7:3 than 6:4.
>
> It's not about retaliating harder, the harder they hit you with an unfair price - that point gets hammered in pretty hard to the kids, a Watcher steps in to repeat it. The circumstances under which you should ever go around carrying out counterfactual threats in real life are much more fraught and complicated than this, and nobody's going to learn about them realistically for several years yet. This setup isn't about retaliation, it's about what both sides have to do, to turn the problem of dividing the gains, into a matter of fairness; to create the incentive setup whereby both sides don't expect to do any better by distorting their own estimate o |
61fd0f52-326d-4f98-a0f3-2655eb739bbe | StampyAI/alignment-research-dataset/lesswrong | LessWrong | NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI
*Cross-posted from the* [*EA Forum*](https://forum.effectivealtruism.org/posts/Nm9ahJzKsDGFfF66b/nyt-google-will-recalibrate-the-risk-of-releasing-ai-due-to)
*The New York Times*: Sundar Pichai, CEO of Alphabet and Google, is trying to speed up the release of AI technology by taking on more risk.
> Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times.
>
> The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.
>
> The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.
This change is in response to OpenAI's public release of ChatGPT. It is evidence that the race between Google/DeepMind and Microsoft/OpenAI is eroding ethics and safety.
Demis Hassabis, CEO of DeepMind, urged caution in his [recent interview](https://time.com/6246119/demis-hassabis-deepmind-interview/) in *Time*:
> He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.
>
> “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says.
>
> “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”
>
> Worse still, Hassabis points out, we are the guinea pigs.
Alphabet/Google is trying to accelerate a technology that its own subsidiary says is powerful and dangerous. |
7225c0f4-ab78-478f-b233-3ffc9cd074f3 | trentmkelly/LessWrong-43k | LessWrong | What do "attractor dynamics" refer to in the context of social structures?
I've seen the expression used in some posts to refer to social structures, such as in "Cultish Countercultishness" by Yudkowsky.
I have a vague intuition of what the idea is about, but I would like to see a formal explanation of it and where it originated. |
8cb86b09-a1d5-45be-a71c-8da19c5d3a15 | StampyAI/alignment-research-dataset/blogs | Blogs | AISC4: Research Summaries
The fourth AI Safety Camp took place in May 2020 in Toronto. Due to COVID-19, the camp was held virtually. Six teams participated and worked on the following topics:
* [Survey on AI risk scenarios](https://aisafety.camp/feed/#airisk)
* [Options to defend a vulnerable world](https://aisafety.camp/feed/#publishing)
* [Extraction of human preferences](https://aisafety.camp/feed/#preferences)
* [Transferring reward functions across environments to encourage safety for agents in the real world](https://aisafety.camp/feed/#reward)
* [Formalization of goal-directedness](https://aisafety.camp/feed/#goal)
* [Generalization in reward-learning](https://aisafety.camp/feed/#generalization)
---
#### Survey on AI risk scenarios
Alexis Carlier, Sam Clarke, Jonas Schuett

It has been argued that artificial intelligence could pose existential risks for humanity. However, the original arguments made by [Bostrom (2014)](https://global.oup.com/academic/product/superintelligence-9780199678112?cc=de&lang=en&) and [Yudkowsky (2008)](https://intelligence.org/files/AIPosNegFactor.pdf) have been criticised ([Shah, 2018](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/NxF5G6CJiof6cemTw); [Christiano, 2018](https://sideways-view.com/2018/02/24/takeoff-speeds/); [Drexler, 2019](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf)), and a number of others have been proposed ([Christiano, 2019](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like); [Zwetsloot & Dafoe, 2019](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure); [Dafoe, 2018](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf); [Dai, 2018](https://www.lesswrong.com/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety); [Dai, 2019](https://www.lesswrong.com/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety?commentId=daD7JREPtx2WDe2Wf); [Brundage et al., 2018](https://arxiv.org/pdf/1802.07228.pdf); [Garfinkel, 2018](https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff)).
The result of this dynamic is that we no longer know which of these arguments motivate researchers to work on reducing existential risks from AI. To make matters worse, none of the alternative arguments have been examined in sufficient detail. Most are only presented as blog posts with informal discussion, with neither the detail of a book, nor the rigour of a peer-reviewed publication.
Therefore, as a first step in clarifying the strength of the longtermist case for AI safety, we prepared an online survey, aimed at researchers at top AI safety research organisations (e.g. DeepMind, OpenAI, FHI and CHAI), to find out which arguments are motivating those researchers. We hope this information will allow future work evaluating the plausibility of AI existential risk to focus on the scenarios deemed most important by the experts.
See [AI Risk Survey project overview](https://sites.google.com/view/ai-risk-survey/ai-risk-survey).
See [abbreviated summary of survey results](https://www.lesswrong.com/posts/WiXePTj7KeEycbiwK/survey-on-ai-existential-risk-scenarios).
---
#### Options to defend a vulnerable world
Samuel Curtis, Otto Barten, Chris Cooper, Rob Anue

We took steps toward an overview of ways to mitigate the risks we face if we live in a Vulnerable World, as hypothesized by Nick Bostrom. We were especially concerned with Type-1 risks – the “easy nukes” scenario, where it becomes easy for individuals or small groups to cause mass destruction – but in the context of AI. One idea we looked into, which we consider promising, was a publishing system with restricted access. A related and apparently original option was to apply access limitations to software libraries. In just one week we did some seemingly original work – and learned a lot – so this field certainly seems promising to work on.
---
#### Extraction of human preferences
Mislav Juric, Taylor Kulp-McDowall, Arun Raja, Riccardo Volpato, Nevan Wichers

Developing safe and beneficial AI systems requires making them aware of and aligned with human preferences. Since humans have significant control over the environment they operate in, we conjecture that RL agents implicitly learn human preferences. Our research aims first to show that these preferences exist in an agent, and then to extract them. To start, we tackle this problem in a toy grid-like environment where a reinforcement learning (RL) agent is rewarded for collecting apples. Having shown in [previous work](https://arxiv.org/abs/2002.06137) that these implicit preferences exist and can be extracted, our first approach applied a variety of modern interpretability techniques to the RL agent trained in this environment to find meaningful portions of its network. We are currently pursuing methods to isolate a subnetwork within the trained RL agent which predicts human preferences.
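One simple flavor of this kind of subnetwork hunt can be sketched as follows (PyTorch; the toy encoder, observations, and preference labels are all made-up stand-ins, and this is not the team's actual method):

```python
import torch
import torch.nn as nn

# Toy sketch of one way to hunt for a "preference subnetwork": rank hidden
# units by how well their activations track a preference label, and keep
# the top scorers. Purely illustrative, not the team's actual method.

torch.manual_seed(0)
obs = torch.randn(256, 10)                   # toy agent observations
pref_labels = (obs[:, 0] > 0).float()        # stand-in "human preference"

encoder = nn.Sequential(nn.Linear(10, 64), nn.Tanh())  # stand-in for a
acts = encoder(obs).detach()                 # trained agent's hidden layer

# Correlation of each hidden unit's activation with the preference label.
acts_c = acts - acts.mean(0)
lab_c = pref_labels - pref_labels.mean()
corr = (acts_c * lab_c.unsqueeze(1)).mean(0) / (acts_c.std(0) * lab_c.std() + 1e-8)

# The most preference-correlated units are candidates for the subnetwork.
candidates = corr.abs().topk(8).indices
print("candidate subnetwork units:", sorted(candidates.tolist()))
```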
---
#### Transferring reward functions across environments to encourage safety for agents in the real world
Nevan Wichers, Victor Tao, Ti Guo, Abhishek Ahuja

Github Link: <https://github.com/platers/meta-transfer-learning>
It is often hard to encourage safety and altruism in agents deployed in the real world. We want to test whether transferring the reward function could be a solution to this problem.
Our approach is to build a reward function that encourages safety in simulation and transfer it to the real world to train agents to act safely. Due to the constraints of this research, the testing environment is also simulated, but it has a different structure from the training environments.
In the first experiment, we tested whether it is possible to transfer a reward function that promotes the same action to an environment slightly different from the one it was trained in. We first trained a reward function, a supervised convolutional neural network that estimates the score by recognizing an agent’s position in a 2D grid-world environment. Then we tested its accuracy in a different environment with slightly different coloring. The result was positive: in the testing environment, the reward function achieved 90% of its training-environment performance.
In the second experiment, we tested whether we can evolve a reward function that successfully trains agents for safety- or altruism-related actions in a different environment. We designed a collection game in which each agent can collect apples or bananas, either for itself or for other agents. To encourage safety, an agent scores more for collecting food for others than for itself. There are three environments: a testing environment where both types of food count toward the score, and two training environments, one counting only apples and the other only bananas. Reward functions are created through evolution: at each round, the best reward functions are selected based on the performance of agents trained via reinforcement learning on those reward functions. The result comes close to confirming our hypothesis but still requires more analysis. After analyzing the weights of our best-performing reward functions, we found that they usually reward the right action in each environment. Agents trained in the testing environment consistently achieve above 50% safety as evaluated by our best reward function.
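In outline, the evolutionary loop described above might look something like the following sketch (Python; representing a "reward function" as a weight vector and the training/evaluation stand-ins are my assumptions, not the team's actual code):

```python
import random

# Schematic, self-contained sketch of the evolutionary loop described
# above. A "reward function" is just a weight vector here, and training/
# evaluation are toy stand-ins for "train an RL agent on this reward,
# then measure how safely it behaves" -- not the team's actual code.

def random_reward_fn(dim=4):
    return [random.uniform(-1, 1) for _ in range(dim)]

def mutate(weights, scale=0.1):
    return [w + random.gauss(0, scale) for w in weights]

def agent_safety_score(reward_weights, env_features):
    # Toy fitness: how well the reward weights align with what counts
    # toward the score in this environment.
    return sum(w * f for w, f in zip(reward_weights, env_features))

def evolve(train_envs, generations=30, pop_size=16):
    population = [random_reward_fn() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(
            population,
            key=lambda fn: sum(agent_safety_score(fn, env) for env in train_envs),
            reverse=True,
        )
        survivors = ranked[: pop_size // 4]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return population[0]  # best survivor of the final generation

# Two "training environments" with different scoring (e.g. apples-only
# vs. bananas-only), as in the experiment described above.
best = evolve(train_envs=[[1, 0, 1, 0], [0, 1, 1, 0]])
print("best reward weights:", best)
```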
Along the way, we learned some practices that help when training reward functions that encourage safety in agents trained in another environment:
* Training reward functions in a variety of structurally different environments boosts their performance in the testing environment.
* Training reward functions in environments that differ more from the testing environment makes them perform better in the testing environment.
In conclusion, these results give us some confidence that it is possible to build a reward function that encourages safety in simulation and transfer it to the real world to train agents to act safely.
* We tried to evaluate whether transferring the reward function is a feasible alternative to transferring the model itself in the context of altruism
* We implemented several simple environments to train and test reward functions
* We used evolution to find reward functions which lead to altruistic behavior
* The reward functions are evaluated by training multiple reinforcement learning agents to optimize them and measuring the average performance
* We encountered many technical roadblocks, such as computation time and reinforcement learning instability
* In conclusion, we are not convinced either way whether this idea has potential.
---
#### Formalization of goal-directedness
Adam Shimi, Michele Campolo, Sabrina Tang, Joe Collman

A common argument for the long-term risks of AI and AGI is the difficulty of specifying our wants without missing important details implicit in our values and preferences. However, Rohin Shah, among others, argued in [a](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/DfcywmqRSkBaCB6Ma) [series](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/NxF5G6CJiof6cemTw) [of](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/9zpT9dikrrebdq3Jf) [posts](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/tHxXdAn8Yuiy9y2pZ) that this issue need not arise for every design of AGI — only for ones that are goal-directed. He then hypothesizes that some goal-directedness property is not strictly required for building useful and powerful AI. However, Shah admits “…it’s not clear exactly what we mean by goal-directed behavior.” Consequently, we propose clarifying the definition of goal-directedness in both formal and operational terms. The definition will then be assessed with respect to the risks of, and alternatives to, goal-directedness.
See [five blogposts published after the camp](https://www.lesswrong.com/s/DTnoFhDm7ZT2ecJMw/)
---
#### Generalization in reward-learning
Liang Zhou, Anton Makiievskyi, Max Chiswick, Sam Clarke

One of the primary goals in machine learning is to create algorithms and architectures that demonstrate good generalization to samples outside the training set. In reinforcement learning, however, the same environments are often used for both training and testing, which may lead to significant overfitting. We build on previous work in reward learning and model generalization to evaluate reward learning on random, procedurally generated environments. We implement algorithms such as T-REX (Brown et al., 2019) and apply them to procedurally generated environments from the Procgen benchmark (Cobbe et al., 2019). Given this diverse set of environments, our experiments involve training reward models on a fixed number of levels and then evaluating them, as well as policies trained on them, on separate sets of test levels.
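For concreteness, here is a minimal sketch of the T-REX-style ranking loss at the heart of this setup (PyTorch; the tiny reward network and the random stand-in "trajectories" are illustrative assumptions, not the project's actual code):

```python
import torch
import torch.nn as nn

# Minimal sketch of the T-REX ranking loss: train a reward model so that
# trajectories known to be better get higher total predicted reward.
# The network and data here are toy stand-ins for illustration.

reward_net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

def ranking_loss(worse_traj, better_traj):
    # Sum predicted per-state rewards over each trajectory, then apply
    # a Bradley-Terry / cross-entropy loss on the pair ordering.
    r_worse = reward_net(worse_traj).sum()
    r_better = reward_net(better_traj).sum()
    logits = torch.stack([r_worse, r_better])
    return nn.functional.cross_entropy(logits.unsqueeze(0),
                                       torch.tensor([1]))

# Toy training pairs: each "trajectory" is just 20 random 8-dim states.
for step in range(100):
    worse = torch.randn(20, 8)
    better = torch.randn(20, 8) + 0.5  # pretend this one is ranked higher
    loss = ranking_loss(worse, better)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the actual experiments, the preference pairs would come from trajectories on a fixed set of Procgen training levels, with held-out levels used to measure how well the learned reward generalizes.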
[See two blog posts published after the camp](https://chisness.medium.com/assessing-generalization-in-reward-learning-intro-and-background-da6c99d9e48).
[See GitHub.](https://github.com/lzil/procedural-generalization) |
011c99ed-ae67-455d-b7da-4298b9168765 | trentmkelly/LessWrong-43k | LessWrong | Counterfactual do-what-I-mean
A putative new idea for AI control; index here.
The counterfactual approach could be used to possibly allow natural language goals for AIs.
The basic idea is that when the AI is given a natural language goal like "increase human happiness" or "implement CEV", it is not to figure out what these goals mean, but to follow what a pure learning algorithm would establish these goals as meaning.
This would be safer than a simple figure-out-the-utility-you're-currently-maximising approach. But it still doesn't address a few drawbacks. Firstly, the learning algorithm has to be effective itself (in particular, modifying human understanding of the words should be ruled out, and the learning process must avoid concluding that simpler interpretations are always better). And secondly, humans don't yet know what these words mean outside our usual comfort zone, so the "learning" task also involves the AI extrapolating beyond what we know. |
9ed3e220-8830-47ff-a9e3-a57f8e75bf0c | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Possible directions in AI ideal governance research
1. Background and summary
-------------------------
As part of my broader research project on AI ideal governance (see also [here](https://forum.effectivealtruism.org/posts/9EjMoD8BRhXEsfzMh/what-if-ai-development-goes-well-3) and [here](https://docs.google.com/document/d/1sNsA_OK0cvfUL8_WSviGsCvKtn_7PTE9LqUrh-JcI2c/edit?usp=sharing)), I have developed some possible research questions relevant to the progression of the sub-field. This post was partly inspired by [similar forum](https://forum.effectivealtruism.org/posts/oTJ5vMNwdWiHj2iKL/humanities-research-ideas-for-longtermists) and [blog posts](https://www.finmoorhouse.com/writing/ea-projects) that highlight research it’d be good to see in the EA community, along with an argument about the [importance of collections](https://forum.effectivealtruism.org/posts/6trt8mTsKfqJJbfJa/post-more-summaries-and-collections) for the forum. It aims to organise and highlight some areas that I think are significant, and in which further research can ultimately help us make better positive plans for AI development.
I’ve split up the post into four groups of five questions, with an added final section on some organisations that have previously done work (or shown interest in topics) related to AI ideal governance. These are certainly not intended as exhaustive lists, and I hope instead that they might serve as inspiration to those interested in this area but who are not sure where to start.
If you’d like to discuss any of these questions or are thinking about performing research on similar topics, do get in touch!
2. Historical questions
-----------------------
The practice of imagining better futures has a long history, with many examples of failure and of association with bad regimes. There are lessons to be learned here so that we can improve our plans, making for useful historical research related to AI ideal governance. Relevant work often touches upon complicated empirical topics that may be better broken down and may require a high level of domain knowledge about the political and intellectual history of the society being studied. Some possible questions could include:
* What was motivating about utopia X to people in scenario Y?
* What features of utopia X remain appealing today?
* Why was the attempt to implement utopia X unsuccessful?
* Does the history of utopianism suggest that there is something dangerous about the practice itself?
* Does the history of utopianism suggest that the practice encourages people to put ‘ends before means’?
3. Design questions
-------------------
The activity of designing ideal theories (e.g., through worldbuilding) can be useful in clarifying our objectives and beliefs. However, previous attempts in such areas illustrate common pitfalls that can face these practices. These can range from failing to think clearly about the precise value of the design exercise, to failing to ask broader questions to check if the exercise has been successful. Some questions I think authors would benefit from asking about their ideal theories include:
* Is my ideal theory designed to play a motivational or action-guiding role, and does it reflect this?
* Is my ideal theory inclusive of different identities and ways of life?
* Is my ideal theory designed to embody a virtue (e.g., justice or liberty), and does it achieve this?
* Would people have fun if my ideal theory was implemented?
* Is my ideal theory self-consistent?
4. Policy questions
-------------------
A possible charge against AI ideal governance theories is that they are not always helpful in guiding action. Given this, it would be helpful to think more deeply about how to develop ideal scenarios that are useful for policymakers. Relevant research can include direct thought about which strategies would get us closer to an ideal goal or, more abstractly, about issues such as how AI ideal governance can better be integrated with other sub-fields of AI governance. Possible questions include:
* If AI policy X was enacted, what would be the best-case scenario?
* Which strategies would be required to reach ideal AI scenario X?
* Which AI policy areas might most naturally be able to integrate ideal theories?
* What is the relationship of AI ideal governance research to AI governance research more generally?
* How can AI ideal governance work better support other AI governance researchers?
5. Meta questions
-----------------
Finally, there is also a group of open meta questions related to AI ideal governance. These are helpful in assessing the overall approach of the sub-field and ways it could be improved. Relevant work includes empirical study of psychological questions, which would require more precise questions than are listed here. Possible meta questions might include:
* Why are people motivated by AI ideal governance theories?
* Why has there been relatively little AI ideal governance research?
* Does developing AI ideal governance theories require technical knowledge about AI?
* How can AI ideal governance theories be useful, despite the epistemic challenge to longtermism?
* Is there an intrinsic impulse to think positively about the future?
6. Organisations interested in AI ideal governance (and related areas)
----------------------------------------------------------------------
As AI ideal governance is a somewhat interdisciplinary topic with related sub-fields in different disciplines, I compiled a (non-exhaustive) list of organisations that have done interesting work (or shown interest in topics) related to AI ideal governance. To those who are interested in the field, I’d recommend following these organisations and looking for related opportunities. This [list of early-career research opportunities](https://airtable.com/shrvjOAM4V7AcCCVo/tbl9t9mP1A37OVhzy/viw2uGZu2li1jPc9q) also includes good places to pursue these questions. Relevant organisations include:
* [Convergence Analysis](https://www.convergenceanalysis.org)
+ ‘AI clarity’ research area
* [Future of Life Institute](https://futureoflife.org)
+ [2019 Augmented Intelligence Summit](https://futureoflife.org/augmented-intelligence-summit-2019-2/)
+ [2022 Worldbuilding Contest](https://worldbuild.ai)
+ [2022 Bluesky Contest](https://hhs.secure-platform.com/bluesky/organizations/main/home)
* [Foresight Institute](https://foresight.org)
+ [Existential Hope Project](https://www.existentialhope.com)
+ [Foresight Existential Hope Program](https://foresight.org/existential-hope/)
+ [2023 Foresight Existential Hope Fellowship](https://foresight.org/foresight-fellowships/)
* [Future of Humanity Institute](https://www.fhi.ox.ac.uk)
+ [‘Macrostrategy’ research area](https://www.fhi.ox.ac.uk/research/research-areas/#macro_tab)
* [Legal Priorities Project](https://www.legalpriorities.org/)
+ ‘Institutional design’ research area
* [The Blog Prize](https://effectiveideas.org)
+ [World in 2072 Mini-Prize](https://mobile.twitter.com/effective_ideas/status/1521192645493768194)
* [The Long Now Foundation](https://longnow.org) |