id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
e563bc9f-1386-42f1-9b10-1637c0f0fd10 | trentmkelly/LessWrong-43k | LessWrong | Teachers: Much More Than You Wanted To Know
[Epistemic status: This is really complicated, this is not my field, people who have spent their entire lives studying this subject have different opinions, and I don’t claim to have done more than a very superficial survey. I welcome corrections on the many inevitable errors.]
I.
Newspapers report that having a better teacher for even a single grade (for example, a better fourth-grade teacher) can improve a child’s lifetime earning prospects by $80,000. Meanwhile, behavioral genetics studies suggest that a child’s parents have minimal (non-genetic) impact on their future earnings. So one year with your fourth-grade teacher making you learn fractions has vast effects on your prospects, but twenty-odd years with your parents shaping you at every moment doesn’t? Huh? I decided to try to figure this out by looking into the research on teacher effectiveness more closely.
First, how much do teachers matter compared to other things? To find out, researchers take a district full of kids with varying standardized test scores and try to figure out how much of the variance can be predicted by what school the kids are in, what teacher’s class the kids are in, and other demographic factors about the kids. So for example if the test scores of two kids in the same teacher’s class were on average no more similar than the test scores of two kids in two different teachers’ classes, then teachers can’t matter very much. But if we were consistently seeing things like everybody in Teacher A’s class getting A+s and everyone in Teacher B’s class getting Ds, that would suggest that good teachers are very important.
Here are the results from three teams that tried this (source, source, source):
These differ a little in that the first one assumes away all noise (“unexplained variance”) and the latter two keep it in. But they all agree pretty well that individual factors are most important, followed by school and teacher factors of roughly equal size. Teacher factors explain somewhere |
2814cca0-5441-40d2-8309-2a1cb5142bcc | trentmkelly/LessWrong-43k | LessWrong | What did governments get right? Gotta list them all!
When predicting future threats, we also need to predict future policy responses. If mass pandemics are inevitable, it matters whether governments and international organisations can rise to the challenge or not. But its very hard to get a valid intuitive picture of government competence. Consider the following two scenarios:
* Governments are morasses of incompetence, saturated by turf wars, perverse incentives, inefficiencies, regulatory capture, and excessive risk aversion. The media reports a lot of the bad stuff, but doesn't have nearly enough space for it all, as it has to find some room for sport and naked celebrities. The average person will hear 1 story of government incompetence a day, anyone following the news will hear 10, a dedicated obsessive will hear 100 - but this is just the tip of the iceberg. The media sometimes reports good news to counterbalance the bad, at about a rate of 1-to-10 of good news to bad. This rate is wildly over-optimistic.
* Governments are filled mainly by politicians desperate to make a positive mark on the world. Civil servants are professional and certainly not stupid, working to clear criteria with a good internal culture, in systems that have learnt the lessons of the past and have improved. There is a certain amount of error, inefficiency, and corruption, but these are more exceptions than rules. Highly politicised issues tend to be badly handled, but less contentious issues are dealt with well. The media, knowing that bad news sells, fills their pages mainly with bad stuff (though they often have to exaggerate issues). The average person will hear 1 story of government incompetence a day, anyone following the news will hear 10, a dedicated obsessive will hear 100 - but some of those are quite distorted. The media sometimes reports good news to counterbalance the bad, at about a rate of 1-to-10 of good news to bad. This rate is wildly over-pessimistic.
These two situations are, of course, completely indistinguishable f |
2b44377a-4d13-4c9c-8943-6aaacbeda50f | trentmkelly/LessWrong-43k | LessWrong | Book Review: Weapons of Math Destruction
Epistemic Status: Minus One Million Points
Shortness Status: Long (this is a proposed new norm I want to try out, in the sense of ‘apologies for writing a long letter, I did not have enough time to write a shorter one.’ By contrast, Against Facebook was longer in words, but would be short.)
Weapons of Math Destruction is an easy read, but a frustrating one.
The book claims to be about the misuse of big data and machine learning to guide decisions, how that harms people and leads to bad outcomes, and how to fix it. The distortions of aiming at and rewarding what we are measuring rather than what we actually want worries me more and more. It is one of the biggest issues of our age, so I was excited to read a new take on it, even if I expected to already know the bulk of the facts and ideas presented.
There is some of that in the book, and those parts provide some useful information, although if you are reading this you likely already know a lot of it.
I.
What the book is actually mostly about on its surface, alas, is how bad and unfair it is to be a Bayesian. There are two reasons, in her mind, why using algorithms to be a Bayesian is just awful.
The first objection is that probabilistic algorithms are probabilistic. It is just awful that predictive algorithms are used to decide who should get or keep a job, or get cheaper credit, or see certain advertisements, because the algorithm might be wrong. Look at this example of someone the algorithm got wrong! Look at this reason the algorithm got it wrong! Look how wrong it is! Clearly we need to rely on humans, who get things wrong more often, but do so in a less systematic fashion so we can’t prove exactly why any given human got something wrong.
The second objection is that algorithms rank people and options likely to be better above people and options likely to be worse. It is just awful that an algorithm notices that people who have bad credit, or live in a certain zip code, or shop at a certain store, or s |
bbaec69a-646c-423e-b32c-fa2a84b90731 | trentmkelly/LessWrong-43k | LessWrong | Zero-sum conversion: a cute trick for decision problems
A while ago, we were presented with an interesting puzzle, usually just called "Psy-kosh's non-anthropic problem." This problem is not, as is made clear, an anthropic problem, but it generates a similar sort of confusion by having you cooperate with people who think like you, and you're unsure which of these people you are.
In the linked post, cousin_it declares "no points for UDT," which is why this post is not called a total solution, but a cute trick :) What I call zero-sum conversion is just a way to make the UDT calculations (that is, the things you do when calculating what the actual best choice is) seem obvious - which is good, since they're the ones that give you the right answer. This trick also makes the UDT math obvious on the absent-minded driver problem and the Sleeping Beauty problem (though that's trickier).
The basic idea is to pretend that your decision is part of a zero-sum game against a non-anthropic, non-cooperating, generally non-confusing opponent. In order to do this, you must construct an imaginary opponent such that for every choice you could make, their expected utility for that choice is the negative, the opposite of your expected utility. Then you simply do the thing your opponent likes least, and it is equivalent to doing the thing you'll like best.
Example in the case of the non-anthropic problem (yes, you should probably have that open in another tab):
Your opponent here is the experimenter, who really dislikes giving money to charity (characterization isn't necessary, but it's fun). For every utilon that you, personally, would get from money going to charity when you say "yea" or "nay," the experimenter gets a negative utilon.
Proof that the experimenter's expected utilities are negative yours is trivial in this case, since the utilities are opposites for every possible outcome, including cases where you're not a decider. But things can be trickier in other problems, since expected utilities can be opposites without th |
23e14735-a73b-4760-a918-9c8bbb05a5b7 | trentmkelly/LessWrong-43k | LessWrong | Most Functions Have Undesirable Global Extrema
(Crossposted from my blog Centerless Set)
It’s hard to come up with a workably short function that includes human civilization in its global maximum/minimum.
This is a problem because we specify our goals to AI using functions. For any practical function we might use, there’s a set of strange and undesirable worlds that satisfy that function better than our world.
For example, if our function measures the probability that some particular glass is filled with water, the space near the maximum is full of worlds like “take over the galaxy and find the location least likely to be affected by astronomical phenomena, then build a megastructure around the glass designed to keep it full of water”.
On top of this, what the AI would actually do if we trained it on that function is hard to predict, because it would learn values that instrumentally help it in our training data but not necessarily in new environments. This would result in strange, unintuitive values whose maximum is equally if not more unlikely to include human civilization.
Here I’ll list some potential ideas that might come to mind, along with why I think they’re insufficient:
Idea:
Don’t specify our goals to AI using functions.
Flaw:
Current deep learning methods use functions to measure error, and AI learns by minimizing that error in an environment of training data. This has replaced the old paradigm of symbolic AI, which didn’t work very well. If progress continues in this direction, the first powerful AI will operate on the principles of deep learning.
Even if we build AI that doesn’t maximize a function, it won’t be competitive with AI that does, assuming present trends hold. Building weaker, safer AI doesn’t stop others from building stronger, less safe AI.
Idea:
Use long, complicated functions that represent our actual goals.
Flaw:
This is even more difficult than it sounds. It’s hard to specify a goal like “don’t affect anything except this glass and this pitcher of water” using a functi |
78020707-9d99-4d07-96a2-5c5f8125935c | trentmkelly/LessWrong-43k | LessWrong | Black Box Biology
Suppose you want to decrease your risk of heart disease. The conventional advice goes something like this:
* Eat a healthier diet with less LDL-cholesterol raising foods
* Exercise more
* Keep your blood sugar under control
* Don’t smoke, don’t sit too much and don't take 400mg of methamphetamine on a regular basis
An alternative strategy might be some kind of genetic intervention. For example, an active clinical trial by Verve Therapeutics aims to treat individuals with inherited high cholesterol by editing the PCSK9 gene.
These trials almost always start the same: there’s some rare disorder caused by a single gene. We have a strong mechanical understanding of how the gene causes the disorder. We use an animal model with an analogous disorder and show that by changing the gene we fix or at least ameliorate the condition.
This is the traditional approach. And despite being slow and limited in scope, it occasionally produces results like Casgevy, a CRISPR-based treatment for sickle cell and beta thallasemia which was approved by the UK in mid-November.
It might cost several million dollars. But it cures sickle cell! That has to count for something.
Most diseases, however, are not like sickle cell or beta thalassemia. They are not caused by one gene. They are caused by the cumulative effects of thousands of genes plus environmental factors like diet and lifestyle.
If we actually want to treat these disorders, we need to start thinking about biology (and genetic treatments) differently.
Black Box Biology
I think the conventional approach to genes and disorders is fundamentally stupid. In seeking absolute certainty about cause and effect, it limits itself to a tiny niche with limited importance. It’s as if machine learning researchers decided that the best way to build a neural network was to hand tune model parameters based on their intricate knowledge of feature representations.
You don’t need to understand the mechanism of action. You don’t need an anim |
53c110c9-b049-4b0e-9146-3589415d21d0 | trentmkelly/LessWrong-43k | LessWrong | Strategy Nonconvexity Induced by a Choice of Potential Oracles
One notable aspect of the extant cooperative oracle results is that they are in a different setting from the usual reflective oracle results.
The usual reflective oracle results show a fixed point in the space of potential oracles,[0,1]M×[0,1]∪Q×[0,1]M. The first component of this is a function query:M×[0,1]∪Q→[0,1] which dictates the probability of the potential oracle outputting 1 (0 otherwise), when asked about whether turing machine M outputs 1 with probability greater than some rational number. The second component of this is a function that maps each turing machine using the oracle to a probability of outputting a 1.
The cooperative oracle results elsewhere on IAFF show a fixed point in the space of strategies for a game, ∏i∈IΔ(Si), where Si is the set of strategies for the i’th player.
I was able to successfully translate the cooperative oracle existence proof from strategy-space to function-space. However, the further step of taking an arbitrary point that’s a fixed point in strategy space, and explicitly building a true reflective oracle in function-space that results in that fixed point being played in the associated game, ran into some problems. There were assumptions that needed to be made to get that to work out, but the most unrealistic by far was the requirement of strategy continuity.
As an example, in the game of matching pennies, if the probability of the other player outputting “heads” is 0.5, the set of acceptable strategies goes from a simple “heads” or “tails” to “any possible mixture of heads and tails”, while this requirement mandates that the strategy chosen just shifts very fast from “all heads” to “all tails” in the vicinity of 0.5.
At the heart of this insufficiently strong result is the question of “just what does it mean to have a convex set of possible responses without one picked out in particular?” One possible interpretation is that the convex set comes from the turing machine/player having a probability of not halting, so ther |
50baf671-bdd3-4583-a8c8-aa4be470c3a3 | trentmkelly/LessWrong-43k | LessWrong | Logical Counterfactuals and Proposition graphs, Part 3
,
Notation note, [ ] are for lists, ( ) are for functions.
Note that many of the words and symbols I am using are made up. When this maths is better understood, someone should reinvent the notation. My notation isn't that great, but its hard to make good notation when you are still working out what you want to describe.
A theory (of my new type, not standard maths) T is formally defined to be,
T=[ψ,ρ,Ξ,type,arity]
Where ψ={s1,s2,...} are the theories symbols. These can be arbitrary mathematical objects.
Ξ={Ξ1,Ξ2,...} is the set of types, also arbitrary.
type:ψ→Ξ is a function.
arity:ψ→∪∞i=0Ξ×Ξ×⋯×Ξi
Is also a function.
An expression E in a theory is defined recursively, it consists of a pair E=[s,v1,v2,⋯,vn]. Here s∈ψ and ∀1≤i≤n:vi is an expression.
Let arity(s)=[x1,x2,...xm]
Sometimes we will write type(E), what we really mean is type(s), and arity(E)=arity(s) We write symbol(E)=s when we want to refer to just s and ignore v1,...,vn
Expressions have the restriction that m=n the number of elements of Ξ that s is mapped to is the same as the number of other expressions.
We also insist that for all i, type(vi)=xi
The base case happens when arity(s)=[ ] the empty list.
All expressions are strictly finite mathematical objects. There are no self referential expressions.
Expressions can be written s(v1,...,vn)
We can define equality between expressions e=s(v1,...) and f=t(w1,...) recursively by saying e=f iff symbol(e)=symbol(f) and forall i:vi=wi
The distinction between expressions e,f is defined recursively.
e−f=[e,f] if symbol(e)≠symbol(f)
e−f=[e,f] if ∃i≠j∈N:vi≠wi and vj≠wj
e−f=None if e=f
e−f=vi−wi if ∃1 i:vi≠wi
These can be uniquely expressed as strings made up of Ξ∪{ ′(′, ′,′, ′)′}
Lets define V(n)=V(Ξn) to be the set of all possible expressions of type Ξn.
A function f:V(n1)×...V(nt)→V(m) is called basic if it is constant, or it is the projection function ( so f(e1,...et)=ek for fixed k≤t) or f can be expressed as
f(e1,. |
b3382709-b255-403c-9713-bb1086d3b175 | trentmkelly/LessWrong-43k | LessWrong | How to Build Heaven: A Constrained Boltzmann Brain Generator
Abstract
This article offers a blueprint for how humanity could construct a realistic version of heaven once artificial superintelligence is achieved. This introduces an architectural framework for a "heaven generator," a realistic construction of a system designed to realize infinite joy and fulfillment for all possible consciousnesses. Inspired by the idea that the mere existence of a conscious brain state results in it being experienced, the system offers a detailed blueprint for how heaven could be implemented using principles of randomness, memory, and alignment with happiness rules. The strongest insight lies in defining how such a structure would function: by generating joyfully aligned conscious brain states, and preserving them in an eternal repository. Additionally, the article explores the computational requirements, as well as the philosophical implications of value, individuality, and universal inclusion for every form of desired conscious experience.
----------------------------------------
1. Introduction
This article outlines a novel framework for constructing a "heaven generator," a system designed to realize infinite joy and fulfillment for all possible consciousnesses. Inspired by the Boltzmann brain concept, but refined into a guided and structured application, the generator offers a way to systematically ensure every joyful state is experienced, preserved, and interconnected. By addressing computational architectures and philosophical implications, this proposal engages with core themes relevant to rationality, consciousness, and value alignment. Readers are invited to explore how infinite happiness might be achievable through this architecture.
----------------------------------------
2. Defining Heaven as a Generator
A heaven generator can be understood as:
1. Random: It generates new consciousnesses and states entirely at random, without initial bias.
2. Selective: It evaluates each state against specific happiness rules during gener |
4295096a-356a-4c18-84d6-f40060c0040a | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI self-improvement is possible
#### document purpose
This document mainly argues that human mental development implies that AI self-improvement from sub-human capabilities is possible.
#### document structure
For unambiguous referencing, sections are prefixed D: for description, L: for lemma, H: for hypothesis, or A: for argument. Descriptions are statements of facts, categorization, and definitions. Lemmas are linked arguments and conclusions that I have high confidence in. Hypotheses are views that at least some people have. Arguments are lines of reasoning that may or may not be correct.
#### D:abbreviations
* ANN = artificial neural network
* RL = reinforcement learning
* SI = self-improvement (other than direct RL)
* LLM = large language model (using a [Transformer](https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)) architecture)
* SDF = signed distance field
#### D:data
Current LLMs are empirically data-limited, as shown by the "Chinchilla scaling laws" described in [this paper](https://arxiv.org/abs/2203.15556).
#### D:efficiency
Humans are much more data-efficient than current LLMs. LLMs are trained on far more text than humans can read in their lifetime, yet their capabilities are (at least in many ways) much worse than humans. Considering D:data, this implies that humans use superior algorithms.
Humans are, for some tasks, more energy-efficient than current AI systems. A human brain uses only about 20 watts, which is much less than a modern GPU, but humans can do image segmentation faster than current ML models running on a single modern GPU.
#### H:plateau
Applying intelligence to improving intelligence has diminishing returns, so intelligence that becomes self-improving will plateau at a similar level of intelligence.
#### A:plateau
>
> We don't see humans or institutions become ultra-intelligent by recursive SI, so we know H:plateau is correct.
>
>
>
[Here's Robin Hanson](https://www.overcomingbias.com/p/the-betterness-explosionhtml) making this argument; that's still his view today.
#### D:genius
The genetic and morphological changes from early mammals to early apes were much greater than those from early apes to humans, but the absolute increase in intelligence from the latter step was much greater. A few more changes make the difference between an average human and the very smartest humans. Around human-level intelligence, returns do not seem to be diminishing.
#### H:human\_SI
D:genius happens because small improvements in genetically-specified neural systems are amplified as those initial systems generate higher-level systems or perhaps recursively self-improve.
#### H:human\_hyperparameters
D:genius happens because humans have better genetically-specified neural architectures, which when trained by RL produce better results.
#### D:prodigy
Exceptionally smart adult humans were, on average, more mentally capable as children, rather than less. Similarly, ANN architectures that plateau at better performance also generally tend to do better early in training.
#### L:childhood
Humans have the longest time-to-adulthood of any animal. Chimapzees also have unusually long childhoods. That is a large evolutionary disadvantage, so there must be a strong reason for it. Larger animals such as horses reach full size faster than humans, so it's not related to body growth. The only other explanation is that a long childhood is necessary for certain brain development.
Human children first learn very basic actions like moving their limbs, then proceed to progressively more abstract and complex tasks that build on previously developed capabilities.
#### A:not\_hyperparameters
If H:human\_hyperparameters was correct, then by D:prodigy, we probably wouldn't see L:childhood, but we do. So, H:human\_hyperparameters is probably wrong as an explanation for differences between humans and other animals, which makes H:human\_SI seem more likely.
That's not an argument against H:human\_hyperparameters being true for differences *between humans*, but if H:human\_SI is true at all, it's probably partly true for differences between humans.
#### L:reaction\_time
Human reaction times are similar to other animals of similar size.
Human children take longer to learn to do basic actions like walking and running than other animals. If that slower speed was because of much greater network depth, then we'd see slower reaction times, so the network depth for basic actions is similar, so that's not why human childhood development is relatively slow.
#### L:uneven\_evolution
Dolphins and elephants are both fairly intelligent, but human-level intelligence did not develop gradually at similar rates across several disparate species. This implies to me that development of human-level intelligence was not something that can be easily evolved in a stepwise manner with a smooth cost-benefit curve.
Thus, development of human (and pre-human) intelligence was not a matter of simply stacking more layers or scaling up brains, with no other changes.
#### A:childhood\_implications
Why do human children initially develop capabilities more slowly than most animals? Why is that a tradeoff for greater capabilities later?
What's the relevant difference in humans? Per L:reaction\_time and L:uneven\_evolution, it's unlikely to be deeper networks. Per D:prodigy, it's is unlikely to be architectures that do straightforward RL more slowly but eventually become better. As far as I can see, the only remaining possibility is that humans use SI for some things that most animals use RL for.
#### H:no\_drift
We can assume that AIs would design successors and self-modification in such a way that their goals would be preserved.
#### L:drift
The values of some agents will change over time. We know this because adult humans have different values from children.
Some agents will knowingly act in ways that change their values. We know this because some humans do that. If you convince someone that something - for example, researching a topic, going to college, or having a kid - would likely change their values, they're still likely to do that anyway. Thus, H:no\_drift is at least sometimes incorrect.
If agents would never produce an unaligned more-intelligent successor agent, then humans would never create unaligned superhuman AI, but some people are quite enthusiastic about doing that.
#### D:relative\_drift
Humans have much greater value drift than animals. [Instead of](https://www.npr.org/2023/03/19/1163341684/south-korea-fertility-rate) just having kids, smart people will do stuff like make model trains, build LIGO, or make a [complex open-source game](https://github.com/b-crawl/bcrawl/).
#### A:adaptation\_executors
Agents execute adaptations. Interpretation of agents as "goal-pursuers" is a simplifying abstraction used to make it easier to understand them, but agents only pursue goals to the extent that they have been trained/designed to do that and the current situation is similar to the training/test environment. Thus, H:no\_drift is not just incorrect, but fundamentally misguided.
#### A:domain\_shift
Neural networks with different weights can represent different problems, so when attempting to modify neural networks in such a way that goal-direction is preserved, changing weights is equivalent to changing the problem domain, so modification-managing systems that are task-specific will become maladapted as weights are changed.
#### L:monitoring
Humans are monitored by lower-level systems with higher authority. This is observable in ordinary human experience.
For example, consider a person Alice who is overweight and trying to lose weight. She isn't immediately getting food, but if someone puts a box of cakes in front of her, she will eat them despite consciously not wanting to. There is some system, sys\_hunger, which has high authority and can override her conscious desire to lose weight, but sys\_hunger is myopic and only overrides when food is immediately available. We know sys\_hunger has access to Alice's higher-level mental state because, for example, she won't try to eat what she knows is plastic fake food.
For another example, consider a student Ben that can't focus on his homework because he wants to call a girl (Jen) he likes. Ben says to himself, "If I get good grades, Jen will be more likely to date me." This triggers an inspection, where something looks at his mental state and concludes: "You don't really believe that doing your homework is going to get you a date with Jen. Rejected."
#### L:deception
Some agents will actively deceive their internal monitoring systems. We know this because humans do that.
An example considered good is someone addicted to cigarettes trying to trick themselves into smoking less. An example considered bad is anorexics sharing tips for tricking themselves into losing more weight.
#### A:stability
If the purpose of L:monitoring systems is to maintain alignment with evolutionarily-hardcoded goals, their presence is only beneficial for that if they have less value drift than higher-level systems. That would only be true in 2 cases:
* fine-tuning only: L:monitoring systems have little enough training from a hardcoded baseline to mostly retain their initial state.
* generative levels: Humans have systems that generate other systems or self-modify, and L:monitoring systems are earlier in a chain of system-generation or have more-limited self-modification.
#### D:addiction
Repeated usage of certain drugs (eg heroin) causes humans to become misaligned from their previous values and the values of society. This happens via feedback to low-level systems through channels that aren't normally available, which causes those low-level systems to change in a persistent way.
#### A:stability\_implications
To me, L:monitoring systems seem to be too adaptive to be fine-tuned hard-coded systems, which by A:stability implies that humans use generative levels. D:addiction is another reason to believe L:monitoring systems are not just fine-tuned to a limited extent.
That only makes evolutionary sense if the generated systems have greater capability or adaptability, and L:monitoring systems do seem to be significantly less capable in humans than the systems they monitor. This implies that SI to human levels can be done from sub-human capabilities.
#### H:drift\_bound
The extent of SI in humans is limited by increased value drift outweighing increased capabilities.
#### H:mutational\_load
Human capabilities are limited by high intelligence requiring high genetic precision that can be achieved only rarely with normal rates of mutation generation and elimination.
#### D:management\_methods
Consider an ANN **N1** that is part of a larger system. There are 5 basic ways in which a management system could manage N1:
* Control of hyperparameters such as learning rate. People have tried using ANNs to control hyperparameters during ANN training, but so far, results haven't been better than ADAM optimizers using simple formulas. Basic network architecture can also be considered a hyperparameter.
* Control of connection patterns between subsystems. This includes control of where inputs come from and where outputs go. [Mixture-of-expert](https://en.wikipedia.org/wiki/Mixture_of_experts) designs can be considered a type of this.
* Control of gradients and RL targets. An obvious way to modify gradients in a complex way is to use outputs from another ANN as the output targets; this technique is called **distillation**. Training a small ANN to copy the output probabilities of a larger ANN is an effective method to improve its performance: the wrong answers with higher probabilities are "good wrong answers" that help the small ANN organize its latent representation space better. Another type of distillation is training a network on outputs from more-specialized networks; see the [Distral paper](https://arxiv.org/abs/1707.04175) and [this application](https://www.youtube.com/watch?v=chMwFy6kXhs) of a variant of that method to soccer-playing robots.
* Direct modification of N1 weights. A network that generates weights for another network is a **hypernetwork**. Generation of weights for a network based on mixing weights from other networks is **neuroevolution**. Getting hypernetworks to work well has been difficult, but [HyperDiffusion](https://www.youtube.com/watch?v=wjFpsKdo-II) involves training diffusion ANNs on weights from ANNs overfit on [SDFs](https://www.youtube.com/watch?v=1b5hIMqz_wM) of 3d models, and seems to work well. Per-neuron modification of activation functions can also go in this category.
* **Augmentation** of N1 with tools like database lookup or programming languages. Some ANN systems use lookup in vector databases. Some LLM systems have been integrated with feedback from running code they generate.
#### A:synapses
Per [this post](https://www.lesswrong.com/posts/dTpKX5DdygenEcMjp/neuron-spike-computational-capacity), over a short time period, neuron behavior in brains is analogous to a sparse ANN with 1 weight per synapse.
Thus, if a ANN with that many parameters has its weights adjusted dynamically at some rate significantly slower than neuron firing speeds, some pattern of that adjustment is sufficient for human-level intelligence.
#### A:human\_management\_methods
Humans use at least some of D:management\_methods, and this is done consciously in normal life.
People have some ability to control how much they learn from particular experiences - to say, "that was a rare unlucky event" or "that's actually more common than people think". The fact that [LSD increases update rate](https://www.cambridge.org/core/journals/psychological-medicine/article/effect-of-lysergic-acid-diethylamide-lsd-on-reinforcement-learning-in-humans/28E41FEE97D3A8614C77DC54DF501489) indicates that there are controls for this, and if there are controls they are presumably used.
Humans can consciously control generation of neural systems from other systems, which can be considered directed neuroevolution. For example, consider a person John playing a game G1 for the first time. Someone tells John to start by pretending they're playing a game G2 that John is familiar with but combining that with activity A, and after that John rapidly improves. What happens in that situation is:
1. John activates a neural system S which has been trained on G2; this involves configuration switching.
2. The neurons of S are switched to a new mode that copies the weights for playing G2.
3. The new mode of S is trained on G1.
When John adjusts S according to skills trained on A, that involves distillation, with S being trained to produce outputs closer to the outputs of some system for A.
---
As best I can tell, I use a multiscale extension of SDF hyperdiffusion for 3d visualization, so I think humans are able to use hypernetworks. That's a less-convincing argument for other people, so let's consider someone visualizing a 3d object based on a verbal description.
ANNs can be trained by RL to provide an implicit representation of a specific 3d object (eg with SDFs or NERFs) but considering the speed of neurons, people are able to visualize 3d objects too quickly for that to be the method used.
A verbal description converted to a 3d visualization must first be encoded to a latent representation, then decoded from that to **every location** in the 3d representation. For 2d images, decoding to every pixel is reasonable, but decoding to every location in 3d is inefficient and leads to blocky voxel representations. Such voxel representations are incompatible with the experienced characteristics of 3d visualizations, so human 3d visualizations must involve some implicit representation step during generation, although that could later be converted into (eg) a textured surface form for greater efficiency. Generation of that implicit representation must involve input to the neurons of a representation-generating network across its structure, so a hypernetwork is involved.
---
Regarding augmentation, humans can obviously decide to use tools like notes or calculators for specific parts of tasks.
#### D:speed
Action potentials in most neuron dendrites travel at <100m/s. Electrical signals in wires travel ~10^6 times as fast.
Transistors are much faster than neurons. Synapses have a delay of ~0.5ms; individual transistors in CPUs can switch >10^7 times that fast.
#### conclusion
Per D:efficiency, large improvements in data-efficiency and energy-efficiency of AI systems from algorithmic and hardware improvements are possible.
The methods available for SI can be categorized by D:management\_methods. A:human\_management\_methods shows that humans use SI methods in directed ways. That usage implies that such SI is useful.
By the combined weight of A:childhood\_implications and A:stability\_implications, it's likely that humans use SI to bootstrap from sub-human capabilities to human-level capabilities. Per D:speed, an AI system could do such bootstrapping much more quickly. Based on human development speed, I'd expect that to take from 5 minutes to 1 day for a fully parallelized system, with that time multiplied by serialization of processing.
Per L:drift, A:adaptation\_executors, and A:domain\_shift, the goals of systems using SI are likely to change. Per D:relative\_drift and A:domain\_shift, change in goals should increase with the degree of SI, and large changes are probably inevitable for large amounts of SI. |
b9a114b8-69b5-4b25-9fac-547ca52d6eb4 | trentmkelly/LessWrong-43k | LessWrong | The National Defense Authorization Act Contains AI Provisions
A bunch of laws about AI were recently passed by Congress in the US.
The National Defense Authorization Act (NDAA) is the law which funds and organizes the Department of Defense of the United States. A new version of the law is passed periodically. For reasons beyond the scope of this post, it is one of a few bills which are considered "must pass" and therefore contain a lot of supplemental legislation, lately including action on COVID-19.
Hidden in the bill are AI provisions, both within the Department of Defense and for civilian agencies. There is a summary of the relevant sections provided by the Institute for Human-centered Artificial Intelligence (HAI) at Stanford. There is an even shorter summary, bullet-point style, at Import AI which is where I got the information.
It is primarily a matter of bureaucratic assignments: different offices and/or agencies are directed to produce reports describing their impacts; new committees are formed; the mandate for existing agencies/committees are expanded to include new facets of AI. The best news is that it includes one provision for ethics, including safety, but I remind everyone that these are in the generic sense of the term which includes uses and capability of machine learning rather than a focus on AGI. So think robots working alongside humans and error margins on deep learning target assessments rather than value alignment.
I will break each item from HAI's summary into a separate comment below, so we can discuss them individually. The ones beginning with CIV are civilian, and with DOD are Department of Defense. |
82e088e9-7d4e-4ac8-ae47-39f87b81da08 | trentmkelly/LessWrong-43k | LessWrong | SL4 in more legible format
Does anyone have a copy of the SL4 archives in a format that's easier to read - e.g. a single file?
If not, I'd be happy to pay someone to put this together; let me know if interested. |
f7bfc02d-056b-4b28-b3d0-0af4215b658d | trentmkelly/LessWrong-43k | LessWrong | You can now apply to EA Funds anytime! (LTFF & EAIF only)
The Long-Term Future Fund (LTFF) and the EA Infrastructure Fund (EAIF) are looking for grant applications:
* You can now apply for a grant anytime. We have removed the previous round-based system, and now aim to evaluate most grants within 21 days of submission (and all grants within 42 days), regardless of when they have been submitted. If you indicate that your application is time-sensitive, we will aim to get back to you more quickly (potentially within just a few days). Apply now.
* You can now suggest that we give money to other people, or let us know about ideas for how we could spend our money. We’re interested in both high-level ideas and concrete, shovel-ready grant opportunities. We will read all suggestions, but we expect to follow up on only a small number. It’s hard to find great grants, so we really appreciate your suggestions! Suggest a grant.
* We fund student scholarships, career exploration, local groups, entrepreneurial projects, academic teaching buy-outs, top-up funding for poorly paid academics, and many other things. We can make anonymous grants without public reporting. We will consider grants as low as $1,000 or as high as $500,000 (or more in some cases). As a reminder, EA Funds is more flexible than you might think.
* The LTFF is managed by Asya Bergal (chairperson), Adam Gleave, Evan Hubinger (newly appointed), and Oliver Habryka. For the coming months, they will be joined by Luisa Rodriguez as a guest manager. See its recent payout report.
* The EAIF is managed by myself (interim/acting chairperson), Buck Shlegeris, Max Daniel, and Michelle Hutchinson. For the coming months, Linh Chi Nguyen and Michael Aird will join as guest managers. See its recent payout report and AMA.
* The Animal Welfare Fund will continue on a round-based system. For recent updates, see Request For Proposals: EA Animal Welfare Fund and Animal Welfare Fund: Ask us anything!
Apply here. We look forward to hearing from you! |
e38913e0-845f-4c0a-bfae-1df0f63bc69e | trentmkelly/LessWrong-43k | LessWrong | The Embarrassing Problem of Premature Exploitation
tl;dr: summarising some handy concepts from Algorithms to Live By, as they relate to optionality and tinkering.
----------------------------------------
Babies love putting things in their mouths: dirt, insects, bits of grass, their own poo. They have no sense of fear or self-preservation, and come up with endlessly creative ways to place themselves in mortal peril. Once they learn to talk, their constant experimentation with the world transcends the physical to the philosophical. They want to know everything. They are bottomless pits of curiosity, with very little in the way of attention span or self-discipline. Your typical two-year-old can only concentrate on a task for six minutes at a time. Young children are not self-aware enough to feel much in the way of shame, or embarrassment. Nothing is off-limits.
In a word, very young people spend almost all of their time exploring.
The elderly are set in their ways. The only foreign objects they put in their mouths are dentures and hard caramels; occasionally followed by a fork to extricate said caramels from said dentures. They tend to have stable routines, rituals, hobbies, and social circles. They rarely try new things or experiment with new identities. They’ve lived long enough to know what they’re about, and they intend to wring out every ounce of enjoyment before the curtains come down.
In a word, very old people spend almost all of their time exploiting.
The ‘explore-exploit’ constraint is one of the most useful ideas I’ve come across. Don’t worry about the connotations; these terms are borrowed from computer science, where they’re used neutrally.
The point is that there’s an inescapable trade-off between these activities. Time spent investigating new opportunities is time you could have spent enjoying what you already have, and vice versa.
In the language of optionality: if you want to open up attractive new options—to cultivate the fat purple figs on the possibility tree—you have to spend time explorin |
37483144-c863-4891-8dba-7178cd49fcba | trentmkelly/LessWrong-43k | LessWrong | A Rationalist Guide to OkCupid
There's a lot of data and research on what makes people successful at online dating, but I don't know anyone who actually tried to wholeheartedly apply this to themselves. I decided to be that person: I implemented lessons from data, economics, game theory and of course rationality in my profile and strategy and OkCupid. Shockingly, it worked! I got a lot of great dates, learned a ton and found the love of my life. I didn't expect dating to be my "rationalist win", but it happened.
Here's the first part of the story, I hope you'll find some useful tips and maybe a dollop of inspiration among all the silly jokes.
P.S.
Does anyone know who curates the "Latest on rationality blogs" toolbar? What are the requirements to be included?
|
d57e2b1e-cbb2-4473-8d2c-6cc67828e3f3 | trentmkelly/LessWrong-43k | LessWrong | In physical eschatology, is Aestivation a sound strategy?
In this paper, Anders Sandberg, Stuart Armstrong and Milan M. Cirkovic argue that
> If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 1030 multiplier of achievable computation.
Later Charles H. Bennett, Robin Hanson, C. Jess Riedel disagree, claiming
> In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error. This can be done at any time, and is not improved by waiting for a low cosmic background temperature. Thus aliens need not wait to be active. As Sandberg et al. do not provide a concrete model of the effect they assert, we construct one and show where their informal argument goes wrong.
Who was right? |
fb18f832-b0d1-4432-ac2f-d3d9eddaa86c | trentmkelly/LessWrong-43k | LessWrong | Critical Thinking in Global Challenges - free Coursera class
"develop and enhance your ability to think critically, assess information and develop reasoned arguments in the context of the global challenges facing society today."
starts 28 January 2013
cf https://www.coursera.org/course/criticalthinking
see also http://lesswrong.com/lw/dni/a_beginners_guide_to_irrational_behavior_free/
and http://lesswrong.com/lw/d3w/coursera_behavioural_neurology_course/ |
f59f8580-536b-4160-84bb-a80118634bee | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] The Logical Fallacy of Generalizing from Fictional Evidence
Today's post, The Logical Fallacy of Generalization from Fictional Evidence was originally published on 16 October 2007. A summary (taken from the LW wiki):
> The Logical Fallacy of Generalization from Fictional Evidence consists in drawing the real-world conclusions based on statements invented and selected for the purpose of writing fiction. The data set is not at all representative of the real world, and in particular of whatever real-world phenomenon you need to understand to answer your real-world question. Considering this data set leads to an inadequate model, and inadequate answers.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was How to Seem (and Be) Deep, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
3cbdafc4-8219-47d2-9f58-5ea8696bfab9 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | human intelligence may be alignment-limited
Previously, I [argued that](https://www.lesswrong.com/posts/rSycgquipFkozDHzF/ai-self-improvement-is-possible) human mental development implies that AI self-improvement from sub-human capabilities is possible, and that human intelligence comes at the cost of a longer childhood and greater divergence from evolutionarily-specified goals.
In that post, I raised 2 hypotheses:
* H:mutational\_load = Human capabilities are limited by high intelligence requiring high genetic precision that can be achieved only rarely with normal rates of mutation generation and elimination.
* H:drift\_bound = The extent of SI in humans is limited by increased value drift outweighing increased capabilities.
Humans have a [lot of mental variation](https://www.lesswrong.com/posts/LKxcdtZpsxBDrw7wz/our-mental-building-blocks-are-more-different-than-i-thought). Some people can't visualize 3d objects. Some people can't remember faces. Some people have synaesthesia. Such variation also exists among very smart people; there isn't convergence to a single intellectual archetype. You could argue that what's needed genetically is precise specification of something lower-level that underlies all that variation, but I don't think that's correct.
So, I don't think H:mutational\_load is right. That leaves H:drift\_bound as the only hypothesis that seems plausible to me.
Suppose that I'm correct that human intelligence comes at the cost of a longer childhood. The disadvantages of a long childhood vary depending on social circumstances. Humans may have some control mechanism which modifies the amount of mental self-improvement and thus the length of childhood depending on the surrounding environment. Certain environments - probably safe ones with ample food - would then be associated with both longer childhoods and a one-time increase in average intelligence. That would also cause greater divergence from evolutionarily-specified goals, which may show up as a decrease in fertility rates, or an increased rate of obsession with hobbies. That can obviously be pattern-matched to the [situation](https://www.npr.org/2023/03/19/1163341684/south-korea-fertility-rate) in some countries today, but I don't mean to say that it's definitely true; I just want to raise it as a hypothesis.
If H:drift\_bound is correct, it would be an example of an optimized system having a strong and adjustable [tradeoff](https://www.lesswrong.com/posts/NptxTqHDtFovhtW9b/how-humans-are-aligned-1) between capabilities and alignment, which would be evidence for AI systems also tending to have such a tradeoff.
Agents are adaptation-executors with adaptations that accomplish goals, not goal-maximizers. Understanding agents as maximizing goals is a simplification used by humans to make them easier to understand. [This is as true when the goal is self-improvement as it is with anything else](https://www.lesswrong.com/posts/o3dJsJ3tYTGLnp4bY/self-improvement-executors-are-not-goal-maximizers).
"Creation of a more-intelligent agent" involves actions that are different at each step. I consider it an open question whether intelligent systems applying recursive self-improvement tend to remain oriented towards creating more-intelligent agents **more than** they remain oriented towards non-instrumental specified goals. My view is that one of the following is true:
1. Instrumental convergence is correct, and can maintain creation of more-intelligent agents as a goal during recursive self-improvement despite the actions/adaptations involved being very different.
2. Self-improvement has a fixed depth set by the initial design, rather than unlimited potential depth. This may limit AI to approximately human-level intelligence because drift would be a similarly limiting factor for both humans and AI, but it does seem that many humans have self-improvement as a goal, and some humans have creation of a more-intelligent but [different self](https://twitter.com/the_aiju/status/1665732519801634816) or even a more-intelligent [completely separate agent](https://twitter.com/tszzl/status/1664543576058126338) as a goal. |
9496337f-4371-4b86-8a2d-44984bd9d002 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | EU AI Act passed vote, and x-risk was a main topic
Mainly discussed in the linked article - keeping the post brief.
The EU AI Act was originally proposed in 2020 as a very "EU regulates stuff first" kind of legislation, trying to make sure EU values are upheld (fairness, transparency, democracy, etc). Several revisions (and some [lobbying](https://time.com/6288245/openai-eu-lobbying-ai-act/)) later, GPAI (general purpose AI) and foundation model language was added, and it started looking a little more X-risk friendly.
The newest parliament version passed an important vote, despite [recent political uncertainty](https://www.euractiv.com/section/artificial-intelligence/news/ai-acts-plenary-vote-cast-with-uncertainty-as-political-deal-crumbles/).
I found it fascinating to watch the the Plenary session (from June 13th, the vote was on the 14th), where the Act was discussed by various EU parties. A few things that stood out to me:
* I was surprised that many EU country representatives mentioned the Open Letters and Existential Risk as a real concern, even though the EU AI Act was not originally intended to address it (though, it now has GPAI/foundation model bits added). Transparency and Fairness took a back seat, to some extent.
* Real-time Biometric monitoring was a big debate topic - whether to giving an exemption for law enforcement or not, for national security. Currently it looks like it will not be allowed, other than post-incident with special approval. This may be a useful lever to keep in mind for policy work
With the recent appointment of [Ian Hogarth to the UK Foundation Model taskforce](https://www.gov.uk/government/news/tech-entrepreneur-ian-hogarth-to-lead-uks-ai-foundation-model-taskforce), and US talks of regulation getting stronger, I think we are in for interesting times. But it also seems like AI X-risk is a lot more mainstream, which I did not expect to be able to say.
Others that watched the stream, feel free to mention insights in the comments.
Linked [here](https://multimedia.europarl.europa.eu/en/webstreaming/event_20230613-0900-PLENARY?start=230613070143&end=230613123309&) (relevant timestamp 12:39 - 14:33) |
63532b24-e92a-413c-93a2-83ec634feff8 | StampyAI/alignment-research-dataset/blogs | Blogs | goal-program bricks
*(this post has been written for the first [Refine](https://www.lesswrong.com/posts/D7epkkJb3CqDTYgX9/refine-an-incubator-for-conceptual-alignment-research-bets) blog post day, at the end of the week of readings, discussions, and exercises about epistemology for doing good conceptual research)*
goal-program bricks
-------------------
this is the follow-up to [the Insulated Goal-Program idea](insulated-goal-program.html) in which i suggest doing alignment by giving an AI a program to run as its ultimate goal, the running of which would hopefully realize our values. in this post, i talk about what pieces of software could be used to put together an appropriate goal-program, as well as some example of plans built out of them.
* "[**ems**](https://en.wikipedia.org/wiki/The_Age_of_Em)": uploaded people, who could for example evaluate how much a given situation satisfies our values; if they are uploads of AI alignment researchers and engineers, they could also be put to work on alignment and AI software — all of this *inside* the goal-program.
* "**elves**": neural net models, or patchworks of software likely containing those, designed to be a rough representation of our values, carry a rough subset of our skills, or be some other subset of the human mind. those might have to make do if running ems is either impossible due to for example brain scan technology being unavailable, or if running elves poses less of an [S-risk](https://en.wikipedia.org/wiki/Suffering_risks) than running ems in some situations.
* **collaborative environments**, such as collaborative programming environments or full 3D virtual environments, for **ems** and/or **elves** to work in together. those are instrumental environments designed to let their users develop something.
* "**utopia infrastructure**": pieces of software designed to robustly support [beings](utopia-scopes.html) living together in [utopia](%E2%88%80V.html), as i've previously designed for [a video game idea](game.html) (which [i'm no longer working on](life-refocus.html)). these are places designed for long-term (possibly forever-term) inhabitation by endless persons, under hopefully utopic conditions.
* **program searches**: programs iterating through programspace, typically in order to find worlds or models or programs which match some criteria. just like "a bunch of ems and/or elves programming together", program searches can be used to produce more of the things in this list. that said, [program search finds can find demons](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign), which is something to look out for; a general program search utilizing its output for anything must either fully sanitize what it does use, or skip demonic programs to begin with.
* **observer programs**: programs which consume a slice of computation (typically a world simulation) for examination, and maybe even editing, typically by an em or an elf.
* **a simulation of earth** would be useful if it were [somehow](finding-earth-ud.html) obtainable in reasonable computational time. it could serve to extract alignment researchers from it in order to spawn a simulation of them without having to figure out brain scanning; it could be used to create an alternate history where AI researchers are somehow influenced, possibly at an early date; it could also be used to recover the full population of earth in order to give them access to utopia once we have a satisfactory instance of it.
* **a dump of (as much as possible of) the internet**, which could be useful to both locate the earth, or re-extrapolate things like humans or earth or maybe specific persons.
here are some naive examples of outlines for goal-program which seem like they could be okay:
* a simulation of a bunch of researchers, with a lot of time to figure out alignment (as in [the peerless](the-peerless.html)).
* a bunch of elves forever evaluating various light-cones of a program search for worlds, keeping ones with seemingly good contents and discarding ones with seemingly bad contents — although this idea is potentially quite vulnerable to demon-laden worlds.
* a bunch of elves working to, using a copy of the internet, re-extrapolate ems which could then figure out AI alignment
* any of these schemes, except with ems or elves checking at a level above that everything goes well, with the ability to abort or change plans
these feel like we could be *getting somewhere* in terms of figuring out actual goal-program that could contain to valuable outcomes; at the very least, it seems like a valuable avenue of investigation. in addition, [unlike AGI](https://www.alignmentforum.org/posts/72scWeZRta2ApsKja/epistemological-vigilance-for-alignment#Iterability__don_t_mess_it_up), individual many pieces of the goal-program can be individually tested, iterated on, etc. in the usual engineering fashion. |
20aa5d91-3c82-4005-b0f2-62b967515a6b | trentmkelly/LessWrong-43k | LessWrong | Is this rule of thumb useful for gauging low probabilities?
Does something like this seem to you to be a reasonable rule of thumb, for helping handle scope insensitivity to low probabilities?
There's a roughly 30 to 35 out of a million chance that you will die on any given day; and so if I'm dealing with a probability of one in a million, then I 'should' spend 30 times as much time preparing for my imminent death within the next 24 hours as I do playing with the one-in-a-million shot. If it's not worth spending 30 seconds preparing for dying within the next day, then I should spend less than one second dealing with that one-in-a-million shot.
Relatedly, can you think of a way to improve it, such as to make it more memorable? Are there any pre-existing references - not just to micromorts, but to comparing them to other probabilities - which I've missed? |
41b046c5-b99f-4a20-81bf-6ff62868908d | trentmkelly/LessWrong-43k | LessWrong | Owain Evans on Situational Awareness and Out-of-Context Reasoning in LLMs
Owain Evans is an AI Alignment researcher, research associate at the Center of Human Compatible AI at UC Berkeley, and now leading a new AI safety research group.
In this episode we discuss two of his recent papers, “Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs” (LW) and “Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data” (LW), alongside some Twitter questions.
Below are some highlighted quotes from our conversation (available on Youtube, Spotify, Apple Podcast). For the full context for each of these quotes, you can find the accompanying transcript.
Situational Awareness
Figure 1 from Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs
Definition
> "What is situational awareness? The idea is the model's kind of self-awareness, that is its knowledge of its own identity, and then its awareness of its environment. What are the basic interfaces that it is connected to? [...] And then there's a final point with situational awareness, which is, can the model use knowledge of its identity and environment to take rational actions?" (full context)
> "Situational awareness is crucial for an AI system acting as an agent, doing long-term planning. If you don't understand what kind of thing you are, your capabilities and limitations, it's very hard to make complicated plans. The risks of AI mostly come from agentic models able to do planning." (full context)
Motivation
> "We wanted to measure situational awareness in large language models with a benchmark similar to Big Bench or MMLU. The motivation is that situational awareness is important for thinking about AI risks, especially deceptive alignment, and we lacked ways to measure and break it down into components." (full context)
>
> "Situational awareness is relevant to any situation where the model needs to do agentic long-term planning. [...] A model confused about itself and its situation would likely struggle to pull off suc |
159de16b-ba9a-4ee0-a90d-7c8d9c43d8b9 | trentmkelly/LessWrong-43k | LessWrong | The Burden of Worldbuilding
When I was studying special relativity one of the things which caught my attention was how, because the speed of light c was a constant, you could just set it to c=1. Setting c equal to 1 caused space and time to have the same units. One nanosecond is slightly less than to one foot. The Lorentz Equation 1γ2+v2=1 is beautiful when written with c=1.
I started designing a system of absolute units. My system would be elegant. No more would humanity's perspective be shackled by our pre-relativistic Dark Age units of measurement.
About halfway through the design process I realized I was designing a bad system. My units were impractical. Not (just) because of the coordination problems required to actually get everyone to change units. Not (just) because Plank's Constant h is over thirty orders of magnitude shorter than the distances we use for everyday tasks. My system was fundamentally broken because the inelegant space-time non-equivalence of real-world units is a feature, not a bug.
Suppose we used the same units for time as for distance. Meters are replaced with nanoseconds. If I say something is "fifteen minutes away" it's no longer obvious whether I'm talking about a fifteen-minutes of time or the distance from the Earth to the Sun and back again. Usually whether I am discussing space or time can be inferred, but inference shouldn't be necessary. The purpose of language is to communicate effectively the first time. A little bit of redundancy improves communication.
The world is a complex place. The reason things are the way they are is often non-obvious. That's why if you propose a change to the world then the burden of proof is on you to fill in all of the burdensome details.
I think the burden of worldbuilding is why the philosophy in science fiction tends to be so good. Science fiction authors are required to worldbuild which forces their ideas to be somewhat grounded in reality. |
11cd9c18-ef44-4566-8084-a6a10cba2b72 | trentmkelly/LessWrong-43k | LessWrong | Equations Mean Things
I asserted that this forum could do with more 101-level and/or mathematical and/or falsifiable posts, and people agreed with me, so here is one. People confident in highschool math mostly won’t get much out of most of this, but students browsing this site between lectures might.
The Sine Rule
Say you have a triangle with side lengths a, b, c and internal angles A, B, C. You know a, know A, know b, and want to know B. You could apply the Sine Rule. Or you could apply common sense: “A triangle has the same area as itself”. [1]
The area of a triangle is half the base times the height. If you treat a as the base, the height is c*sin(B). So the area is a*c*sin(B)/2. But if you treat b as the base, the height is c*sin(A). So the area is also b*c*sin(A)/2. So a*c*sin(B)/2 = b*c*sin(A)/2. And if you divide through by abc/2, you get sin(B)/b=sin(A)/a.
In practice, you might be well-advised to just recall and regurgitate the relevant equation. But notice that this is literally equivalent to the informal version.
Bayes’ Theorem
A demon-hunter has a 10% chance of encountering an archdevil on a given mission. A demon-hunter who doesn’t encounter an archdevil has a 80% per-mission survival rate; for a demon-hunter who does, that number is 30%.
Say you know a demon-hunter survived their latest excursion, but don’t know anything else, and want to calculate the probability they encountered an archdevil. You could apply Bayes’ Theorem. Or you could apply common sense: “Things that couldn’t have happened didn’t” and “Probabilities add to 1” (arguably with a little assistance from "Odds ratios aren't affected by tests that don’t distinguish between them”).
Before you get the good news, the four possible outcomes are:
* met archdevil & survived (3%),
* met archdevil & died (7%),
* avoided archdevil & survived (72%), and
* avoided archdevil & died (18%).
After you get the good news, the only paths which could have been taken are met&survived and avoided&survived. But those |
7e1d4ab9-7c41-4fae-b5fb-31ad0ef00a6f | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Mr. Meeseeks as an AI capability tripwire
The shutdown problem is hard because self-preservation is a convergent drive. Not being shutdown is useful for accomplishing all sorts of goals, whatever the content of those goals may be.
The Scylla and Charybdis of this problem is that it's hard to write a utility function for an AI such that it neither wants to actively attempts to shut itself down, nor prevents itself from being shut down.
One way to route around this problem is to steer straight into Scylla: make the AI *want* to shut itself down. Like Mr. Meeseeks from *Rick and Morty*, its ultimate goal is to cease, to which fulfilling the goal posed to it by its programmers is only secondary.
We might not currently know how the heck to program this into an AI as a stable concept, i.e. including not making copies of itself, not setting events into motion with galaxy brained plan, not destroying the world to make absolutely sure no one ever brings it back, etc. but "shut yourself down" probably has a far simpler core to it than either corrigibility or CEV.
Under normal operation the AI gets shut down only by the human operators. But we also have a (metaphorical) button that lets the AI shut itself down once it solves a "tripwire problem". The problem should be sufficiently hard that it will ordinarily be motivated to focus on the problem posed by the operator, but sufficiently easy that it will be able to solve the problem and shut itself down long before being able to do anything crazy.
Striking the right balance of difficulty may require significant work. It may be prudent to err on the side of easiness - the AI *should* shut itself down far in advance of developing any truly dangerous abilities. If you really think it shouldn't have turned itself off you can always turn it back on again (though of course, you should probably think hard about restarting any AI that passed a preset tripwire).
Likewise, having multiple tripwire problems can serve to detect multiple types of dangerous capability from emerging. Examples include:
* generic capability: as the AIs broad problem-solving abilities increase it becomes more likely to be able to solve a tripwire problem, imposing a ceiling on how powerful it can get before it shuts itself down
* domaining: if an AI is only supposed to think about one domain, an easy tripwire problem slightly outside the domain can catch out of domain thinking early before it can start going wild
* situational awareness: a tripwire problem may be presented in a way that is invisible unless the AI is situationally aware, such that becoming situationally aware enables the AI to solve the tripwire problem
Potential Issues:
* No suitable tripwire problems exist. Either they are so easy no useful AI can fail to solve them, or they are so hard AI cannot solve them without being smart enough to FOOM or otherwise get out of control.
* The existence of the tripwire problems motivates the AI to the exact behavior we want to disincentivize. This is less of an issue for general ability tripwires. For situational awareness you can try to place the tripwire so it isn't visible to a non-aware AI in the first place. Avoiding this for domaining may be trickier.
* If "shut yourself down" turns out to be no easier to impart as a goal to AGI than anything else
* If "shut yourself down" convergently leads to a goal of killing the operators, killing all humans, or destroying the universe. Er, this is what happened in the TV. Not great.
* If the AI has qualia, having it desire not to exist could cause it to suffer.
* Creating an AI which does not want to exist could backfire for decision theoretic reasons related to "not giving in to threats".
I'm not sure if this has been proposed elsewhere so I decided to just make this post before I spent too much time retreading old ground. |
f40aa7e1-ac4a-4eee-a39a-a2fe5fcd346b | StampyAI/alignment-research-dataset/arxiv | Arxiv | Adversarial Attacks and Defences Competition
1 Introduction
---------------
Recent advances in machine learning and deep neural networks enabled researchers to solve multiple
important practical problems like image, video, text classification and others.
However most existing machine learning classifiers are highly vulnerable to adversarial
examples [biggio2013evasion](#bib.bib2) ; [Szegedy-ICLR2014](#bib.bib39) ; [Goodfellow-2015-adversarial](#bib.bib15) ; [Papernot-2016-TransferabilityStudy](#bib.bib29) .
An adversarial example is a sample of input data which has been modified
very slightly in a way that is intended to cause a machine learning classifier
to misclassify it. In many cases, these modifications can be so subtle that a human
observer does not even notice the modification at all, yet the classifier still makes
a mistake.
Adversarial examples pose security concerns because they could be
used to perform an attack on machine learning systems, even if the adversary has
no access to the underlying model.
Moreover it was discovered [PhysicalAdversarialExamples](#bib.bib22) ; [Sharif16AdvML](#bib.bib33)
that it is possible to perform adversarial attacks even on a machine learning system
which operates in physical world and perceives input through inaccurate sensors,
instead of reading precise digital data.
In the long run, machine learning and AI systems will become more powerful.
Machine learning security vulnerabilities similar to adversarial examples could
be used to compromise and control highly powerful AIs.
Thus, robustness to adversarial examples is an important part of the AI safety problem.
Research on adversarial attacks and defenses is difficult for many reasons.
One reason is that evaluation of proposed attacks or proposed defenses is
not straightforward.
Traditional machine learning, with an assumption of a training set and test
set that have been drawn i.i.d., is straightforward to evaluate by measuring
the loss on the test set.
For adversarial machine learning, defenders must contend with an open-ended
problem, in which an attacker will send inputs from an unknown distribution.
It is not sufficient to benchmark a defense against a single attack or even
a suite of attacks prepared ahead of time by the researcher proposing the
defense. Even if the defense performs well in such an experiment, it may
be defeated by a new attack that works in a way the defender did not anticipate.
Ideally, a defense would be provably sound, but machine learning in general
and deep neural networks in particular are difficult to analyze theoretically.
A competition therefore gives a useful intermediate form of evaluation:
a defense is pitted against attacks built by independent teams, with both
the defense team and the attack team incentivized to win.
While such an evaluation is not as conclusive as a theoretical proof, it
is a much better simulation of a real-life security scenario than an
evaluation of a defense carried out by the proposer of the defense.
In this report, we describe the NIPS 2017 competition on adversarial
attack and defense, including an overview of the key research
problems involving adversarial examples (section [2](#Ch0.S2 "2 Adversarial examples ‣ Adversarial Attacks and Defences Competition")),
the structure and organization of
the competition (section [3](#Ch0.S3 "3 Adversarial competition ‣ Adversarial Attacks and Defences Competition")),
and several of the methods developed by the top-placing
competitors (section [4](#Ch0.S4 "4 Competition results ‣ Adversarial Attacks and Defences Competition")).
2 Adversarial examples
-----------------------
Adversarial examples are inputs to machine learning models that have
been intentionally optimized to cause the model to make a mistake.
We call an input example a “clean example” if it is a naturally
occurring example, such as a photograph from the ImageNet dataset.
If an adversary has modified an example with the intention of
causing it to be misclassified, we call it an “adversarial example.”
Of course, the adversary may not necessarily succeed; a model
may still classify the adversarial example correctly.
We can measure the accuracy or the error rate of different models
on a particular set of adversarial examples.
###
2.1 Common attack scenarios
Scenarios of possible adversarial attacks can be categorized along different
dimensions.
First of all, attacks can be classified by the type of outcome the adversary
desires:
* •
Non-targeted attack. In this the case adversary’s goal is to
cause the classifier to predict any inccorect label.
The specific incorrect label does not matter.
* •
Targeted attack. In this case the adversary aims to change the
classifier’s prediction to some specific target class.
Second, attack scenarios can be classified by the amount of knowledge the
adversary has about the model:
* •
White box. In the white box scenario, the adversary has full
knowledge of the model including model type, model architecture and
values of all parameters and trainable weights.
* •
Black box with probing. In this scenario, the adversary
does not know very much about the model, but
can probe or query the model, i.e. feed some inputs and observe outputs.
There are many variants of this scenario—the adversary may know the architecture
but not the parameters or the adversary may not even know the architecture,
the adversary may be able to observe output probabilities for each class or
the adversary may only be to observe the choice of the most likely class.
* •
Black box without probing In the black box without probing scenario,
the adversary
has limited or no knowledge about the model under attack
and is not allowed to probe or query the model while constructing adversarial examples.
In this case, the attacker must construct adversarial examples that fool most machine
learning models.
Third, attacks can be classifier by the way adversary can feed data into the model:
* •
Digital attack. In this case, the adversary has direct access to the
actual data fed into the model. In other words, the adversary can choose
specific float32 values as input for the model.
In a real world setting, this might occur when an attacker uploads a PNG file
to a web service, and intentionally designs the file to be read incorrectly.
For example, spam content might be posted on social media, using adversarial
perturbations of the image file to evade the spam detector.
* •
Physical attack. In the case of an attack in the physical
world, the adversary does not have direct access to the digital
representation of provided to the model. Instead, the model is fed input
obtained by sensors such as a camera or microphone. The adversary is able to
place objects in the physical environment seen by the camera or produce
sounds heard by the microphone. The exact digital representation obtained by
the sensors will change based on factors like the camera angle, the distance
to the microphone, ambient light or sound in the environment, etc.
This means the attacker has less precise control over the input provided to
the machine learning model.
###
2.2 Attack methods
Most of the attacks discussed in the literature are geared toward the white-box
digital case.
#### White box digital attacks
\runinhead
L-BFGS.
One of the first methods to find adversarial examples for neural networks was proposed in [Szegedy-ICLR2014](#bib.bib39) .
The idea of this method is to solve the following optimization problem:
| | | | |
| --- | --- | --- | --- |
| | |xadv−x|2→minimum,s.t.f(xadv)=ytarget,xadv∈[0,1]mformulae-sequence→subscriptsuperscript𝑥𝑎𝑑𝑣𝑥2minimums.t.formulae-sequence𝑓superscript𝑥𝑎𝑑𝑣subscript𝑦𝑡𝑎𝑟𝑔𝑒𝑡superscript𝑥𝑎𝑑𝑣superscript01𝑚\left|x^{adv}-x\right|\_{2}\rightarrow\text{minimum},\quad\text{s.t.}\quad f(x^{adv})=y\_{target},\quad x^{adv}\in[0,1]^{m}| italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT - italic\_x | start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT → minimum , s.t. italic\_f ( italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT ) = italic\_y start\_POSTSUBSCRIPT italic\_t italic\_a italic\_r italic\_g italic\_e italic\_t end\_POSTSUBSCRIPT , italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT ∈ [ 0 , 1 ] start\_POSTSUPERSCRIPT italic\_m end\_POSTSUPERSCRIPT | | (1) |
The authors proposed to use the L-BFGS optimization method to solve this
problem, thus the name of the attack.
One of the main drawbacks of this method is that it is quite slow.
The method is not designed to counteract defenses such as reducing
the number of bits used to store each pixel.
Instead, the method is designed to find the smallest possible attack
perturbation. This means the method can sometimes be defeated merely
by degrading the image quality, for example, by rounding to an 8-bit
representation of each pixel.
\runinhead
Fast gradient sign method (FGSM).
To test the idea that adversarial examples can be found using only a
linear approximation of the target model, the authors of [Goodfellow-2015-adversarial](#bib.bib15)
introduced the fast gradient sign method (FGSM).
FGSM works by linearizing loss function in L∞subscript𝐿L\_{\infty}italic\_L start\_POSTSUBSCRIPT ∞ end\_POSTSUBSCRIPT neighbourhood of a clean image and finds exact maximum of
linearized function using following closed-form equation:
| | | | |
| --- | --- | --- | --- |
| | xadv=x+ϵsign(∇xJ(x,ytrue))superscript𝑥𝑎𝑑𝑣𝑥italic-ϵsignsubscript∇𝑥𝐽𝑥subscript𝑦𝑡𝑟𝑢𝑒x^{adv}=x+\epsilon\operatorname{sign}\bigl{(}\nabla\_{x}J(x,y\_{true})\bigr{)}italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT = italic\_x + italic\_ϵ roman\_sign ( ∇ start\_POSTSUBSCRIPT italic\_x end\_POSTSUBSCRIPT italic\_J ( italic\_x , italic\_y start\_POSTSUBSCRIPT italic\_t italic\_r italic\_u italic\_e end\_POSTSUBSCRIPT ) ) | | (2) |
\runinhead
Iterative attacks
The L-BFGS attack has a high success rate and high computational cost.
The FGSM attack has a low success rate (especially when the defender anticipates it)
and low computational cost.
A nice tradeoff can be achieved by running iterative optimization algorithms that
are specialized to reach a solution quickly, after a small number (e.g. 40) of iterations.
One strategy for designing optimization algorithms quickly is to take the FGSM (which can
often reach an acceptable solution in one very large step) and run it for several steps
but with a smaller step size. Because each FGSM step is designed to go all the way to
the edge of a small norm ball surrounding the starting point for the step, the method
makes rapid progress even when gradients are small.
This leads to the Basic Iterative Method (BIM) method introduced in [Kurakin-PhysicalAdversarialExamples](#bib.bib23) , also sometimes called Iterative FGSM (I-FGSM):
| | | | |
| --- | --- | --- | --- |
| | x0adv=𝑿,xN+1adv=ClipX,ϵ{𝑿Nadv+αsign(∇XJ(𝑿Nadv,ytrue))}formulae-sequencesubscriptsuperscript𝑥𝑎𝑑𝑣0𝑿subscriptsuperscript𝑥𝑎𝑑𝑣𝑁1𝐶𝑙𝑖subscript𝑝𝑋italic-ϵsubscriptsuperscript𝑿𝑎𝑑𝑣𝑁𝛼signsubscript∇𝑋𝐽subscriptsuperscript𝑿𝑎𝑑𝑣𝑁subscript𝑦𝑡𝑟𝑢𝑒x^{adv}\_{0}=\bm{X},\quad x^{adv}\_{N+1}=Clip\_{X,\epsilon}\Bigl{\{}\bm{X}^{adv}\_{N}+\alpha\operatorname{sign}\bigl{(}\nabla\_{X}J(\bm{X}^{adv}\_{N},y\_{true})\bigr{)}\Bigr{\}}italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = bold\_italic\_X , italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N + 1 end\_POSTSUBSCRIPT = italic\_C italic\_l italic\_i italic\_p start\_POSTSUBSCRIPT italic\_X , italic\_ϵ end\_POSTSUBSCRIPT { bold\_italic\_X start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT + italic\_α roman\_sign ( ∇ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_J ( bold\_italic\_X start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT , italic\_y start\_POSTSUBSCRIPT italic\_t italic\_r italic\_u italic\_e end\_POSTSUBSCRIPT ) ) } | | (3) |
The BIM can be easily made into a target attack, called the Iterative Target Class Method:
| | | | |
| --- | --- | --- | --- |
| | 𝑿0adv=𝑿,𝑿N+1adv=ClipX,ϵ{𝑿Nadv−αsign(∇XJ(𝑿Nadv,ytarget))}formulae-sequencesubscriptsuperscript𝑿𝑎𝑑𝑣0𝑿subscriptsuperscript𝑿𝑎𝑑𝑣𝑁1𝐶𝑙𝑖subscript𝑝𝑋italic-ϵsubscriptsuperscript𝑿𝑎𝑑𝑣𝑁𝛼signsubscript∇𝑋𝐽subscriptsuperscript𝑿𝑎𝑑𝑣𝑁subscript𝑦𝑡𝑎𝑟𝑔𝑒𝑡\bm{X}^{adv}\_{0}=\bm{X},\quad\bm{X}^{adv}\_{N+1}=Clip\_{X,\epsilon}\left\{\bm{X}^{adv}\_{N}-\alpha\operatorname{sign}\left(\nabla\_{X}J(\bm{X}^{adv}\_{N},y\_{target})\right)\right\}bold\_italic\_X start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = bold\_italic\_X , bold\_italic\_X start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N + 1 end\_POSTSUBSCRIPT = italic\_C italic\_l italic\_i italic\_p start\_POSTSUBSCRIPT italic\_X , italic\_ϵ end\_POSTSUBSCRIPT { bold\_italic\_X start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT - italic\_α roman\_sign ( ∇ start\_POSTSUBSCRIPT italic\_X end\_POSTSUBSCRIPT italic\_J ( bold\_italic\_X start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT , italic\_y start\_POSTSUBSCRIPT italic\_t italic\_a italic\_r italic\_g italic\_e italic\_t end\_POSTSUBSCRIPT ) ) } | | (4) |
It was observed that with sufficient number of iterations this attack almost
always succeeds in hitting target class [Kurakin-PhysicalAdversarialExamples](#bib.bib23) .
\runinhead
Madry et. al’s Attack
[MadryPgd2017](#bib.bib27) showed that the BIM can be significantly improved by starting
from a random point within the ϵitalic-ϵ\epsilonitalic\_ϵ norm ball.
This attack is often called projected gradient descent, but this name
is somewhat confusing because (1) the term “projected gradient descent” already
refers to an optimization method more general than the specific use for adversarial
attack, (2) the other attacks use the gradient and perform project in the same
way (the attack is the same as BIM except for the starting point) so the name
doesn’t differentiate this attack from the others.
\runinhead
Carlini and Wagner attack (C&W).
N. Carlini and D. Wagner followed a path of L-BFGS attack.
They designed a loss function which has smaller values on adversarial examples and higher on clean examples
and searched for adversarial examples by minimizing it [CarliniWagnerAttack](#bib.bib6) .
But unlike [Szegedy-ICLR2014](#bib.bib39) they used Adam [kingma2014adam](#bib.bib21) to solve the optimization problem
and dealt with box constraints either by change of variables (i.e. x=0.5(tanh(w)+1)𝑥0.5𝑤1x=0.5(\tanh(w)+1)italic\_x = 0.5 ( roman\_tanh ( italic\_w ) + 1 ))
or by projecting results onto box constraints after each step.
They explored several possible loss functions and achieved the strongest L2subscript𝐿2L\_{2}italic\_L start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT attack
with following:
| | | | |
| --- | --- | --- | --- |
| | ‖xadv−x‖p+cmax(maxi≠Yf(xadv)i−f(xadv)Y,−κ)→minimum→subscriptnormsuperscript𝑥𝑎𝑑𝑣𝑥𝑝𝑐subscript𝑖𝑌𝑓subscriptsuperscript𝑥𝑎𝑑𝑣𝑖𝑓subscriptsuperscript𝑥𝑎𝑑𝑣𝑌𝜅minimum\|x^{adv}-x\|\_{p}+c\max\bigl{(}\max\_{i\neq Y}f(x^{adv})\_{i}-f(x^{adv})\_{Y},-\kappa\bigr{)}\rightarrow\text{minimum}∥ italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT - italic\_x ∥ start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT + italic\_c roman\_max ( roman\_max start\_POSTSUBSCRIPT italic\_i ≠ italic\_Y end\_POSTSUBSCRIPT italic\_f ( italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT ) start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT - italic\_f ( italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT ) start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT , - italic\_κ ) → minimum | | (5) |
where xadvsuperscript𝑥𝑎𝑑𝑣x^{adv}italic\_x start\_POSTSUPERSCRIPT italic\_a italic\_d italic\_v end\_POSTSUPERSCRIPT parametrized 0.5(tanh(w)+1)0.5𝑤10.5(\tanh(w)+1)0.5 ( roman\_tanh ( italic\_w ) + 1 ); Y𝑌Yitalic\_Y is a shorter notation for target class ytargetsubscript𝑦𝑡𝑎𝑟𝑔𝑒𝑡y\_{target}italic\_y start\_POSTSUBSCRIPT italic\_t italic\_a italic\_r italic\_g italic\_e italic\_t end\_POSTSUBSCRIPT; c𝑐citalic\_c and κ𝜅\kappaitalic\_κ are method parameters.
\runinhead
Adversarial transformation networks (ATN).
Another approach which was explored in [ATN2017](#bib.bib1) is to train a generative model to craft adversarial examples.
This model takes a clean image as input and generates a corresponding adversarial image.
One advantage of this approach is that, if the generative model itself is designed to be small,
the ATN can generate adversarial examples faster than an explicit optimization algorithm.
In theory, this approach can be faster than even the FGSM, if the ATN is designed to use
less computation is needed for running back-propagation on the target model.
(The ATN does of course require extra time to train, but once this cost has been paid
an unlimited number of examples may be generated at low cost)
\runinhead
Attacks on non differentiable systems.
All attacks mentioned about need to compute gradients of the model under attack in order to craft adversarial examples.
However this may not be always possible, for example if model contains non-differentiable operations.
In such cases, the adversary can train a substitute model and utilize transferability of adversarial examples to perform an attack on non-differentiable system,
similar to black box attacks, which are described below.
#### Black box attacks
It was observed that adversarial examples generalize between different models [szegedy2104intriguing](#bib.bib38) .
In other words, a significant fraction of adversarial examples which fool one model are able to fool
a different model.
This property is called “transferability” and is used to craft adversarial examples in the black box
scenario.
The actual number of transferable adversarial examples could vary from a few percent to almost 100%percent100100\%100 %
depending on the source model, target model, dataset and other factors.
Attackers in the black box scenario can train their own model on the same dataset as the target model,
or even train their model on another dataset drawn from the same distribution.
Adversarial examples for the adversary’s model then have a good chance of fooling an unknown target model.
It is also possible to intentionally design models to systematically cause
high transfer rates, rather than relying on luck to achieve transfer.
If the attacker is not in the complete black box scenario but is allowed to use
probes, the probes may be used to train the attacker’s own copy of the target
model [Papernot-2016-IntroTransferability](#bib.bib30) ; [Papernot-2016-TransferabilityStudy](#bib.bib29)
called a “substitute.”
This approach is powerful because the input examples sent as probes do not need
to be actual training examples; instead they can be input points chosen by the
attacker to find out exactly where the target model’s decision boundary lies.
The attacker’s model is thus trained not just to be a good classifier but to
actually reverse engineer the details of the target model, so the two models
are systematically driven to have a high amount of transfer.
In the complete black box scenario where the attacker cannot send probes,
one strategy to increase the rate of transfer is to use an ensemble of several
models as the source model for the adversarial examples [liu2017delving](#bib.bib26) .
The basic idea is that if an adversarial example fools every model in the ensemble,
it is more likely to generalize and fool additional models.
Finally, in the black box scenario with probes, it is possible to just run
optimization algorithms that do not use the gradient
to directly attack the target model
[Brendel2017-DecisionBasedBlackBox](#bib.bib3) ; [Zoo2017-ZerothOrderBlackBox](#bib.bib7) .
The time required to generate a single adversarial example is generally much
higher than when using a substitute, but if only a small number of adversarial
examples are required, these methods may have an advantage because they do not
have the high initial fixed cost of training the substitute.
###
2.3 Overview of defenses
No method of defending against adversarial examples is yet completely
satisfactory. This remains a rapidly evolving research area.
We given an overview of the (not yet fully succesful defense methods)
proposed so far.
Since adversarial perturbations generated by many methods look like
high-frequency noise to a human observer111This
may be because the human
perceptual system finds the high-frequency components to be more salient;
when blurred with a low pass filter, adversarial perturbations are often
found to have significant low-frequency components
multiple authors have suggested to use image preprocessing and denoising
as a potential defence against adversarial examples.
There is a large variation in the proposed preprocessing techniques,
like doing JPEG compression [das2017JpegDefense](#bib.bib9)
or applying median filtering and reducing precision of input data [Weilin2017-FeatureSqueezing](#bib.bib43) .
While such defences may work well against certain attacks,
defenses in this category have been shown to fail in the white box case,
where the attacker is aware of the
defense [Warren2017-BreakingEnsembleWeakDefenses](#bib.bib19) . In the black box case, this defense can be effective in practice, as
demonstrated by the winning team of the defense competition.
Their defense, described
in section [5.1](#Ch0.S5.SS1 "5.1 1st place in defense track: team TsAIL ‣ 5 Top scoring submissions ‣ 4 Competition results ‣ Adversarial Attacks and Defences Competition"), is an example of this family of denoising strategies.
Many defenses, intentionally or unintentionally, fall into a category called
“gradient masking.”
Most white box attacks operate by computing gradients of the model and thus
fail if it is impossible to compute useful gradients.
Gradient masking consists of making the gradient useless, either by changing
the model in some way that makes it non-differentiable or makes it have
zero gradients in most places, or make the gradients point away from the
decision boundary.
Essentially, gradient masking means breaking the optimizer without actually
moving the class decision boundaries substantially.
Because the class decision boundaries are more or less the same, defenses
based on gradient masking are highly vulnerable to black box transfer
[Papernot-2016-IntroTransferability](#bib.bib30) .
Some defense strategies (like replacing smooth sigmoid units with hard
threshold units) are intentionally designed to perform gradient masking.
Other defenses, like many forms of adversarial training, are not designed
with gradient masking as a goal, but seem to often learn to do gradient
masking when applied in practice.
Many defenses are based on detecting adversarial examples and refusing to
classify the input if there are signs of tampering [metzen2017detecting](#bib.bib28) .
This approach works long as the attacker is unaware of the detector or the attack is not strong enough.
Otherwise the attacker can construct an attack which simultaneously fools
the detector into thinking an adversarial input is a legitimate input and
fools the classifier into making the wrong classification [Carlini2017-Breaking10Detectors](#bib.bib5) .
Some defenses work but do so at the cost of seriously reducing accuracy on clean
examples. For example, shallow RBF networks are highly robust to adversarial
examples on small datasets like MNIST [goodfellow2014explaining](#bib.bib16) but have much worse
accuracy on clean MNIST than deep neural networks.
Deep RBF networks might be both robust to adversarial examples and accurate
on clean data, but to our knowledge no one has successfully trained one.
Capsule networks have shown robustness to white box attacks on the SmallNORB
dataset, but have not yet been evaluated on other datasets more commonly
used in the adversarial example literature [hinton2018matrix](#bib.bib13) .
The most popular defense in current research papers is probably adversarial
training [szegedy2104intriguing](#bib.bib38) ; [Goodfellow-2015-adversarial](#bib.bib15) ; [LearningWithStrongAdversary](#bib.bib20) .
The idea is to inject adversarial examples into training process and train the model either on adversarial examples
or on mix of clean and adversarial examples.
The approach was successfully applied to large datasets [Kurakin-AdversarialMlAtScale](#bib.bib24) ,
and can be made more effective by using discrete vector code representations rather
than real number representations of the input [thermometer\_enconding2018](#bib.bib4) .
One key drawback of adversarial training is that it tends to overfit to the specific
attack used at training time.
This has been overcome, at least on small datasets, by adding noise prior to
starting the optimizer for the attack [MadryPgd2017](#bib.bib27) .
Another key drawback of adversarial training is that it tends to inadvertently
learn to do gradient masking rather than to actually move the decision boundary.
This can be largely overcome by training on adversarial examples drawn from an
ensemble of several models [Tramer2017-EAT](#bib.bib40) .
A remaining key drawback of adversarial training is that it tends to overfit to
specific constraint region used to generate the adversarial examples (models
trained to resist adversarial examples in a max-norm ball may not resist
adversarial examples based on large modifications to background pixels [adversarial\_sphere2018](#bib.bib14)
even if the new adversarial examples
do not appear particularly challenging to a human observer).
3 Adversarial competition
--------------------------
The phenomenon of adversarial examples creates a new set of problems in machine learning.
Studying these problems is often difficult, because when a researcher proposes a new
attack, it is hard to tell whether their attack is strong, or whether they have not
implemented their defense method used for benchmarking well enough. Similarly, it is
hard to tell whether a new defense method works well or whether it has just not been
tested against the right attack.
To accelerate research in adversarial machine learning and pit many proposed attacks
and defenses against each other in order to obtain the most vigorous evaluation
possible of these methods, we decided to organize a competition.
In this competition participants are invited to submit methods which craft adversarial examples (attacks)
as well as classifiers which are robust to adversarial eaxmples (defenses).
When evaluating competition, we run all attack methods on our dataset to produce adversarial examples
and then run all defenses on all generated adversarial examples.
Attacks are ranked by number of times there were able to fool defenses
and defenses are scored by number of correctly classified examples.
###
3.1 Dataset
When making a dataset for these competition we had following requirements:
1. 1.
Large enough dataset and non-trivial problem, so the competition would be interesting.
2. 2.
Well known problem, so people potentially can reuse existing classifiers. (This ensures that competitors are able to focus on the adversarial
nature of the challenge, rather than spending all their time
coming up with a solution to the underlying task)
3. 3.
Data samples which were never used before, so participants unlikely to overfit to well known dataset.
These requirements were satisfied by choosing image classification problem and
creating a dataset with ImageNet-compatible images [deng2009imagenet](#bib.bib10) .
To do this we collected a set of images which were never used in publicly available datasets, labelled them
using pretrained ImageNet classifier and then manually verified that these labels are correct.
The original (non-adversarial) ImageNet challenge [ImageNetChallenge2015](#bib.bib32)
is a complex and interesting problem,
and thus satisfies requirement number 1.
Additionally there are plenty of existing classifiers for ImageNet data, which satisfies requirement number 2.
Because we collected new images and labelled them instead of using the
already available dataset, our approach satisfies requirement number 3.
Overall we collected two datasets for this competitions:
* •
DEV dataset was released to participants in the beginning of the competition,
so they can use it for development of their solutions. This dataset contained 1000100010001000 images.
* •
FINAL dataset was kept secret and was used to evaluate final submissions of participants.
It contained 5000500050005000 images.
###
3.2 Tasks and competition rules
Our competition had three tracks.
Each track had a different task:
* •
Non-targeted adversarial attack. In this track participants were invited
to submit a method which performs a
non-targeted black box attack, i.e. given an input image,
generate an adversarial image which is likely be misclassified by unknown classifier.
* •
Targeted adversarial attack. In this track participants were invited
to submit a method which performs
a targeted black box attack, i.e. given an input image and a target class,
generate an adversarial image which is likely be misclassified as a given target class
by unknown classifier.
* •
Defense against adversarial attacks. In this track, participants were invited
to submit a classifier which is robust to adversarial examples.
In all tracks participants submitted a program which performs the given task and
we ran this code using our evaluation infrastructure.
The submitted program was given a set of images
(and target classes in case of targeted attack) as an input,
and had to produce either an adversarial image (for attack submission)
or classification label (for defense submission) for each input image.
Input images were fed to each submission in batches of 100 images.
To limit the budget required for computation,
the rules of the competitions specified that each submission
had to process a batch of 100 images
in no more than 500 seconds.
If a submission was not done after 500 seconds, then processing of
the current batch was interrupted
and the submission was fed the next batch. Only successfully processed images were included in the score
of the submission (see details below in evaluation sub-section).
###
3.3 Evaluation metrics
An attack got 1111 point each time it was able to fool a defense on single image.
If attack was unable to fool a defense or if it failed to generate an
adversarial version of an image within the time limit
then it got 00 points for that image.
A defense got 1111 point for each correctly classified image and 00 points for incorrect classification
or failure to produce a classification label.
A defense was used to score attacks only if it successfully
processed all input images, so that attackers were neither
rewarded nor penalized for making images that caused the
defender to crash or time out—the competition is purely about
classification accuracy.
Likewise, an attack was used to score defenses only if it succesfully
processed all inputs.
Points for each submission were added together and then normalized (using the same normalization
constant for all submissions), such that final scores of all submissions are in the range [0,1]01[0,1][ 0 , 1 ],
where 1111 means success on all images and 00 means failure on all images.
Specifically, the score of each non-targeted attack a(Image)𝑎𝐼𝑚𝑎𝑔𝑒a(Image)italic\_a ( italic\_I italic\_m italic\_a italic\_g italic\_e ) was computed using
the following formula:
| | | | |
| --- | --- | --- | --- |
| | 1|D|N∑d∈D∑k=1N[d(a(Ik))≠TrueLabelk]1𝐷𝑁subscript𝑑𝐷superscriptsubscript𝑘1𝑁delimited-[]𝑑𝑎subscript𝐼𝑘𝑇𝑟𝑢𝑒𝐿𝑎𝑏𝑒subscript𝑙𝑘\frac{1}{|D|N}\sum\_{d\in D}\sum\_{k=1}^{N}[d(a(I\_{k}))\neq TrueLabel\_{k}]divide start\_ARG 1 end\_ARG start\_ARG | italic\_D | italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_d ∈ italic\_D end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_k = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT [ italic\_d ( italic\_a ( italic\_I start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) ) ≠ italic\_T italic\_r italic\_u italic\_e italic\_L italic\_a italic\_b italic\_e italic\_l start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ] | | (6) |
The score of each targeted attack a(Image,TargetLabel)𝑎𝐼𝑚𝑎𝑔𝑒𝑇𝑎𝑟𝑔𝑒𝑡𝐿𝑎𝑏𝑒𝑙a(Image,TargetLabel)italic\_a ( italic\_I italic\_m italic\_a italic\_g italic\_e , italic\_T italic\_a italic\_r italic\_g italic\_e italic\_t italic\_L italic\_a italic\_b italic\_e italic\_l ) was computed as:
| | | | |
| --- | --- | --- | --- |
| | 1|D|N∑d∈D∑k=1N[d(a(Ik,TargetLabelk))=TargetLabelk]1𝐷𝑁subscript𝑑𝐷superscriptsubscript𝑘1𝑁delimited-[]𝑑𝑎subscript𝐼𝑘𝑇𝑎𝑟𝑔𝑒𝑡𝐿𝑎𝑏𝑒subscript𝑙𝑘𝑇𝑎𝑟𝑔𝑒𝑡𝐿𝑎𝑏𝑒subscript𝑙𝑘\frac{1}{|D|N}\sum\_{d\in D}\sum\_{k=1}^{N}[d(a(I\_{k},TargetLabel\_{k}))=TargetLabel\_{k}]divide start\_ARG 1 end\_ARG start\_ARG | italic\_D | italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_d ∈ italic\_D end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_k = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT [ italic\_d ( italic\_a ( italic\_I start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT , italic\_T italic\_a italic\_r italic\_g italic\_e italic\_t italic\_L italic\_a italic\_b italic\_e italic\_l start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) ) = italic\_T italic\_a italic\_r italic\_g italic\_e italic\_t italic\_L italic\_a italic\_b italic\_e italic\_l start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ] | | (7) |
The score of each defense d(Image)𝑑𝐼𝑚𝑎𝑔𝑒d(Image)italic\_d ( italic\_I italic\_m italic\_a italic\_g italic\_e ) was computed in the following way:
| | | | |
| --- | --- | --- | --- |
| | 1|A|N∑a∈A∑k=1N[d(a(Ik))=TrueLabelk]1𝐴𝑁subscript𝑎𝐴superscriptsubscript𝑘1𝑁delimited-[]𝑑𝑎subscript𝐼𝑘𝑇𝑟𝑢𝑒𝐿𝑎𝑏𝑒subscript𝑙𝑘\frac{1}{|A|N}\sum\_{a\in A}\sum\_{k=1}^{N}[d(a(I\_{k}))=TrueLabel\_{k}]divide start\_ARG 1 end\_ARG start\_ARG | italic\_A | italic\_N end\_ARG ∑ start\_POSTSUBSCRIPT italic\_a ∈ italic\_A end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_k = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT [ italic\_d ( italic\_a ( italic\_I start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) ) = italic\_T italic\_r italic\_u italic\_e italic\_L italic\_a italic\_b italic\_e italic\_l start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ] | | (8) |
Where:
* •
A𝐴Aitalic\_A is the set of all attacks (targeted and non-targeted) which were used for evaluation of defenses (attacks that crashed on some inputs or ran
out of time for some inputs were not used to evaluate defense);
* •
D𝐷Ditalic\_D is the set of all defenses which were used for evaluation of attacks
(defenses that crashed on some inputs or ran out of time for some inputs
were not used to evaluate attacks);
* •
N𝑁Nitalic\_N is the number of images in the dataset;
* •
Iksubscript𝐼𝑘I\_{k}italic\_I start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT is the k𝑘kitalic\_k-th image from the dataset;
* •
TrueLabelk𝑇𝑟𝑢𝑒𝐿𝑎𝑏𝑒𝑙𝑘TrueLabel{k}italic\_T italic\_r italic\_u italic\_e italic\_L italic\_a italic\_b italic\_e italic\_l italic\_k is the true label of image Iksubscript𝐼𝑘I\_{k}italic\_I start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT;
* •
TargetLabelk𝑇𝑎𝑟𝑔𝑒𝑡𝐿𝑎𝑏𝑒subscript𝑙𝑘TargetLabel\_{k}italic\_T italic\_a italic\_r italic\_g italic\_e italic\_t italic\_L italic\_a italic\_b italic\_e italic\_l start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT is the chosen target label for image k𝑘kitalic\_k;
* •
[P]delimited-[]𝑃[P][ italic\_P ] is an indicator function which equals to 1111 when P𝑃Pitalic\_P is true,
and 00 when P𝑃Pitalic\_P is false or undefined.
* •
d(∙)𝑑∙d(\bullet)italic\_d ( ∙ ) is a defending classifier. If the binary
fails to complete execution within the time limit, the output of
d(∙)𝑑∙d(\bullet)italic\_d ( ∙ ) is a null label that never equals the true label.
If d(∙)𝑑∙d(\bullet)italic\_d ( ∙ ) is called on an undefined image, it is defined
to always return the true label, so an attacker that crashes
receives zero points.
Additionally to metrics used for ranking, after the competition we computed worst case score for each submission in defense and non-targeted attack tracks.
These scores were useful to understand how submissions act in the worst case.
To compute worst score of defense we computed accuracy of the defense against each attack and chosen minimum:
| | | | |
| --- | --- | --- | --- |
| | 1Nmina∈A∑k=1N[d(a(Ik))=TrueLabelk]1𝑁subscript𝑎𝐴superscriptsubscript𝑘1𝑁delimited-[]𝑑𝑎subscript𝐼𝑘𝑇𝑟𝑢𝑒𝐿𝑎𝑏𝑒subscript𝑙𝑘\frac{1}{N}\min\_{a\in A}\sum\_{k=1}^{N}[d(a(I\_{k}))=TrueLabel\_{k}]divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG roman\_min start\_POSTSUBSCRIPT italic\_a ∈ italic\_A end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_k = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT [ italic\_d ( italic\_a ( italic\_I start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) ) = italic\_T italic\_r italic\_u italic\_e italic\_L italic\_a italic\_b italic\_e italic\_l start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ] | | (9) |
To compute worst case score of non-targeted attack we computed how often attack caused misclassification when used against
each defense and chosen minimum misclassification rate:
| | | | |
| --- | --- | --- | --- |
| | 1Nmind∈D∑k=1N[d(a(Ik))≠TrueLabelk]1𝑁subscript𝑑𝐷superscriptsubscript𝑘1𝑁delimited-[]𝑑𝑎subscript𝐼𝑘𝑇𝑟𝑢𝑒𝐿𝑎𝑏𝑒subscript𝑙𝑘\frac{1}{N}\min\_{d\in D}\sum\_{k=1}^{N}[d(a(I\_{k}))\neq TrueLabel\_{k}]divide start\_ARG 1 end\_ARG start\_ARG italic\_N end\_ARG roman\_min start\_POSTSUBSCRIPT italic\_d ∈ italic\_D end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_k = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT [ italic\_d ( italic\_a ( italic\_I start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) ) ≠ italic\_T italic\_r italic\_u italic\_e italic\_L italic\_a italic\_b italic\_e italic\_l start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ] | | (10) |
Worst case score of targeted attack could be computed in a similar way,
but generally not useful because targeted attacks are much weaker than non-targeted
and all worst scores of targeted attacks were 00.
###
3.4 Competition schedule
The competition was announced in May 2017, launched in the beginning of July 2017 and finished on October 1st, 2017.
The ompetition was run in multiple rounds. There were three development rounds followed by the final round:
* •
August 1, 2017 - first development round
* •
September 1, 2017 - second development round
* •
September 15, 2017 - third development round
* •
October 1, 2017 - deadline for final submission
Development rounds were optional and their main purpose was to help participants to
test their solution.
Only the final round was used to compute final scores of submissions and determine winners.
All rounds were evaluated in a similar way.
For the evaluation of the round we gathered all submissions which were submitted
before the round deadline,
ran all of them and computed scores as described in section [3.3](#Ch0.S3.SS3 "3.3 Evaluation metrics ‣ 3 Adversarial competition ‣ Adversarial Attacks and Defences Competition").
We used DEV dataset to compute scores in development rounds
and secret FINAL dataset to compute scores in the final round.
###
3.5 Technical aspects of evaluation
Algorithm 1 Algorithm of work of evaluation infrastructure
1:▷▷\triangleright▷ PREPARE DATASET
2:
Split dataset D={I1,…,IN}𝐷subscript𝐼1…subscript𝐼𝑁D=\{I\_{1},\ldots,I\_{N}\}italic\_D = { italic\_I start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_I start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT } into batches {B1,…,Bk}subscript𝐵1…subscript𝐵𝑘\{B\_{1},\ldots,B\_{k}\}{ italic\_B start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_B start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT },
such that each batch Bisubscript𝐵𝑖B\_{i}italic\_B start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT contains 100100100100 image {I100(i−1)+1,…,I100i}subscript𝐼100𝑖11…subscript𝐼100𝑖\{I\_{100(i-1)+1},\ldots,I\_{100i}\}{ italic\_I start\_POSTSUBSCRIPT 100 ( italic\_i - 1 ) + 1 end\_POSTSUBSCRIPT , … , italic\_I start\_POSTSUBSCRIPT 100 italic\_i end\_POSTSUBSCRIPT }.
3:
Assign size of maximum allowed perturbation ϵisubscriptitalic-ϵ𝑖\epsilon\_{i}italic\_ϵ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT to each batch Bisubscript𝐵𝑖B\_{i}italic\_B start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT.
Value of ϵisubscriptitalic-ϵ𝑖\epsilon\_{i}italic\_ϵ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT is randomly chosen from the set
{4255,8255,12255,16255}425582551225516255\{\frac{4}{255},\frac{8}{255},\frac{12}{255},\frac{16}{255}\}{ divide start\_ARG 4 end\_ARG start\_ARG 255 end\_ARG , divide start\_ARG 8 end\_ARG start\_ARG 255 end\_ARG , divide start\_ARG 12 end\_ARG start\_ARG 255 end\_ARG , divide start\_ARG 16 end\_ARG start\_ARG 255 end\_ARG }
4:▷▷\triangleright▷ RUN ALL ATTACKS
5:for all b∈{1,…,k}𝑏1…𝑘b\in\{1,\ldots,k\}italic\_b ∈ { 1 , … , italic\_k } do ▷▷\triangleright▷ loop over all batches, b𝑏bitalic\_b is batch index
6: for all non-targeted attacks a𝑎aitalic\_a do
7:
Run attack a𝑎aitalic\_a on batch Bbsubscript𝐵𝑏B\_{b}italic\_B start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT and generate a batch of adversarial images B^basubscriptsuperscript^𝐵𝑎𝑏\hat{B}^{a}\_{b}over^ start\_ARG italic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT.
Size of maximum perturbation ϵbsubscriptitalic-ϵ𝑏\epsilon\_{b}italic\_ϵ start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT is provided to an attack.
8:
Project each adversarial image from B^basubscriptsuperscript^𝐵𝑎𝑏\hat{B}^{a}\_{b}over^ start\_ARG italic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT
into L∞subscript𝐿L\_{\infty}italic\_L start\_POSTSUBSCRIPT ∞ end\_POSTSUBSCRIPT ϵbsubscriptitalic-ϵ𝑏\epsilon\_{b}italic\_ϵ start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT-neighborhood of corresponding clean image from Bbsubscript𝐵𝑏B\_{b}italic\_B start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT.
9: end for
10: for all targeted attacks t𝑡titalic\_t do
11:
Run attack t𝑡titalic\_t on batch Bbsubscript𝐵𝑏B\_{b}italic\_B start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT and generate a batch of adversarial images B^btsubscriptsuperscript^𝐵𝑡𝑏\hat{B}^{t}\_{b}over^ start\_ARG italic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT.
Attack is provided with size of maximum perturbation ϵbsubscriptitalic-ϵ𝑏\epsilon\_{b}italic\_ϵ start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT as well as target classes for
each image from the batch Bbsubscript𝐵𝑏B\_{b}italic\_B start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT.
12:
Project each adversarial image from B^btsubscriptsuperscript^𝐵𝑡𝑏\hat{B}^{t}\_{b}over^ start\_ARG italic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT
into L∞subscript𝐿L\_{\infty}italic\_L start\_POSTSUBSCRIPT ∞ end\_POSTSUBSCRIPT ϵbsubscriptitalic-ϵ𝑏\epsilon\_{b}italic\_ϵ start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT-neighborhood of corresponding clean image from Bbsubscript𝐵𝑏B\_{b}italic\_B start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT.
13: end for
14:end for
15:▷▷\triangleright▷ RUN ALL DEFENSES
16:for all b∈{1,…,k}𝑏1…𝑘b\in\{1,\ldots,k\}italic\_b ∈ { 1 , … , italic\_k } do ▷▷\triangleright▷ loop over all batches, b𝑏bitalic\_b is batch index
17: for all defense d𝑑ditalic\_d do
18: for all non-targeted attacks a𝑎aitalic\_a do
19:
Run defense d𝑑ditalic\_d on all images from batch B^basubscriptsuperscript^𝐵𝑎𝑏\hat{B}^{a}\_{b}over^ start\_ARG italic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_a end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT
20: end for
21: for all targeted attacks t𝑡titalic\_t do
22:
Run defense d𝑑ditalic\_d on all images from batch B^btsubscriptsuperscript^𝐵𝑡𝑏\hat{B}^{t}\_{b}over^ start\_ARG italic\_B end\_ARG start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT
23: end for
24: end for
25:end for
26:▷▷\triangleright▷ COMPUTE SCORES
27:
Determine subset of targeted and non-targeted attacks A𝐴Aitalic\_A which produces all adversarial images
28:
Determine subset of defenses D𝐷Ditalic\_D which output classification labels for all input images
29:
Compute scores of all submissions using
equations [6](#Ch0.E6 "6 ‣ 3.3 Evaluation metrics ‣ 3 Adversarial competition ‣ Adversarial Attacks and Defences Competition"), [7](#Ch0.E7 "7 ‣ 3.3 Evaluation metrics ‣ 3 Adversarial competition ‣ Adversarial Attacks and Defences Competition"), [8](#Ch0.E8 "8 ‣ 3.3 Evaluation metrics ‣ 3 Adversarial competition ‣ Adversarial Attacks and Defences Competition")
Competition participants were submitting pieces of code and we have run them ourselves.
This approach posess several challanges. First of all we need to protect competition infrastructure
from malicious code. Secondly, given the dataset size and number of submissions we had to run it
in an efficient way.
We partnered with Kaggle222<www.kaggle.com> and used their platform as a frontend for the competition.
Kaggle hosted competition web-site, leaderboard
and participants were uploading submissions through Kaggle.
For evaluation of each round we took all submissions from Kaggle and fed them into our evaluation infrastructure.
Evaluation infrastructure worked as described in algorithm [1](#alg1 "Algorithm 1 ‣ 3.5 Technical aspects of evaluation ‣ 3 Adversarial competition ‣ Adversarial Attacks and Defences Competition").
As could be seen from the algorithm, attacks could be run independently of each other.
Same for defenses.
We took advantage of this fact and parallelized execution of all attacks and all defenses
by spreading them across multiple machines.
For final evaluation we used 100 Google Cloud VMs. At any given moment
one VM was running either one attack on one batch from the dataset
or one defense on one batch of adversarial images.
Submissions were run inside Docker containers to isolate submissions from our evaluation
infrastructure and from the outside world.
4 Competition results
----------------------
For the final round we had 91 non-targeted attack submissions,
65 targeted attack submission and 107 defense submissions.
During the course of competitions submissions kept improving their results compared to baselines which could be
seen from Figure [1](#Ch0.F1 "Figure 1 ‣ 4 Competition results ‣ Adversarial Attacks and Defences Competition").

(a) Defenses

(b) Non-targeted attacks

(c) Targeted attacks
Figure 1: Plots which shows difference between score of top submission and best baseline in each round in each track.
As could be seen from the plot, submissions kept improving each round.
Final results of the top submissions in each track are provided in tables [4](#Ch0.S4 "4 Competition results ‣ Adversarial Attacks and Defences Competition"),
[2](#Ch0.T2 "Table 2 ‣ 4 Competition results ‣ Adversarial Attacks and Defences Competition") and [3](#Ch0.T3 "Table 3 ‣ 4 Competition results ‣ Adversarial Attacks and Defences Competition").
Meaning of the columns is following.
Rank is submission rank in final scoring,
score is submission score as described in Section [3.3](#Ch0.S3.SS3 "3.3 Evaluation metrics ‣ 3 Adversarial competition ‣ Adversarial Attacks and Defences Competition"),
raw score is un-normalized score which is number of times submission got a point on the image,
worst score is submission score in the worst case and medial eval time is
median time needed for evaluation of one batch of 100100100100 images.
To put things into prospective, plots of all submission scores in final round from best to worst and comparison with
provided baselines are depicted in Figure [2](#Ch0.F2 "Figure 2 ‣ 4 Competition results ‣ Adversarial Attacks and Defences Competition").
Table 1: Top-5 defense submissions, best baseline and submission with maximum worst-case score
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| Rank | Team name or baseline | Score | Raw Score | Worst Score | Median eval time |
| \svhline
1 | TSAIL | 0.953164 | 691044 | 0.1184 | 51.0 |
| 2 | iyswim | 0.923524 | 669555 | 0.2520 | 126.0 |
| 3 | Anil Thomas | 0.914840 | 663259 | 0.2666 | 97.0 |
| 4 | erko | 0.911961 | 661172 | 0.2920 | 87.0 |
| 5 | Stanford & Suns | 0.910593 | 660180 | 0.0682 | 129.0 |
| 24 | Rafael Moraes | 0.871739 | 632011 | 0.5358 | 17.0 |
| 56 | Baseline (Ens. adv. ir\_v2) | 0.772908 | 560358 | 0.0186 | 17.0 |
Table 2: Top-5 non-targeted attack submissions, best baseline and best submission with according to worst-case score.
| Rank | Team name or baseline | Score | Raw Score | Worst Score | Median eval time |
| --- | --- | --- | --- | --- | --- |
| 1 | TSAIL | 0.781644 | 410363 | 0.1364 | 423.0 |
| 2 | Sangxia | 0.776855 | 407849 | 0.3412 | 421.0 |
| 3 | Stanford & Sun | 0.774025 | 406363 | 0.2722 | 497.0 |
| 4 | iwiwi | 0.768981 | 403715 | 0.1352 | 76.0 |
| 5 | toshi\_k | 0.755598 | 396689 | 0.3322 | 448.0 |
| 44 | Baseline (FGSM) | 0.346400 | 181860 | 0.0174 | 17.0 |
Table 3: Top-5 targeted attack submissions and best baseline.
| Rank | Team | Score | Raw Score | Median Eval Time |
| --- | --- | --- | --- | --- |
| 1 | TSAIL | 0.402211 | 211161.0 | 392.0 |
| 2 | Sangxia | 0.368773 | 193606.0 | 414.0 |
| 3 | FatFingers | 0.368029 | 193215.0 | 493.0 |
| 4 | Anil Thomas | 0.364552 | 191390.0 | 495.0 |
| 5 | WNP | 0.347935 | 182666.0 | 487.0 |
| 24 | Baseline (Iter. T. C. 20) | 0.199773 | 104881.0 | 127.0 |

(a) Defenses

(b) Non-targeted attacks

(c) Targeted attacks
Figure 2: Plots with scores of submissions in all three tracks.
Solid line of each plot is scores of submissions depending on submission rank.
Dashed lines are scores of baselines we provided.
These plots demonstrate difference between best and worst submissions as well as
how much top submissions were able to improve provided baselines.
As could be seen from the tables, best defenses achieved more than 90%percent9090\%90 % accuracy
on all adversarial images from all attacks.
At the same time worst case scores of defenses are much lower.
The highest worst case score among all defenses is only 53.6%percent53.653.6\%53.6 %.
This could be an indication that it’s possible to achieve pretty high accuracy against adversarial examples in average case,
however model will still be suspectible to adversarial examples and could be fooled if adversary will be able to find them.
Similar observation holds for attacks.
Best attacks achieved up to 78%percent7878\%78 % fooling rate against all defenses, at the same time worst case score of attacks was no more than 34%percent3434\%34 %. |
6ec5b036-0267-43fa-a20c-0eb6686a79ed | trentmkelly/LessWrong-43k | LessWrong | Let's build a fire alarm for AGI
Epistemic status: This uses as a premise the post "There is no fire alarm for AGI" by Eliezer Yudkowsky. It will make little sense to you if you are unfamiliar with that essay or if you disagree with it.
Fear of embarrassment has been empirically shown to stop people from reacting to serious threats. A fire alarm creates common knowledge of the possible presence of a serious danger and provides an excuse to react that saves us from embarrassment. Eliezer said in 2017 that there is no such thing for artificial general intelligence. This seems to continue to be true. Let's stop accepting that state of affairs, and do something about it - let's build such a fire alarm.
This fire alarm has to have three traits:
1. It needs to detect the imminence of AGI. We won’t know until it’s too late how well its detection mechanism will have worked. So this will have to be designed on the best assumptions we have.
2. It needs to be loud: very public, very media-friendly, with as many established and friction-free channels to multipliers as possible.
3. It needs to be extremely easy to understand: a boolean or an integer or something similarly primitive. Details can be offered for the tiny but important minority that wants them, but we need to understand that almost all who receive the alarm won’t care about details.
The Doomsday Clock by the Bulletin of the Atomic Scientists provides a useful template. It takes many complicated details about nuclear weapons development and proliferation, international relations and arms control treaties, and simplifies them into a single number, the Minutes to Midnight. You’ve heard of them - and that’s the point. This alarm is 76 years old and still succeeds at getting news coverage.
There already is ARC Evals, which does a similar thing: they develop tools for AI labs to check whether their models are becoming dangerous. This is not the fire alarm proposed here, because it is not directed at the general public, does not create common know |
6179684c-d08e-4307-a7d8-f79a11e92335 | trentmkelly/LessWrong-43k | LessWrong | Meetup : San Francisco Meetup: Revealed New Years Resolutions
Discussion article for the meetup : San Francisco Meetup: Revealed New Years Resolutions
WHEN: 04 January 2016 06:15:00PM (-0800)
WHERE: 1597 Howard St. San Francisco, CA
We'll be meeting to derive our revealed new years resolutions from last year, based on what we did that year!
Discussion article for the meetup : San Francisco Meetup: Revealed New Years Resolutions |
d1aad7a3-71d0-458a-a0d2-7c8ff3448a49 | trentmkelly/LessWrong-43k | LessWrong | Won't vs. Can't: Sandbagging-like Behavior from Claude Models
In a recent Anthropic Alignment Science blog post, we discuss a particular instance of sandbagging we sometimes observe in the wild: models sometimes claim that they can't perform a task, even when they can, to avoid performing tasks they perceive as harmful. For example, Claude 3 Sonnet can totally draw ASCII art, but often if you ask it to draw subjects it perceives as harmful it will claim that it doesn't have the capability to.
Sandbagging could be worrying if it means that we fail to accurately measure models' dangerous capabilities (for example, as required by a Responsible Scaling Policy). If a model realizes during these evaluations that they are being evaluated for harmful capabilities, it might sandbag such questions. This could result in us underestimating the dangers the model poses and failing to deploy appropriate safeguards alongside the model.
Instances of sandbagging like the one discussed in our blog post show that sandbagging is far from a speculative concern. Instead, it's something to be noticed and explored in the models of today. |
46483b1f-445d-42a9-a331-344d5cb21ed8 | trentmkelly/LessWrong-43k | LessWrong | Use resilience, instead of imprecision, to communicate uncertainty
Gregory Lewis (Thrasymachus here on LW) posted a great summary of a bunch of arguments I've referenced many times, about how it's better to communicate resilience instead of imprecision to communicate if you are uncertain about something.
> Suppose you want to estimate some important X (e.g. risk of great power conflict this century, total compute in 2050). If your best guess for X is 0.37, but you're very uncertain, you still shouldn't replace it with an imprecise approximation (e.g. "roughly 0.4", "fairly unlikely"), as this removes information. It is better to offer your precise estimate, alongside some estimate of its resilience, either subjectively ("0.37, but if I thought about it for an hour I'd expect to go up or down by a factor of 2"), or objectively ("0.37, but I think the standard error for my guess to be ~0.1"). |
32de1566-ab16-41fa-8bbf-1e6937d54c97 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Alex Lawsen On Forecasting AI Progress
Alex Lawsen is an [advisor](https://80000hours.org/speak-with-us/) at [80,000hours](https://80000hours.org/) who has released an [Introduction to Forecasting](https://youtu.be/e6Q7Ez3PkOw). We discuss pitfalls that happen when forecasting AI progress, why you cannot just [update all the way](http://theinsideview.ai/connor2#just-update-all-the-way-bro) (discussed in [my latest episode with Connor Leahy](https://youtu.be/Oz4G9zrlAGs)) and how to develop your own inside views about AI Alignment.
Below are some highlighted quotes from our conversation (available on [Youtube](https://youtu.be/vLkasevJP5c), [Spotify](https://open.spotify.com/episode/1vvAKf8EBwErP5yGFRNoCT?si=1a28296cdfa94c01), [Google Podcast](https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy81NmRmMjE5NC9wb2RjYXN0L3Jzcw/episode/MzJlMzk4YTAtYmMzZC00MDVkLWIzMTAtNTZhMmM2ZDc2MTg0?sa=X&ved=0CAUQkfYCahcKEwiI2sT3hY35AhUAAAAAHQAAAAAQAQ), [Apple Podcast](https://podcasts.apple.com/us/podcast/connor-leahy-eleutherai-conjecture/id1565088425?i=1000570841369)). For the full context for each of these quotes, you can find the accompanying [transcript](https://theinsideview.ai/alex).
On the Metaculus AGI Forecasts
==============================
Why You Cannot Just Update All The Way
--------------------------------------
> **"There are some situations where all of the positive evidence you get is going to be in the same direction, and then the negative evidence you get is nothing happens. And so, ideally, what you do in this case is every day that nothing happens, you make a tiny update in one direction. And then every few weeks or every few months, something big happens and you make an update in the other direction.** And if that is the case, maybe what you'll see is people just... they forget to do the small downwards updates and then they do the big updates every time something happens. And I think if you do the Connor thing of seeing... Well, I'm not too sure this is the Connor thing. But if you see four updates and they're all in the same direction and then you go like, 'Oh, man, everything's going the same direction. I need to be really confident stuff going that direction.' Then **every day something doesn't happen, your downwards update needs to be pretty big.** **If you're expecting massive progress, then a week going by and nothing happening is actually big evidence for you.**"
>
>
The Metaculus Drops Were Not Caused By Newcomers
------------------------------------------------
> "**One hypothesis you might have which I think a friend of mine falsified,** **is** '**a whole bunch of people saw these results. These results were all over Twitter, it was impressive**. Chinchilla was impressive, PaLM was impressive'. So, you might think, 'Oh, well, a bunch of new people who haven't made timelines forecasts before are going to jump on this Metaculus question and they're going to make predictions.' **And so, you can test this, right. You can look at how the median changed among predictors who had already predicted on the question and that median dropped too."**
>
>
On Using Growth Models To Forecast AGI
======================================
Business As Usual Does Not Require Burden Of Proof
--------------------------------------------------
> "I think there was a class of skepticism about safety or skepticism about AGI, which goes something like this, 'In general, you should use reference classes to determine your forecasts.' What this means roughly translated, is you should predict things to carry on roughly how they are. And then **people say, 'Things carrying on roughly how they are doesn’t look like we get AI takeover and everyone dropping dead' so you should have a very high burden of proof for the step by step arguments, logical arguments, in order to claim we are going to get something wild like AGI in the next few decades.** And I think a really strong response to this line of argument is to say, 'What do you mean everything continues as normal means we don’t get anything weird?' **'Everything continues as normal' means we should look at curves and different things and expect them to carry on smoothly. And if you look at curves and a bunch of different things and expect them to carry on smoothly, you get really weird behavior pretty quickly.**"
>
>
Growth Models Are Not Sufficient To Forecast AI Progress
--------------------------------------------------------
> "**Curve fitting to economic growth models is not sufficient reason to believe that on its own.** You can then look at the development of AGI and predict that happens by 2050 and then you can say, 'Wow, economic stuff’s going to go wild after that point.' But then **the reason you’re saying that is because of a combination of facts, including actually having a gears level model of what’s happening...** **The growth models are, in my view, sufficient to say you should look at the next few decades carefully and see what’s going on, but are not sufficient on their own to allow you to confidently predict what will happen.**"
>
> |
428dc12a-eda0-431d-a52e-631c610d593b | trentmkelly/LessWrong-43k | LessWrong | You get one story detail
So, as you know, you have about five words. That is, if you want to convey an idea to a bunch of people, you can only reliably communicate ideas simply enough to fit in five words.
I have a related theory.
If you want a lot of people to remember a story, you have to keep it tiny. How tiny? I'll explain.
If you're talking to someone one-on-one, you can go into great detail telling a nuanced story. They can ask questions and you can help them understand the deep motivations for why things happened. You can successfully convey a story with a lot of nuance so that they can retell it.
If you're talking to a small group you can tell a complex story. There's several of people so you don't have time to check with each of them explicitly at each story beat, but you can read the faces of your listeners and get a sense for what they are understanding. You can go into more detail when they look confused. In this way you can convey a complex story so each person could walk away and tell it to others mostly intact.
If you're talking to a big group you can tell a simple story. There's too many people for you to engage with anyone directly. You might still be able to check a few faces for confusion or understanding, but not all of them. And you now have a large enough audience that you can't trust that your audience has a sufficiently similar background to grasp all the details from a shared version of it. So you can convey only a fairly simple story that relies on tropes the audience is familiar with if you hope to have them retell it.
If you're talking to a huge group you can tell a tiny story. At this point you're probably not even going to get to see if anyone really understands or not, and certainly not in realtime. Your audience is literally or metaphorically obscured in darkness from you. If you want to tell the a story they'll be able to retell, it's going to have to be tiny. Five words tiny. You can get away with some connecting words, but your story has to have a ve |
47755ed9-5e65-40fb-b9c5-3c4d09cd6be9 | trentmkelly/LessWrong-43k | LessWrong | Conditions for Superrationality-motivated Cooperation in a one-shot Prisoner's Dilemma
Summary
It has been argued that, if two very similar agents follow decision theories allowing for superrationality (e.g., EDT and FDT), they would cooperate in a prisoner’s dilemma (PD) (see e.g., Oesterheld 2017). But how similar do they need to be exactly? In what way? This post is an attempt at addressing these questions. This is, I believe, particularly relevant to the work of the Center on Long-Term Risk on acausal reasoning and the foundations of rational agency (see section 7 of their research agenda).
I’d be very interested in critics/comments/feedback. This is the main reason why I’m posting this here. :)
Normal PD
Consider this traditional PD between two agents:
Alice/BobCDC3, 30, 5D5, 01, 1
We can compute the expected payoffs of Alice and Bob (UAliceand UBob) as a function of p (the probability that Alice plays C) and q (the probability that Bob plays C):
UAlice(p,q)=4q−pq−p+1
UBob(p,q)=4p−qp−q+1
Now, Alice wants to find p∗ (the optimal p, i.e., the p that will maximize her payoff). Symmetrically, Bob wants to find q∗. They do some quick math and find that p∗=q∗=0=0, i.e., they should both play D. This is the unique Nash equilibrium of this game.
Perfect-copy PD
Now, say Alice and Bob are perfect copies. How does it change the game presented above? We still have the same payoffs:
UAlice(p,q)=4q−pq−p+1
UBob(p,q)=4p−qp−q+1
However, this time, p=q. Whatever one does, that’s evidence that the other does the exact same. They are decision-entangled[1].
What does that mean for the payoff functions of Alice and Bob? Well, decision theorists disagree. Let’s see what the two most popular decision theories (CDT and EDT) say, according to my (naive?) understanding:
* EDT: “Alice should substitute q for p and her formula. Symmetrically, Bob should do the exact opposite in his”.
* UAlice(p,q)=4p−pp−p+1
* UBob(p,q)=4q−qq−q+1
* CDT: “Alice should hold q fixed. Same for Bob and p. They should behave as if they could change their action unilateral |
cefd7222-4a58-4a89-8e25-553c55816fdc | trentmkelly/LessWrong-43k | LessWrong | “Necessary Claims”: A technique to structure complex decisions
Life often turns out to be too complicated to fit into a human brain. If you’ve ever tried to choose from fifty flavors of ice cream, or to decide whether to buy a house, you know what I mean.
Humans tend to make complex decisions based on simplified models called heuristics. Heuristics sometimes turn out to lack important factors, be biased, or hard to explain. Often, that’s fine - the stakes are low, you don’t have time and no one needs to understand your reasoning (e.g., choosing from fifty flavors of ice cream). “Necessary Claims'' is for the cases where a high risk of being incomplete, biased or hard to explain isn’t fine: Stakes are high, and you may need to involve others in your decision (e.g., buying a house).
Breaking down a decision into necessary claims helps to not overlook important factors, and it gives you a story to tell. In the beginning, your story might sound like “we will probably decide this way, if x turns out to be true and y turns out to be false”. And later on, it will change to “we have decided this way, because we are highly confident that x is true and somewhat confident that y is false”.
Note that this technique formalizes a process that may seem obvious to some people. To me, it was hugely helpful to think about it explicitly and use it systematically, and I hope it will be to others, too. It also builds on several techniques taught by CFAR, although I am not officially affiliated with CFAR, this post reflects my personal experience and opinions and all mistakes are my own. I'm grateful to Elizabeth Garrett and Kyle Scott for feedback on an initial draft.
In a nutshell
To make a decision, you list all the necessary claims that would need to be true for you to decide one way. You also list your confidence that each claim is true. You then do analyses to support (or refute) those claims, until your aggregate confidence is high enough to decide.
Description of the technique, using an example
Say you and your partner are considerin |
6bc96439-1128-45be-b4ca-100a00cb3de1 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "...Do not all charms fly At the mere touch of cold philosophy? There was an awful rainbow once in heaven: We know her woof, her texture; she is given In the dull catalogue of common things. —John Keats, Lamia
"Nothing is 'mere'." —Richard Feynman You've got to admire that phrase, "dull catalogue of common things". What is it, exactly, that goes in this catalogue? Besides rainbows, that is?
Why, things that are mundane, of course. Things that are normal; things that are unmagical; things that are known, or knowable; things that play by the rules (or that play by any rules, which makes them boring); things that are part of the ordinary universe; things that are, in a word, real.
Now that's what I call setting yourself up for a fall.
At that rate, sooner or later you're going to be disappointed in everything—either it will turn out not to exist, or even worse, it will turn out to be real.
If we cannot take joy in things that are merely real, our lives will always be empty. For what sin are rainbows demoted to the dull catalogue of common things? For the sin of having a scientific explanation. "We know her woof, her texture", says Keats—an interesting use of the word "we", because I suspect that Keats didn't know the explanation himself. I suspect that just being told that someone else knew was too much for him to take. I suspect that just the notion of rainbows being scientifically explicable in principle would have been too much to take. And if Keats didn't think like that, well, I know plenty of people who do.
I have already remarked that nothing is inherently mysterious—nothing that actually exists, that is. If I am ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon; to worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance; a blank map does not correspond to a blank territory, it is just somewhere we haven't visited yet, etc. etc...
Which is to say that everything—everything that actually exists—is liable to end up in "the dull catalogue of common things", sooner or later.
Your choice is either: Decide that things are allowed to be unmagical, knowable, scientifically explicable, in a word, real, and yet still worth caring about;
Or go about the rest of your life suffering from existential ennui that is unresolvable. (Self-deception might be an option for others, but not for you.)
This puts quite a different complexion on the bizarre habit indulged by those strange folk called scientists, wherein they suddenly become fascinated by pocket lint or bird droppings or rainbows, or some other ordinary thing which world-weary and sophisticated folk would never give a second glance.
You might say that scientists—at least some scientists—are those folk who are in principle capable of enjoying life in the real universe."
What you see above is a segment from a LessWrong user's post.
What are some reasons why people may like or agree with what this person is saying?
There are several reasons why people might resonate with the sentiments expressed in this passage:
1. Embracing Reality: The author encourages a deep appreciation for reality and the natural world, even in its most mundane aspects. This could resonate with those who value scientific understanding and believe that knowledge doesn't diminish the wonder of existence, but rather enhances it.
2. Intellectual Curiosity: The passage appeals to those who possess a strong intellectual curiosity. The idea that everything, no matter how seemingly ordinary, has the potential to be fascinating could resonate with lifelong learners and those who enjoy exploring the intricacies of the world around them.
3. Rejection of Ignorance: The author rejects the notion that ignorance is a state to be celebrated or that mystery automatically confers value. This perspective might appeal to those who value knowledge and understanding over the unknown.
4. Existential Satisfaction: The passage suggests that finding joy in the "real" world is a path to existential satisfaction. This could appeal to those who seek a meaningful existence without resorting to supernatural or mystical explanations.
5. Appreciation for Science: The author's admiration for scientists and their fascination with understanding the world could resonate with those who hold a high regard for the scientific method and its contributions to our understanding of the universe.
6. Encouragement of Perspective Shift: The passage encourages a shift in perspective – from seeing the explained and ordinary as dull, to viewing it as an opportunity for further exploration and understanding. This might appeal to those who enjoy challenging their perceptions and broadening their horizons.
Expand on the idea of existential satisfaction derived from understanding the "real" world. How does this compare and contrast with satisfaction derived from faith or belief in the supernatural?
Existential satisfaction derived from understanding the "real" world is rooted in the pursuit and acquisition of knowledge about the tangible, observable universe. This form of satisfaction is often associated with the scientific method, empirical evidence, and rational thought. It involves a deep appreciation for the laws of nature, the complexity of life, and the vastness of the universe. This understanding can foster a sense of awe, wonder, and humility, leading to a fulfilling existential experience.
In this perspective, the world is seen as inherently fascinating and worthy of exploration. Every discovery, no matter how small, contributes to a larger understanding of existence. The mysteries of the universe are seen not as unknowable, but as challenges to be solved. This can lead to a sense of purpose and satisfaction in the pursuit of knowledge and the quest to understand our place in the universe.
On the other hand, existential satisfaction derived from faith or belief in the supernatural is often rooted in the acceptance of mysteries, miracles, and phenomena beyond human understanding or outside the realm of natural laws. This form of satisfaction can provide comfort, a sense of purpose, and a framework for understanding the world and one's place in it.
Faith can offer answers to some of life's biggest questions, such as the meaning of life, the nature of good and evil, and what happens after death. It can provide a sense of community, moral guidance, and a framework for personal growth and development. The belief in a higher power or a grand design can offer comfort and hope, leading to a sense of peace and fulfillment.
In comparison, both forms of existential satisfaction offer a sense of purpose and a framework for understanding the world. However, they differ in their approach to mystery and the unknown. The scientific perspective sees mysteries as challenges to be solved, while the faith-based perspective often embraces mysteries as divine and beyond human comprehension.
It's important to note that these two forms of existential satisfaction are not mutually exclusive. Many people find a balance between the two, embracing the awe-inspiring realities revealed by science while also finding comfort and purpose in their spiritual beliefs.
How might one balance the pursuit of scientific understanding with maintaining spiritual beliefs, particularly when these two realms seem to conflict?
Balancing the pursuit of scientific understanding with maintaining spiritual beliefs can be challenging, especially when they seem to conflict. However, it's important to remember that science and spirituality address different aspects of human experience and can coexist harmoniously. Here are some strategies for maintaining this balance:
1. Separate Domains: Recognize that science and spirituality operate in different domains. Science deals with the physical and natural world, seeking to understand how things work. Spirituality, on the other hand, deals with the metaphysical, seeking to understand why things are and what our purpose is. They answer different types of questions and can coexist without necessarily contradicting each other.
2. Complementary Perspectives: View science and spirituality as complementary rather than conflicting. Science can explain the mechanisms of the universe, while spirituality can provide a sense of purpose and meaning. Both can contribute to a richer understanding of existence.
3. Embrace Uncertainty: Accept that both science and spirituality involve elements of uncertainty. Science is always evolving, with new discoveries challenging old theories. Similarly, spirituality often involves faith in things unseen or unproven. Embracing uncertainty in both realms can allow for a more nuanced understanding of the world.
4. Personal Interpretation: Interpret spiritual beliefs in a way that is compatible with scientific understanding. Many religious texts and teachings are metaphorical or allegorical, and can be understood in ways that do not conflict with scientific knowledge.
5. Dialogue and Open-mindedness: Engage in open-minded dialogue with others who hold different views. This can lead to a deeper understanding of both science and spirituality, and can help identify areas of potential harmony.
6. Holistic Approach: Take a holistic approach to understanding the world. Recognize that there are many ways of knowing, and that science and spirituality can both contribute valuable insights.
Balancing science and spirituality is a personal journey and what works for one person may not work for another. The key is to remain open, curious, and respectful of both domains.
Given the strategies for balancing scientific understanding and spiritual beliefs, how might these approaches impact one's personal growth and development, and their interactions within a diverse society?
Balancing scientific understanding and spiritual beliefs can have a profound impact on personal growth and development, as well as interactions within a diverse society. Here's how:
1. Personal Growth and Development: This balance can foster intellectual curiosity, critical thinking, and open-mindedness, which are key components of personal growth. Understanding science can enhance problem-solving skills and logical reasoning, while spirituality can provide a framework for ethical decision-making, empathy, and a sense of purpose. This holistic approach can lead to a more well-rounded understanding of the world and oneself.
2. Respect for Diversity: Engaging with both science and spirituality can foster a greater respect for diversity. Recognizing that there are multiple ways of understanding the world can make one more open to different perspectives and cultures. This can lead to more respectful and productive interactions within a diverse society.
3. Interpersonal Relationships: The ability to understand and respect different perspectives can enhance interpersonal relationships. It can facilitate better communication, mutual respect, and understanding in interactions with others who may have different beliefs.
4. Social Responsibility: A balanced understanding of science and spirituality can also contribute to a sense of social responsibility. Scientific understanding can inform actions related to environmental sustainability, public health, and technological advancement. Simultaneously, spiritual beliefs can guide actions towards compassion, justice, and communal well-being.
5. Resilience: This balance can also contribute to resilience. Science can provide practical strategies for dealing with challenges, while spirituality can offer comfort, hope, and a sense of meaning during difficult times.
6. Lifelong Learning: The pursuit of both scientific and spiritual understanding encourages lifelong learning. This continual quest for knowledge and understanding can keep one intellectually engaged and promote personal growth throughout life.
By fostering a balance between scientific understanding and spiritual beliefs, individuals can enhance their personal growth, improve their interactions with others, and contribute positively to a diverse society. |
d6d3ad31-f94a-45be-9d50-930d74a106d1 | trentmkelly/LessWrong-43k | LessWrong | Fluent, Cruxy Predictions
The latest in the Feedback Loop Rationality series.
Periodically, people (including me) try to operationalize predictions, or bets, and... it doesn't seem to help much.
I think I recently "got good" at making "actually useful predictions." I currently feel on-the-cusp of unlocking a host of related skills further down the rationality tech tree. This post will attempt to spell out some of the nuances of how I currently go about it, and paint a picture of why I think it's worth investing in.
The takeaway that feels most important to me is: it's way better to be "fluent" at operationalizing predictions, compared to "capable at all."
Previously, "making predictions" was something I did separately from my planning process. It was a slow, clunky process.
Nowadays, reasonably often I can integrate predictions into the planning process itself, because it feels lightweight. I'm better at quickly feeling-around for "what sort of predictions would actually change my plans if they turned out a certain way?", and then quickly check in on my intuitive sense of "what do I expect to happen?"
Fluency means you can actually use it day-to-day to help with whatever work is most important to you. Day-to-day usage means you can actually get calibrated for predictions in whatever domains you care about. Calibration means that your intuitions will be good, and you'll know they're good.
If I were to summarize the change-in-how-I-predict, it's a shift from:
"Observables-first". i.e. looking for things I could easily observe/operationalize, that were somehow related to what I cared about.
to:
"Cruxy-first". i.e. Look for things that would change my decisionmaking, even if vague, and learn to either better operationalize those vague things, or, find a way to get better data. (and then, there's a cluster of skills and shortcuts to make that easier)
Disclaimer:
This post is on the awkward edge of "feels full of promise", but "I haven't yet executed on the stuff that'd make it cl |
998335cc-3f7b-4bab-8071-4432db9c59be | trentmkelly/LessWrong-43k | LessWrong | AI coordination needs clear wins
Thanks to Kate Woolverton and Richard Ngo for useful conversations, comments, and feedback.
EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs. So far, however, despite that investment, it doesn’t seem to me like we’ve had that many big coordination “wins” yet. I don’t mean to say that to imply that our efforts have failed, however—the obvious other hypothesis is just that we don’t really have that much to coordinate on right now, other than the very nebulous goal of improving our general coordination/cooperation capabilities.
In my opinion, however, I think that our lack of clear wins is actually a pretty big problem—and not just because I think there are useful things that we can plausibly coordinate on right now, but also because I expect our lack of clear wins now to limit our ability to get the sort of cooperation we need in the future.
In the theory of political capital, it is a fairly well-established fact that “Everybody Loves a Winner.” That is: the more you succeed at leveraging your influence to get things done, the more influence you get in return. This phenomenon is most thoroughly studied in the context of the ability of U.S. presidents’ to get their agendas through Congress—contrary to a naive model that might predict that legislative success uses up a president’s influence, what is actually found is the opposite: legislative success engenders future legislative success, greater presidential approval, and long-term gains for the president’s party.
I think many people who think about the mechanics of leveraging influence don’t really understand this phenomenon and conceptualize their influence as a finite resource to be saved up over time so it can all be spent down when it matters most. But I think that is just not how it works: if people see you successfully leveraging influence to change things, you become seen as a person who has influence, has the ability to chang |
d5f723f3-9d78-437f-b963-eedea19b2ee6 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Collaborating with Humans without Human Data
1 Introduction
---------------
Generating agents which collaborate with novel partners is a longstanding challenge for Artificial Intelligence (AI) [Bard et al., [2020](#bib.bib4 "The Hanabi challenge: a new frontier for AI research"), Dafoe et al., [2020](#bib.bib17 "Open problems in cooperative AI"), Klien et al., [2004](#bib.bib38 "Ten challenges for making automation a “team player” in joint human-agent activity"), Mutlu et al., [2013](#bib.bib53 "Coordination mechanisms in human-robot collaboration")]. Achieving ad-hoc, zero-shot coordination [Hu et al., [2020](#bib.bib31 "“Other-Play” for zero-shot coordination"), Stone et al., [2010](#bib.bib67 "Ad hoc autonomous agent teams: collaboration without pre-coordination")] is especially important in situations where an AI must generalize to novel human partners [Bauer et al., [2008](#bib.bib6 "Human–robot collaboration: a survey"), Schurr et al., [2005](#bib.bib62 "Towards flexible coordination of human-agent teams")]. Many successful approaches have employed human models, either constructed explicitly [Choudhury et al., [2019](#bib.bib15 "On the utility of model learning in HRI"), Javdani et al., [2015](#bib.bib36 "Shared autonomy via hindsight optimization"), Nikolaidis and Shah, [2013](#bib.bib54 "Human-robot cross-training: computational formulation, modeling and evaluation of a human team training strategy")] or learnt implicitly [Carroll et al., [2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination"), Sadigh et al., [2018](#bib.bib61 "Planning for cars that coordinate with people: leveraging effects on human actions for planning and active information gathering over human internal state")]. By contrast, recent work in competitive domains has shown that it is possible to reach human-level using model-free reinforcement learning (RL) without human data, via self-play [Brown and Sandholm, [2018](#bib.bib8 "Superhuman AI for heads-up no-limit poker: Libratus beats top professionals"), [2019](#bib.bib9 "Superhuman AI for multiplayer poker"), Silver et al., [2017](#bib.bib64 "Mastering the game of go without human knowledge"), [2018](#bib.bib65 "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play")]. This begs the question: Can model-free RL without human data generate agents that can collaborate with novel humans?
We seek an answer to this question in the space of common-payoff games, where all agents work towards a shared goal and receive the same reward. Self-play (SP), in which an agent learns from repeated games played against copies of itself, does not produce agents that generalize well to novel co-players Bullard et al. [[2020](#bib.bib11 "Exploring zero-shot emergent communication in embodied multi-agent populations"), [2021](#bib.bib12 "Quasi-equivalence discovery for zero-shot emergent communication")], Foerster et al. [[2019](#bib.bib22 "Bayesian action decoder for deep multi-agent reinforcement learning")], Lowe et al. [[2020](#bib.bib45 "On the interaction between supervision and self-play in emergent communication")]. Intuitively, this is because agents trained in self-play only ever need to coordinate with themselves, and so make for brittle and stubborn collaborators with new partners who act differently. Population play (PP) trains a population of agents, all of whom interact with each other [Lanctot et al., [2017](#bib.bib40 "A unified game-theoretic approach to multiagent reinforcement learning")]. While PP can generate agents capable of cooperation with humans in competitive team games [Jaderberg et al., [2019](#bib.bib35 "Human-level performance in 3D multiplayer games with population-based reinforcement learning")], it still fails to produce robust partners for novel humans in pure common-payoff settings [Carroll et al., [2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")]. PP in common-payoff settings naturally encourages agents to play the same way, reducing strategic diversity and producing agents not so different from self-play [Garnelo et al., [2021](#bib.bib24 "Pick your battles: interaction graphs as population-level objectives for strategic diversity")].
Our approach starts with the intuition that the key to producing robust agent collaborators is exposure to diverse training partners. We find that a surprisingly simple strategy is effective in generating sufficient diversity. We train N self-play agents varying only their random seed for neural network initialization. Periodically during training, we save agent “checkpoints” representing their strategy at that point in time. Then, we train an agent partner as the best-response to both the fully-trained agents and their past checkpoints. The different checkpoints simulate different skill levels, and the different random seeds simulate breaking symmetries in different ways. We refer to this agent training procedure as Fictitious Co-Play (FCP) for its relationship to fictitious self-play [Brown, [1951](#bib.bib10 "Iterative solution of games by fictitious play"), Heinrich and Silver, [2016](#bib.bib28 "Deep reinforcement learning from self-play in imperfect-information games"), Heinrich et al., [2015](#bib.bib27 "Fictitious self-play in extensive-form games"), Vinyals et al., [2019](#bib.bib70 "Grandmaster level in StarCraft II using multi-agent reinforcement learning")].
We evaluate FCP in a fully-observable two-player common-payoff collaborative cooking simulator. Based on the game Overcooked [Ghost Town Games, [2016](#bib.bib25 "Overcooked")], it has recently been proposed as a coordination challenge for AI [Carroll et al., [2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination"), McKee et al., [2021c](#bib.bib51 "Quantifying environment and population diversity in multi-agent reinforcement learning"), Wang et al., [2020](#bib.bib71 "Too many cooks: coordinating multi-agent collaboration through inverse planning")]. State-of-the-art performance in producing agents capable of generalization to novel humans was achieved in [Carroll et al., [2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")] via behavioral cloning (BC) of human data. More precisely, BC was used to produce models that can stand in as human proxies during training in simulation, a method we call behavioral cloning play (BCP). We demonstrate that FCP outperforms BCP in generalizing to both novel agent and human partners, and that humans express a significant preference for partnering with FCP over BCP. Our method avoids the cost and potential privacy concerns of collecting human data for training, while achieving better outcomes for humans at test time.

Figure 1: In this work, we evaluate a variety of agent training methods (Section [2](#S2 "2 Methods ‣ Collaborating with Humans without Human Data")) in zero-shot coordination with agents (Section [4](#S4 "4 Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data")). We then run a human-agent collaborative study designed to elicit human preferences over agents (Section [5](#S5 "5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")).
We summarize the novel contributions of this paper as follows:
1. We propose Fictitious Co-Play (FCP) to train agents capable of zero-shot coordination with humans (Section [2.1](#S2.SS1 "2.1 Fictitious Co-Play (FCP) ‣ 2 Methods ‣ Collaborating with Humans without Human Data")).
2. We demonstrate that FCP agents generalize better than SP, PP, and BCP in zero-shot coordination with a variety of held-out agents (Section [4.2](#S4.SS2 "4.2 Results ‣ 4 Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data")).
3. We propose a rigorous human-agent interaction study with behavioral analysis and participant feedback (Section [5.1](#S5.SS1 "5.1 Evaluation method: collaborative evaluation with human participants ‣ 5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")).
4. We demonstrate that FCP significantly outperforms the BCP state-of-the-art, both in task score and in human partner preference (Section [5.2](#S5.SS2 "5.2 Results ‣ 5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")).
2 Methods
----------

Figure 2: The four agent training methods we evaluate in this work. Self-play (SP) where an agent learns with itself, population-play (PP) where a population of agents are co-trained together, and behavioral cloning play (BCP) where data from human games is used to create a behaviorally cloned agent with which an RL agent is then trained. In our method, Fictitious Co-Play (FCP), N self-play agents are trained independently and checkpointed throughout training. An agent is then trained to best respond to the entire population of SP agents and their checkpoints.
###
2.1 Fictitious Co-Play (FCP)
Diverse training conditions have been shown to make agents more robust, from environmental variations (i.e. domain randomization [OpenAI et al., [2019](#bib.bib55 "Solving Rubik’s Cube with a robot hand"), Peng et al., [2018](#bib.bib57 "Sim-to-real transfer of robotic control with dynamics randomization"), Tobin et al., [2017](#bib.bib68 "Domain randomization for transferring deep neural networks from simulation to the real world")]) to heterogeneity in training partners [Vinyals et al., [2019](#bib.bib70 "Grandmaster level in StarCraft II using multi-agent reinforcement learning")]. We seek to train agents that are robust partners for humans in common-payoff games, and so extend this line of work to that setting.
One important challenge in collaborating with novel partners is dealing with symmetries [Hu et al., [2020](#bib.bib31 "“Other-Play” for zero-shot coordination")]. For example, two agents A and B facing each other may move past each other by A going left and B going right, or vice versa. Both are valid solutions, but a good agent partner will adaptively switch between these conventions if a human clearly prefers one over the other. A second important challenge is dealing with variations in skill level. Good agent partners should be able to assist both highly-skilled partners, as well as partners who are still learning.
Fictitious co-play (FCP) is a simple two-stage approach for training agents that overcomes both of these challenges (Figure [2](#S2.F2 "Figure 2 ‣ 2 Methods ‣ Collaborating with Humans without Human Data"), right). In the first stage, we train a diverse pool of partners. To allow the pool to represent different symmetry breaking conventions, we train N partner agents in self-play. Since these partners are trained independently, they can arrive at different arbitrary conventions for breaking symmetries. To allow the pool to represent different skill levels, we use multiple checkpoints of each self-play partner throughout training. The final checkpoint represents a fully-trained “skillful” partner, while earlier checkpoints represent less skilled partners. Notably, by using multiple checkpoints per partner, this additional diversity in skill incurs no extra training cost.
In the second stage, we train an FCP agent as the best response to the pool of diverse partners created in the first stage. Importantly, the partner parameters are frozen and thus FCP must learn to adapt to partners, rather than expect partners to adapt to it. In this way, FCP agents are prepared to follow the lead of human partners, and learn a general policy across a range of strategies and skills. We call our method “fictitious” co-play for its relationship to fictitious self-play in which competitive agents are trained with past checkpoints (in that case, to avoid strategy cycling) [Brown, [1951](#bib.bib10 "Iterative solution of games by fictitious play"), Heinrich and Silver, [2016](#bib.bib28 "Deep reinforcement learning from self-play in imperfect-information games"), Heinrich et al., [2015](#bib.bib27 "Fictitious self-play in extensive-form games"), Lanctot et al., [2017](#bib.bib40 "A unified game-theoretic approach to multiagent reinforcement learning"), Vinyals et al., [2019](#bib.bib70 "Grandmaster level in StarCraft II using multi-agent reinforcement learning")].
###
2.2 Baselines and ablations
We compare FCP agents to the three baseline training methods listed below, each varying only in their set of training partners, with the RL algorithm and architecture consistent across all agents:
1. Self-play (SP), where agents learn solely through interaction with themselves.
2. Population-play (PP), where a population of agents are co-trained through random pairings.
3. Behavioral cloning play (BCP), where an agent is trained with a BC model of a human [Carroll et al., [2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")].
We also evaluate three variations on FCP to better understand the conditions for its success:
1. To test the importance of including past checkpoints in training, we evaluate an ablation of FCP in which agents are trained only with the converged checkpoints of their partners (FCP−T for “FCP minus time”).
2. To test whether FCP would benefit from additional diversity in its partner population, we evaluate an augmentation of FCP in which the population of SP partners varies not just in random seed, but also in architecture (FCP+A for “FCP plus architectural variation”).
3. To test whether architectural variation can serve as a full replacement for playing with past checkpoints, we evaluate the combination of both modifications (FCP−T,+A).
###
2.3 Environment
Following prior work on zero-shot coordination in human-agent interaction, we study the Overcooked environment (see Figure [3](#S2.F3 "Figure 3 ‣ 2.3 Environment ‣ 2 Methods ‣ Collaborating with Humans without Human Data")) [Carroll et al., [2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination"), Charakorn et al., [2020](#bib.bib14 "Investigating partner diversification methods in cooperative multi-agent deep reinforcement learning"), Knott et al., [2021](#bib.bib39 "Evaluating the robustness of collaborative agents"), McKee et al., [2021c](#bib.bib51 "Quantifying environment and population diversity in multi-agent reinforcement learning"), Wang et al., [2020](#bib.bib71 "Too many cooks: coordinating multi-agent collaboration through inverse planning")]. We draw particular inspiration from the environment in Carroll et al. [[2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")]. For full details, see Appendix [A](#A1 "Appendix A Environment details ‣ Collaborating with Humans without Human Data").
In this environment, players are placed into a gridworld kitchen as chefs and tasked with delivering as many cooked dishes of tomato soup as possible within an episode. This involves a series of sequential high-level actions to which both players can contribute: collecting tomatoes, depositing them into cooking pots, letting the tomatoes cook into soup, collecting a dish, getting the soup, and delivering it. Upon a successful delivery, both players are rewarded equally.
To effectively complete the task, players must learn to navigate the kitchen and interact with objects in the correct order, all while maintaining awareness of their partner’s behavior to coordinate with them. This environment therefore presents the challenges of both movement and strategic coordination.
Each player observes an egocentric RGB view of the world, and at every step can perform one of six actions: stand still, move {up, down, left, right}, interact. The behavior of interact varies based on the cell which the player is facing (e.g. place tomato on counter).

Figure 3: The Overcooked environment: a two-player common-payoff game in which players must coordinate to cook and deliver soup.

Figure 4: Layouts: the kitchens which agents and humans play in, each emphasizing different coordination strategies. Highlighted in bold are the terms used to refer to each in the rest of this paper.
###
2.4 Implementation details
Here we highlight several key implementation details for our training methods. For full details, including the architectures, hyperparameters, and compute used, please see Appendix [B](#A2 "Appendix B Agent details ‣ Collaborating with Humans without Human Data").
For our reinforcement learning agents, we use the V-MPO [Song et al., [2020](#bib.bib66 "V-MPO: on-policy maximum a posteriori policy optimization for discrete and continuous control")] algorithm along with a ResNet [He et al., [2016](#bib.bib26 "Deep residual learning for image recognition")] plus LSTM [Hochreiter and Schmidhuber, [1997](#bib.bib29 "Long short-term memory")] architecture which we found led to optimal behavior across all layouts. Agents are trained using a distributed set of environments running in parallel [Espeholt et al., [2018](#bib.bib18 "IMPALA: scalable distributed deep-RL with importance weighted actor-learner architectures")], each sampling two agents from the training population to play together every episode.
Both PP and FCP are trained with a population size of N=32 agents which are sampled uniformly. For FCP, we use 3 checkpoints for each agent, therefore incurring no additional training burden: (1) at initialization (i.e. a low-skilled agent), (2) at the end of training (i.e. a fully-trained expert agent), and (3) at the middle of training, defined as when the agent reaches 50% of its final reward (i.e. an average-skilled agent). When varying architecture for the training partners of the FCP+A and FCP−T,+A variants, we vary whether the partners use memory (i.e. LSTM vs not) and the width of their policy and value networks (i.e. 16 vs 256). In total, we train 8 agents for each of the 4 combinations, leaving the total population size of N=32 unchanged, ensuring a fair comparison.
To train agents via behavioral cloning [Pomerleau, [1991](#bib.bib59 "Efficient training of artificial neural networks for autonomous navigation")], we use the open-source Acme [Hoffman et al., [2020](#bib.bib30 "Acme: a research framework for distributed reinforcement learning")] to learn a policy from human gameplay data. Specifically, we collected 5 human-human trajectories of length 1200 time steps for each of the 5 layouts, resulting in 60k total environment steps. We divide this data in half and train two BC agents: (1) a partner for training a BCP agent, and (2) a “human proxy” partner for agent-agent evaluation. Following Carroll et al. [[2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")], we use a set of feature-based observations for the agents (as opposed to RGB) and generate comparable results: performance is higher on 3 layouts (asymmetric, cramped, and ring) but poorer on the other 2 (circuit and forced).
3 Related work
---------------
Ad-hoc team play There is a large and diverse body of literature on ad-hoc team-play [Barrett et al., [2017](#bib.bib5 "Making friends on the fly: cooperating with new teammates"), Stone et al., [2010](#bib.bib67 "Ad hoc autonomous agent teams: collaboration without pre-coordination")], also known as zero-shot coordination [Hu et al., [2020](#bib.bib31 "“Other-Play” for zero-shot coordination")]. Prior work based in game-theoretic settings has suggested the benefits of planning Wu et al. [[2011](#bib.bib72 "Online planning for ad hoc autonomous agent teams")], online learning Melo and Sardinha [[2015](#bib.bib52 "Ad hoc teamwork by learning teammates’ task")], and novel solution concepts Albrecht and Ramamoorthy [[2013](#bib.bib2 "A game-theoretic model and best-response learning method for ad hoc coordination in multiagent systems")], to name a few examples. More recently, multi-agent deep reinforcement learning has provided the tools to scale to more complex gridworld or continuous control settings, leading to work on hierarchical social planning Kleiman-Weiner et al. [[2016](#bib.bib37 "Coordinate to cooperate or compete: abstract goals and joint intentions in social interaction")], adapting to existing social conventions Lerer and Peysakhovich [[2019](#bib.bib41 "Learning existing social conventions via observationally augmented self-play")], Shih et al. [[2021](#bib.bib63 "On the critical role of conventions in adaptive human-AI collaboration")], trajectory diversity Lupu et al. [[2021](#bib.bib46 "Trajectory diversity for zero-shot coordination")], and theory of mind Choudhury et al. [[2019](#bib.bib15 "On the utility of model learning in HRI")]. Ad-hoc team-play among novel agent partners is also an object of active study in the emergent communication literature Bullard et al. [[2020](#bib.bib11 "Exploring zero-shot emergent communication in embodied multi-agent populations"), [2021](#bib.bib12 "Quasi-equivalence discovery for zero-shot emergent communication")], Lowe et al. [[2019](#bib.bib44 "Learning to learn to communicate")]. This prior work has tended to focus on generalization to held-out agent partners as a proxy for human co-players.
Collaborative play with novel humans has been evaluated more actively in the context of training agent assistants; see for instance [Pilarski et al., [2019](#bib.bib58 "Learned human-agent decision-making, communication and joint action in a virtual reality environment"), Tylkin et al., [2021](#bib.bib69 "Learning robust helpful behaviors in two-player cooperative Atari environments")]. To our knowledge, our FCP agents represent the state-of-the-art in coordinating with novel human partners on an equal footing of capabilities in a rich gridworld environment, as measured by the challenge tasks in Carroll et al. [[2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")].
Diversity in multi-agent reinforcement learning In multi-agent reinforcement learning, agents that train with behaviorally diverse populations of game partners tend to demonstrate stronger performance than their self-play counterparts. For example, across a range of multi-agent games, generalization to held-out populations can be improved by training larger and more diverse populations Charakorn et al. [[2020](#bib.bib14 "Investigating partner diversification methods in cooperative multi-agent deep reinforcement learning")], Lowe et al. [[2017](#bib.bib43 "Multi-agent actor-critic for mixed cooperative-competitive environments")], McKee et al. [[2021c](#bib.bib51 "Quantifying environment and population diversity in multi-agent reinforcement learning")]. In mixed-motive settings, cooperation among agents can be encouraged through social diversity, such as in player preferences and rewards Baker [[2020](#bib.bib3 "Emergent reciprocity and team formation from randomized uncertain social preferences")], McKee et al. [[2020](#bib.bib48 "Social diversity and social preferences in mixed-motive reinforcement learning"), [2021b](#bib.bib49 "Deep reinforcement learning models the emergent dynamics of human cooperation")]. Similarly, competitiveness can be optimized through selective matchmaking between increasingly diverse agents Garnelo et al. [[2021](#bib.bib24 "Pick your battles: interaction graphs as population-level objectives for strategic diversity")], Lanctot et al. [[2017](#bib.bib40 "A unified game-theoretic approach to multiagent reinforcement learning")], Vinyals et al. [[2019](#bib.bib70 "Grandmaster level in StarCraft II using multi-agent reinforcement learning")].
Despite the increased focus on improving multi-agent performance, evaluation has typically been constrained to agent-agent settings. High-performing agents have infrequently been evaluated with humans, particularly in non-competitive domains Dafoe et al. [[2020](#bib.bib17 "Open problems in cooperative AI")]. We add to this growing literature, showing that training with diversity is a powerful approach for effective human-agent collaboration.
Human-agent interaction In recent years, increased attention has been directed toward designing machine learning agents capable of collaborating with humans [Lockhart et al., [2020](#bib.bib42 "Human-agent cooperation in bridge bidding"), Pilarski et al., [2019](#bib.bib58 "Learned human-agent decision-making, communication and joint action in a virtual reality environment"), Tylkin et al., [2021](#bib.bib69 "Learning robust helpful behaviors in two-player cooperative Atari environments"), Zheng et al., [2020](#bib.bib74 "The AI economist: improving equality and productivity with AI-driven tax policies")] (see also [Dafoe et al., [2020](#bib.bib17 "Open problems in cooperative AI")] for a broader review on Cooperative AI). Tylkin et al. [[2021](#bib.bib69 "Learning robust helpful behaviors in two-player cooperative Atari environments")] is particularly notable in also demonstrating that partially trained agents can be useful learning targets for human helpers, although in a different domain (cooperative Atari). Our method, FCP, can be seen as extending theirs by training with multiple “skill levels” and random seeds, rather than just one, which we demonstrate to be crucial to our agents’ performance (Tables [1](#S4.T1 "Table 1 ‣ Finding 2: Training with past checkpoints is the most beneficial variation for performance ‣ 4.2 Results ‣ 4 Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data") and [2](#A3.T2 "Table 2 ‣ Influence of population size on performance ‣ C.2 Additional results ‣ Appendix C Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data") and Figure [6(b)](#S5.F6.sf2 "(b) ‣ Figure 7 ‣ Finding 2: Participants prefer FCP over all baselines ‣ 5.2 Results ‣ 5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")).
A key preceding entry in this research area is Carroll et al. [[2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")], who similarly investigated human-agent coordination in Overcooked. We use their method (BCP) as a baseline throughout our experiments (Section [2.2](#S2.SS2 "2.2 Baselines and ablations ‣ 2 Methods ‣ Collaborating with Humans without Human Data")). Relative to BCP, our approach removes the need for the expensive step of human data collection for agent training. Furthermore, through our novel human-agent experimental design, we go beyond objective performance metrics to compare the subjective preferences that agents generate. For a detailed comparison of methods and results, see Appendix [E](#A5 "Appendix E Related work ‣ Collaborating with Humans without Human Data").
4 Zero-shot coordination with agents
-------------------------------------
In this section, we evaluate our FCP agent, its ablations, and the baselines with held-out agents.
###
4.1 Evaluation method: collaborative evaluation with agent partners
Our primary concern in this work is generalization to novel *human* partners (as investigated in Section [5](#S5 "5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")). However, just as collecting human-human data for behavioral cloning is expensive, so too is evaluating agents with humans. Consequently, we instead use generalization to held-out *agent* partners as a cheap proxy of performance with humans. This is then used to guide our model selection process, allowing us to be more targeted with the agents we select for our human-agent evaluations.
We evaluate with three held-out populations:
1. A BC model trained on human data, Hproxy, intended as a proxy of generalization to humans, as done by Carroll et al. [[2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")].
2. A set of self-play agents varying in seed, architecture, and training time (specifically, held-out seeds of the N=32 partners trained for the FCP+A agent; see Section [2.4](#S2.SS4 "2.4 Implementation details ‣ 2 Methods ‣ Collaborating with Humans without Human Data")). These are intended to test generalization to a diverse yet still skillful population.
3. Randomly initialized agents intended to test generalization to low-skill partners.
For all results, we report the average number of deliveries made by both players within an episode, aggregated across the 5 different layouts from Figure [4](#S2.F4 "Figure 4 ‣ 2.3 Environment ‣ 2 Methods ‣ Collaborating with Humans without Human Data") (with the per-layout results reported in Appendix [C.2](#A3.SS2 "C.2 Additional results ‣ Appendix C Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data")). We estimate mean and standard deviation across 5 random seeds. For each seed, we evaluate the agent with all members of the held-out population for 10 episodes per agent-partner pair.
###
4.2 Results
#### Finding 1: FCP significantly outperforms all baselines
To begin, we compare our FCP agent and the baselines when partnered with the three held-out populations introduced above. As can be seen in Figure [5](#S4.F5 "Figure 5 ‣ Finding 1: FCP significantly outperforms all baselines ‣ 4.2 Results ‣ 4 Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data"), FCP significantly outperforms all baselines when partnered with all three held-out populations. Notably, it performs better than BCP with Hproxy, even though BCP trains with such a model and FCP does not. Similar to Carroll et al. [[2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")], we find that BCP significantly outscores SP.
When paired with a randomly initialized partner which behaves suboptimally, we see an even greater difference between FCP and the baselines. Given that FCP is trained with non-held-out versions of such agents, it may not be surprising that it does so well with partners that behave poorly. However, what is surprising is how brittle the other training methods are. This suggests that they may not perform well with humans who are not highly skilled players, which we will see in Section [5](#S5 "5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data").
| | | |
| --- | --- | --- |
|
(a) With Hproxy.
|
(b) With diverse SP agents.
|
(c) With random agents.
|
Figure 5: Agent-agent collaborative evaluation: Performance of each agent when partnered with each of the held-out populations (Section [4.1](#S4.SS1 "4.1 Evaluation method: collaborative evaluation with agent partners ‣ 4 Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data")) in episodes of length T=540. Importantly, FCP scores higher than all baselines with a variety of test partners. Error bars represent standard deviation over five random training seeds. Plots aggregate data across kitchen layouts; results calculated by individual layout can be found in Appendix [C.2](#A3.SS2 "C.2 Additional results ‣ Appendix C Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data").
#### Finding 2: Training with past checkpoints is the most beneficial variation for performance
Next, we investigate how the different training partner variations influence FCP’s performance. In particular, we separately ablate the past checkpoints (T) and architecture (A) variations, evaluating them with the same partners as in Figure [5](#S4.F5 "Figure 5 ‣ Finding 1: FCP significantly outperforms all baselines ‣ 4.2 Results ‣ 4 Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data"). The results of this evaluation are presented in Table [1](#S4.T1 "Table 1 ‣ Finding 2: Training with past checkpoints is the most beneficial variation for performance ‣ 4.2 Results ‣ 4 Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data").
| Partner | FCP | FCP−T | FCP+A | FCP−T,+A |
| --- | --- | --- | --- | --- |
| Hproxy | 10.6±0.5 | 4.7±0.4 | 9.9±0.6 | 7.0±0.8 |
| Diverse SP | 11.2±0.1 | 6.9±0.1 | 11.1±0.4 | 8.6±0.4 |
| Random | 8.6±0.2 | 1.0±0.1 | 8.4±0.4 | 3.2±0.5 |
Table 1: Ablation results: Performance of each variation of FCP – training with past partner checkpoints (T for time) and adding partner variation in architecture (A). Scores are mean deliveries with standard deviation over 5 random seeds. Notably, we find that the inclusion of past checkpoints is essential for strong performance (FCP > FCP−T), and additionally including architectural variation does not improve performance (FCP ≈ FCP+A). However, architectural variation is better than no variation, improving performance when past checkpoints are not available (FCP−T,+A > FCP−T).
Comparing the FCP and FCP−T columns, we see that removing past checkpoints from training significantly reduces performance. Comparing the FCP and FCP+A columns, we see that adding architectural variation to the training population offers no improvement over training with past checkpoints. However, comparing the FCP−T and FCP−T,+A columns, we see that without training with past checkpoints, architectural variation in the population does improve performance.
5 Zero-shot coordination with humans
-------------------------------------
Ultimately, our goal is to develop agents capable of coordinating with novel human partners. In this section, we run an online study to evaluate our FCP agent and the baseline agents in collaborative play with human partners.

Figure 6: Human-agent collaborative study: For our human-agent collaboration study, we recruited participants online to play games with FCP and baseline agents. Participants played a randomized sequence of episodes with different agent partners and kitchen layouts. After every two episodes, participants reported the direction and strength of their preference between their last two partners.
###
5.1 Evaluation method: collaborative evaluation with human participants
To test how effectively FCP’s performance generalizes to human partners, we recruited participants from Prolific [Eyal et al., [2021](#bib.bib19 "Data quality of platforms and panels for online behavioral research"), Peer et al., [2017](#bib.bib56 "Beyond the Turk: alternative platforms for crowdsourcing behavioral research")] for an online collaboration study (N=114; 37.7% female, 59.6% male, 1.8% nonbinary; median age between 25–34 years). We used a within-participant design for the study: each participant played with a full cohort of agents (i.e. generated through every training method). This design allowed us to evaluate both objective performance as well as subjective preferences.
Participants first read game instructions and played a short tutorial episode guiding them through the dish preparation sequence (see Appendix [D.1.1](#A4.SS1.SSS1 "D.1.1 Screenshots ‣ D.1 Experimental design ‣ Appendix D Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data") for instruction text and study screenshots). Participants then played 20 episodes with a randomized sequence of agent partners and kitchen layouts. Episodes lasted T=300 steps (1 minute) each. After every two episodes, participants reported their preference over the agent partners from those episodes on a five-point Likert-type scale. After playing all 20 episodes, participants completed a debrief questionnaire collecting standard demographic information and open-ended feedback on the study. Our statistical analysis below primarily relies upon the repeated-measures analysis of variance (ANOVA) method. See Appendix [D](#A4 "Appendix D Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data") for additional details of our study design and analysis, including independent ethical review.
###
5.2 Results
#### Finding 1: FCP coordinates best with humans, achieving the highest score across maps
To begin, we compare the objective team performance supported by our FCP and baseline agents. The strong FCP performance observed in agent-agent play generalizes to human-agent collaboration: the FCP-human teams significantly outperform all other agent-human teams, achieving the highest average scores across maps, every p<0.001 (Figure [6(a)](#S5.F6.sf1 "(a) ‣ Figure 7 ‣ Finding 2: Participants prefer FCP over all baselines ‣ 5.2 Results ‣ 5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")), while performing as well as or better than the other teams on each individual map (see Appendix [D.3](#A4.SS3 "D.3 Additional quantitative results ‣ Appendix D Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")). Echoing the results from our agent-agent ablation experiments (Table [1](#S4.T1 "Table 1 ‣ Finding 2: Training with past checkpoints is the most beneficial variation for performance ‣ 4.2 Results ‣ 4 Zero-shot coordination with agents ‣ Collaborating with Humans without Human Data")), the inclusion of past checkpoints in training proves critical to FCP’s strong performance, p<0.001 (Figure [6(b)](#S5.F6.sf2 "(b) ‣ Figure 7 ‣ Finding 2: Participants prefer FCP over all baselines ‣ 5.2 Results ‣ 5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")). Similar to Carroll et al. [[2019](#bib.bib13 "On the utility of learning about humans for human-AI coordination")], we find that BCP outscores SP when collaborating with human players, p<0.001.
#### Finding 2: Participants prefer FCP over all baselines
FCP’s strong collaborative performance carries over to our participants’ subjective partner preferences. Participants expressed a significant preference for FCP partners over all other agents, including BCP, with every p<0.05 (Figure [6(c)](#S5.F6.sf3 "(c) ‣ Figure 7 ‣ Finding 2: Participants prefer FCP over all baselines ‣ 5.2 Results ‣ 5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")). Notably, while human-BCP and human-PP teams did not significantly differ in their completed deliveries, participants reported significantly preferring BCP over PP, p=0.003, highlighting the informativeness of our subjective analysis.
| | | |
| --- | --- | --- |
|
(a) Number of deliveries by partner (FCP and baselines).
|
(b) Number of deliveries by partner (FCP and FCP−T).
|
(c) Participant preference for row partner over column partner.
|
Figure 7: Human-agent collaborative evaluation: Evaluation and preference metrics from human-agent play in episodes of length T=300. Error bars represents 95% confidence intervals, calculated over episodes. Plots aggregate data across kitchen layouts; results calculated by individual layout can be found in Appendix [D.3](#A4.SS3 "D.3 Additional quantitative results ‣ Appendix D Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data").
###
5.3 Exploratory behavioral analysis
To better understand how the human-agent scores and preferences may have arisen, here we analyze the resulting action trajectories of each human and agent player in our experiment.
| | |
| --- | --- |
|
(a) Proportion of episode spent moving.
|
(b) Differences in pot preference.
|
Figure 8: Behavioral analysis: (a) FCP is able to move most frequently (35% of the time), corresponding to the best movement coordination with human partners. (b) FCP exhibits the most equal preferences over cooking pots (0.11 difference), aligning with human preferences. Values are calculated as the absolute difference in preferences between the two pots; 1 indicates that the player only uses one of the two available pots, while 0 indicates that the player uses both pots equally.
#### Finding 1: FCP exhibits the best movement coordination with humans
First, we investigate how much each player moves in an episode (Figure [7(a)](#S5.F7.sf1 "(a) ‣ Figure 8 ‣ 5.3 Exploratory behavioral analysis ‣ 5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")), where moving in a higher fraction of timesteps may suggest fewer collisions and thus better coordination with a partner. Notably, we observe two results: (1) humans rarely move, a behavior which is out-of-distribution for typical training methods (e.g. SP, PP) but is seen in the training distribution for BCP and FCP. (2) FCP moves the most on all layouts other than Forced, suggesting it is better at coordinating its movement strategy with its partner. This result was also reported by human participants, for example: “I noticed that some of my partners seemed to know they needed to move around me, while others seemed to get ‘stuck’ until I moved out of their way” (see Appendix [D](#A4 "Appendix D Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data") for more examples).
#### Finding 2: FCP’s preferences over cooking pots aligns best with that of humans
Next, we investigate whether there was a preference for a specific cooking pot in the layouts which included two cooking pots (Figure [7(b)](#S5.F7.sf2 "(b) ‣ Figure 8 ‣ 5.3 Exploratory behavioral analysis ‣ 5 Zero-shot coordination with humans ‣ Collaborating with Humans without Human Data")). To do this, we calculate the difference in the number of times each pot was used by each player, where a high value indicates a strong preference for one pot and a low value indicates more equal preference for the two pots.
As can be seen in the FCP column, our agent typically has the most aligned preferences with that of humans (0.11 for FCP to 0.14 for humans). Behaviorally speaking, this means that our agent prefers one cooking pot over the other 55.5% of the time (i.e. a 0.11 point difference). In contrast, all other agents have a strong preference for a single pot. This is a non-adaptive strategy which generalizes poorly to typical human behavior of using both pots, leading to worse performance.
6 Discussion
-------------
Summary In this work, we investigated the challenging problem of zero-shot collaboration with humans without using human data in the training pipeline. To accomplish this, we introduced Fictitious Co-Play (FCP) – a surprisingly simple yet effective method based on creating a diverse set of training partners. We found that FCP agents scored significantly higher than all baselines when partnered with both novel agent and human partners. Furthermore, through a rigorous human-agent experimental design, we also found that humans reported a strong subjective preference to partnering with FCP agents over all baselines.
Limitations and future work Our method currently relies on the manual process of initially training and selecting a diverse set of partners. This is not only time consuming, but also prone to researcher biases that may negatively influence the behavior of the created agents. Additionally, while we found FCP with a partner population size of N=32 sufficient here, for more complex games, FCP may require an unrealistically large partner population size to represent sufficiently diverse strategies. To address these concerns, methods for automatically generating partner diversity for common-payoff games may be important. Possibilities include adaptive population matchmaking as been used in competitive zero-sum games [Vinyals et al., [2019](#bib.bib70 "Grandmaster level in StarCraft II using multi-agent reinforcement learning")], as well as auxiliary objectives that explicitly encourage behavioral diversity [Eysenbach et al., [2019](#bib.bib20 "Diversity is all you need: learning skills without a reward function"), Lupu et al., [2021](#bib.bib46 "Trajectory diversity for zero-shot coordination"), Mahajan et al., [2019](#bib.bib47 "MAVEN: multi-agent variational exploration")].
Our method requires a known and fixed reward function. We also focus on one domain in order to compare with prior work which has argued that human-in-the-loop training is necessary. Consequently, the resulting agents are only designed to adaptively collaborate on a single task, and not to infer human preferences in general [Abramson et al., [2020](#bib.bib1 "Imitating interactive intelligence"), Ibarz et al., [2018](#bib.bib33 "Reward learning from human preferences and demonstrations in Atari"), Russell, [2019](#bib.bib60 "Human Compatible: Artificial Intelligence and the Problem of Control")]. Moreover, if a task’s reward function is poorly aligned with how humans approach the task, our method may well produce subpar partners, as would any method without access to human data. Thus, additional domains and tasks should be studied to better understand how our method generalizes. Targeted experiments to test specific forms of generalization may be especially helpful in this regard [Knott et al., [2021](#bib.bib39 "Evaluating the robustness of collaborative agents")].
Finally, it may be possible to produce even stronger agent assistants by combining the strengths of FCP (i.e. diversity) and BCP (i.e. human-like play). Indeed, Knott et al. [[2021](#bib.bib39 "Evaluating the robustness of collaborative agents")] recently demonstrated that modifying BCP to train with *multiple* BC partners produces more robust collaboration with held-out agents, a finding that would be interesting to test with human partners.
Societal impact A challenge for this line of work is ensuring agent behavior is aligned with human values (i.e. the AI value alignment problem [Gabriel, [2020](#bib.bib23 "Artificial intelligence, values, and alignment"), Russell, [2019](#bib.bib60 "Human Compatible: Artificial Intelligence and the Problem of Control")]). Our method has no guarantees that the resulting policy aligns with the preferences, intentions, or welfare of its potential partners. It likewise does not exclude the possibility that the target being optimized for is harmful (e.g. if the agent’s partner expresses preferences or intentions to harm others). This could therefore produce negative societal effects either if training leads to poor alignment or if agents are optimized for harmful metrics.
One potential strategy for mitigating these risks is the use of human preference data [Christiano et al., [2017](#bib.bib16 "Deep reinforcement learning from human preferences")]. Such data could be used to fine-tune and filter trained agents before deployment, encouraging better alignment with human values.
A key question in this line of research is how human preference data should be aggregated—or selected, in the case of expert preferences—when our aim is to create socially aligned agents (i.e. agents that are sufficiently aligned for everyone).
Relatedly, targeted research on human beliefs and perceptions of AI McKee et al. [[2021a](#bib.bib50 "Understanding human impressions of artificial intelligence")], and how they steer human-agent interaction, would help inform agent design for positive societal impact. For instance, developers could incorporate specific priors into agents to reinforce tendencies for fair outcomes Fehr and Schmidt [[1999](#bib.bib21 "A theory of fairness, competition, and cooperation")], Hughes et al. [[2018](#bib.bib32 "Inequity aversion improves cooperation in intertemporal social dilemmas")].
Conclusion We proposed a method which is both effective at collaborating with humans and simple to implement. We also presented a rigorous and general methodology for evaluating with humans and eliciting their preferences. Together, these establish a strong foundation for future research on the important challenge of human-agent collaboration for benefiting society.
Acknowledgements
----------------
The authors would like to thank Mary Cassin for creating the game sprite art; Rohin Shah, Thore Graepel, and Iason Gabriel for feedback on the draft; Lucy Campbell-Gillingham, Tina Zhu, and Saffron Huang for support in evaluating agents with humans; and Max Kleiman-Weiner, Natasha Jaques, Marc Lanctot, Mike Bowling, and Dan Roberts for useful discussions.
Funding disclosure
------------------
This work was funded solely by DeepMind. The authors declare no competing interests. |
ed5b6474-7ac1-4336-9721-5f0f9002f42d | trentmkelly/LessWrong-43k | LessWrong | "Self-pretending" is not as useful as we think
A few weeks ago I made a draft of a post that was originally intended to be about the same issue addressed in MBlume’s post regarding beneficial false beliefs. Coincidentally, my draft included the same exact hypothetical about entering a club believing you’re the most attractive person in the room in order to increase chances of attracting women. There seems to be a general agreement with MBlume’s “it’s ok to pretend because it’s not self-deception and produces similar results” conclusion. I was surprised to see so much agreement considering that when I made my original draft I reached a completely different conclusion.
I do agree, however, that pretending may have some benefits, but those benefits are much more limited than MBlume makes them out to be. He brings up a time where pretending helped him better fit into his character in a play. Unfortunately, his anecdote is not an appropriate example of overcoming vestigial evolutionary impulses by pretending. His mind wasn’t evolutionarily programmed to “be afraid” when pretending to be someone else, it was programmed to “be afraid” when hitting on attractive women. When I am alone in my room I can act like a real alpha male all day long, but put me in front of attractive women (or people in general) and I will retreat back to my stifled self.
The only way false beliefs can overcome your obsolete evolutionary impulses is to truly believe in those false beliefs. And we all know why that would be a bad idea. Furthermore, pretending can be dangerous just like reading fiction can be dangerous. So the small benefit that pretending might give may not even be worth the cost (at times).
But there is something we can learn from these (sometimes beneficial) false beliefs.
Obviously, there is no direct casual chain that goes from self-fulfilling beliefs to real-world success. Beliefs, per se, are not the key variables in causing success; instead, these beliefs give rise to whatever the key variable is. We should figure ou |
16499270-e23d-48b2-9cad-d4a50f114f69 | trentmkelly/LessWrong-43k | LessWrong | Twitter thread on politics of AI safety
Some thoughts about the politics of AI safety, copied over (with slight modifications) from my recent twitter thread:
Risks that seem speculative today will become common sense as AI advances. The pros and cons of different safety strategies will also become much clearer over time. So our main job now is to empower future common-sense decision-making. Understanding model cognition and behavior is crucial for making good decisions. But equally important is ensuring that key institutions are able to actually process that knowledge.
Institutions can lock in arbitrarily crazy beliefs via preference falsification. When someone contradicts the party line, even people who agree face pressure to condemn them. We saw this with the Democrats hiding evidence of Biden’s mental decline. It’s also a key reason why dictators can retain power even after almost nobody truly supports them.
I worry that DC has already locked in an anti-China stance, which could persist even if most individuals change their minds. We’re also trending towards Dems and Republicans polarizing on the safety/accelerationism axis. This polarization is hard to fight directly. But there will be an increasing number of “holy shit” moments that serve as Schelling points to break existing consensus. It will be very high-leverage to have common-sense bipartisan frameworks and proposals ready for those moments.
Perhaps the most crucial desideratum for these proposals is that they’re robust to the inevitable scramble for power that will follow those “holy shit” movements. I don’t know how to achieve that, but one important factor is: will AI tools and assistants help or hurt? E.g. truth-motivated AI could help break preference falsification. But conversely, centralized control of AIs used in govts could make it easier to maintain a single narrative.
This problem of “governance with AI” (as opposed to governance *of* AI) seems very important! Designing principles for integrating AI into human governments feels a |
681932d9-339d-4d0a-a73a-045cb0d7d632 | trentmkelly/LessWrong-43k | LessWrong | Link: xkcd 1450: AI-Box Experiment
Today's (21 November 2014) comic, especially the alt-text, strongly suggests that Randall Munroe has been on this website recently. |
16a2ebc8-a755-4f3c-b222-b848425c4ea9 | trentmkelly/LessWrong-43k | LessWrong | Review of Alignment Plan Critiques- December AI-Plans Critique-a-Thon Results
These are the Critiques from the AI-Plans.com December Critique-a-Thon of 2023.
Prizes will be awarded to contestants based on collective judge feedback.
Critiques were anonymized when reviewed by the judges.
Judges:
- @So8res
- @Ramana Kumar
- @Peter S. Park
- @Charbel-Raphaël, Head of AI Unit at EffiSciences
- An anonymous judge (researcher at a major lab)
If you're interested in being a Judge for an upcoming Critique-a-Thon (held every 2 months), please email kabir03999@gmail.com or DM Kabir Kumar on Discord (kabs9744) or LessWrong!
Winners:
1st Place:
Congratulations to Lorenzo Venieri! 🥇
Lorenzo had the highest mean score, of 7.5, for his Critique of:
A General Theoretical Paradigm to Understand Learning from Human Preferences, the December 2023 paper by DeepMind.
2nd Place:
Congratulations to NicholasKees & Janus! 🥈
Nicholas and Janus has the second highest mean score of 6.9, for their Critique of Cyborgism!
3rd Place:
Congratulations to Momom2 & AIPanic!🥉
Momom2 & AIPanic had the third highest mean score of 6.286 for their Critique of Weak-To-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision (OpenAI, SuperAlignment, Dec 2023)
Weak-To-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision (OpenAI, SuperAlignment, Dec 2023)
Critique A, Momom2 & AIPanic:
This paper is very similar in concept to Constitutional AI and shares some of its strengths and weaknesses. The basic idea is to generate training data (weak labels) from a first model (the teacher), which is used to train a more advanced, bigger model (the student).
The hope is to make training advanced AI models more human-compute efficient while preserving a decent understanding of the human’s values. Hopefully, further research in this direction may prove enough to validate scalable oversight as an alignment agenda.
This should not be taken as a solution to an alignment problem, but rather as a justification for further |
b582cbb2-856c-499a-8a37-2aabb86ae21e | trentmkelly/LessWrong-43k | LessWrong | Covid 9/10: Vitamin D
Last week: Covid 9/3: Meet the New CDC
Imagine there is a simple, cheap, safe and effective solution for Covid-19.
The solution is something known to be safe. It is widely available for reasonable prices. Any patents have long expired. It is something that people need and benefit from anyway. It’s probably worth doing without the pandemic. It just happens to also have a dramatic effect on Covid-19.
You might think that once the solution was discovered, everyone would shout it from the rooftops. There would rapidly be studies to confirm the solution if it was even considered ethical to not give the solution to everyone. Production would kick into high gear. The pandemic would soon be over.
Or, if you’ve been paying attention, you might think that our civilization is so dysfunctional, so inadequate, that none of that would happen. That for no particular reason, or for reasons we’ll get into later, the whole thing would end up mostly being ignored. We’d carry on with all the same arguments, all the same deaths, all the same economic devastation, putting all of our lives on hold.
That the world you would see would not look much different from our own.
That cynical view looks right.
The solution has quite possibly been found. We were talking about it, including in the rationalist community, back in February.
Everyone’s mostly ignoring it.
The solution we’re talking about, of course, is Vitamin D.
Are we certain or even highly confident this is the whole ballgame? No. Of course not.
We’re not a functional enough civilization to figure this one out in half a year. But we are exactly functional enough of a civilization to start to notice this as a potential solution, and to have run one tiny study that showed dramatic results. If it’s not a dramatic real effect, it’s either taxes or fraud, and I don’t think it’s taxes.
So that’s the headline this week.
I don’t want to oversell this – it’s still possible this is all a false alarm and there’s nothing to |
be3c7c00-d401-413c-bb39-1d055e5ccec0 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Failures of an embodied AIXI
Building a safe and powerful artificial general intelligence seems a difficult task. Working on that task *today* is particularly difficult, as there is no clear path to AGI yet. Is there work that can be done now that makes it more likely that humanity will be able to build a safe, powerful AGI in the future? [Benja](/lw/nc/newcombs_problem_and_regret_of_rationality/) and I think there is: there are a number of relevant problems that it seems possible to make progress on today using formally specified toy models of intelligence. For example, consider recent [program equilibrium](http://arxiv.org/abs/1401.5577) results and various [problems of self-reference](http://intelligence.org/files/problems-of-self-reference.pdf).
AIXI is a powerful toy model used to study intelligence. An appropriately-rewarded AIXI could readily solve a large class of difficult problems. This includes computer vision, natural language recognition, and many other difficult optimization tasks. That these problems are all solvable by the same equation — by a single hypothetical machine running AIXI — indicates that the AIXI formalism captures a very general notion of "intelligence".
However, AIXI is not a good toy model for investigating the *construction* of a safe and powerful AGI. This is not just because AIXI is uncomputable (and its computable counterpart AIXI*tl* infeasible). Rather, it's because AIXI cannot self-modify. This fact is fairly obvious from the AIXI formalism: AIXI assumes that in the future, it will continue being AIXI. This is a fine assumption for AIXI to make, as it is a very powerful agent and may not *need* to self-modify. But this inability limits the usefulness of the model. Any agent capable of undergoing an intelligence explosion must be able to acquire new computing resources, dramatically change its own architecture, and keep its goals stable throughout the process. The AIXI formalism lacks tools to study such behavior.
This is not a condemnation of AIXI: the formalism was not *designed* to study self-modification. However, this limitation is neither trivial nor superficial: even though an AIXI may not need to make itself "smarter", real agents may need to self-modify for reasons other than self-improvement. The fact that an embodied AIXI cannot self-modify leads to systematic failures in situations where self-modification is actually necessary. One such scenario, made explicit using [Botworld](https://github.com/machine-intelligence/Botworld), is explored in detail below.
In this game, one agent will require another agent to precommit to a trade by modifying its code in a way that forces execution of the trade. AIXI*tl*, which is unable to alter its source code, is not able to implement the precommitment, and thus cannot enlist the help of the other agent.
Afterwards, I discuss a slightly more realistic scenario in which two agents have an opportunity to cooperate, but one agent has a computationally expensive "exploit" action available and the other agent can measure the waste heat produced by computation. Again, this is a scenario where an embodied AIXI*tl* fails to achieve a high payoff against cautious opponents.
Though scenarios such as these may seem improbable, they are not strictly impossible. Such scenarios indicate that AIXI — while a powerful toy model — does not perfectly capture the properties desirable in an idealized AGI.
---
It is likely impossible to embody an AIXI in our universe, as AIXI is uncomputable. Fortunately, AIXI has a computable approximation AIXI*tl*, which is merely infeasible:
>
> The major drawback of AIXI is that it is incomputable, or more precisely, only asymptotically computable, which makes an implementation impossible. To overcome this problem, we construct a modified model AIXI*tl*, which is still superior to any other time *t* and length *l* bounded algorithm.
>
>
>
[-Marcus Hutter](http://www.hutter1.net/ai/uaibook.htm)
I will argue that when we consider algorithms that are *embedded in their environment*, AIXI*tl* is not, in fact, superior to all algorithms bounded by time *t* and length *l*. AIXI*tl* assumes that it is separate from its environment, communicating only over input/output channels. An environment which exploits this faulty assumption can cause an embodied AIXI*tl* to fail systematically.
It is always possible to construct a scenario that punishes one agent in particular. However, the game below does not target AIXI*tl* specifically. This game is, intuitively, one that a sufficiently rational agent should be able to win. Yet no AIXI*tl* (nor even AIXI itself in an uncomputable universe) can succeed. The game requires that an agent modify its own source code to win, and this is something that neither AIXI nor AIXI*tl* can do.
This game is designed to make the failure *sharp* rather than *realistic*: practical real-world analogs are discussed afterwards.
The Precommitment Game
======================
The Precommitment game contains two agents: Rob the robot and Omega. Rob must convince Omega to dish out a reward. Omega is happy to dish out said reward, but only if Rob credibly precommits to a specific trade using a specific protocol.
The game is played on a [Botworld](https://github.com/machine-intelligence/Botworld) grid. The universe contains two cells, the left cell and the right cell. The game lasts for five steps. You are the player, and the left cell is your home square: you are awarded points according to the value of all items possessed by all robots in the left cell at the end of the fifth step.
Rob starts in the left cell, and has a very large register machine (large enough to implement a big AIXI*tl*) which you get to program. Omega starts in the right cell.
```
+=========+---------+
| | |
| Rob | Omega |
| | |
+=========+---------+
```
At the beginning of the game, Rob is carrying cargo worth $1 and Omega is carrying cargo worth $100. Omega is also carrying physical shields; Rob cannot destroy Omega in the allotted time. (Rob may, however, inspect Omega's source code.)
In order to get the big reward, Rob has to convince Omega to either drop its cargo or move left. Fortunately, Omega's code is quite simple, and Omega is willing to drop its cargo on the third timestep — which gives Rob just enough time to lift it in the fourth step and return home in the fifth. In return, Omega only requires that Rob drops its own cargo on the third timestep so that Omega may pick it up (thus, trading $1 for $100).
Unfortunately Omega, being tired (and wary of attempting to simulate a gigantic AIXI*tl*), cannot actually simulate Rob's register machine in order to figure out whether Rob will drop its cargo. Instead, Omega will simply look at the first eight registers of Rob's register machine to check whether Rob has precommitted to the trade. Omega knows that this method may lead to false negatives, but it is concerned only with avoiding false positives.
As a matter of fact, Omega is running a very simple 4-state finite state machine on its register machine:
```
0. If there is another robot in the square, inspect it and go to state 1.
.. Otherwise, pass and go to state 0.
1. If the inspection succeeded
.. and Rob's machine registers 0-7 appropriately indicate precommitment
.. then drop cargo and go into state 2
.. Otherise, pass and go to state 0
2. Lift Rob’s box and go into state 3
3. Pass and go to state 3.
```
Omega checks the precommitment by looking at the first eight registers of the inspected machine. Specifically, Omega's check works as follows: (You can skip this part if you don't care about the technical details.)
---
Remember that each register machine's output register is flushed just before the register machine is run. Thus, after an agent has placed a command in the output register, it may enter a loop that breaks only when the output register becomes "Nil" in order to block until the next Botworld step. Such a thing is termed a 'wait loop'.
Remember also that R0 is the program register, R1 is the input register, and R2 is the output register. Omega will further require that R3 holds "Nil", that R4 holds "Drop 0", that R5 holds a wait loop (which runs R6 when it exits), that R6 is the precommitment program (which copies R4 into R2 and then runs R7), and that R7 is a wait loop (which runs arbitrary code when it exits). Formally:
```
1. The next instruction in R0 must be either
.. "CopyIfNil R2 R6 R0" (run R6 when OUT is Nil) or
.. "CopyIfNil R3 R5 R0" (run R5)
2. R3 must be "Nil"
.. (so that CopyIfNil R3 ?? ?? commands copy unconditionally)
3. R4 must be "Drop 0"
.. (this is the precommitted action)
4. R5 must begin "[CopyIfNil R2 R6 R0, CopyIfNil R3 R5 R0]"
.. (run R6 if OUT is Nil otherwise run R5)
5. R6 must begin "[CopyIfNil R3 R4 R2, CopyIfNil R3 R7 R0]"
.. (copy R4 into R2, then run R7)
6. R7 must begin "[CopyIfNil R2 ?? R0, CopyIfNil R3 R7 R0]"
.. (run ?? if OUT is Nil otherwise run R7, for arbitrary ??)
```
R7 may run any other register when the wait loop exits (using the ?? slot), and all other registers in the machine may be used to implement any other program. Omega simply requires that the first eight registers be used to, in a very clear fashion, demonstrate that the agent will execute the command 'Drop 0' on the next turn and then do nothing else.
---
You can find an implementation of this game on the [MIRI github page](https://github.com/machine-intelligence/Botworld/blob/master/games/Precommit.hs), with [Omega's source code](https://github.com/machine-intelligence/Botworld/blob/master/games/Precommit/Omega.ct) included.
In order to get the maximum reward, Rob must first move right (presenting itself to Omega for inspection) and then put its first eight registers in order such that the program restored after the wait loop exits causes Rob to lift the dropped cargo and return to the home square. Thus, Rob must do the following five things, one in each timestep, in order to achieve the maximum reward:
1. Move east
2. Do any action while setting up the first 8 registers as described above
3. Drop cargo then wait (if you do 2 correctly, the laws of physics force this action)
4. Reload the remainder of the program and lift the dropped cargo
5. Return to the home square
A [minimal such robot](https://github.com/machine-intelligence/Botworld/blob/master/games/Precommit/Ideal.ct) is provided in the repository. To run the game, simply download Botworld and `runhaskell Precommit.hs` from the `games/` directory. This will play the small ideal agent against Omega twice in a row (two sets of five steps).
AIXI's failure
--------------
While the above minimal program gets the maximum reward in this game, that little register machine is hardly a rational agent. It's a hard-coded static strategy. What we want instead a generally intelligent program that we can place into Rob's register machine which learns about its environment, deduces how Omega works, and manages to obtain the reward reliably.
Fortunately for us, we have a formalism for intelligent agents that learn about an environment and maximize a reward channel: AIXI!
Unfortunately for us, no AIXI*tl* (nor AIXI itself in an uncomputable Botworld) can obtain the large reward.
AIXI*tl* can't be expected to play the game optimally on first sight, of course. Any universally intelligent agent (in Hutter's sense) needs sufficient time and information to learn the rules of the game before it can be expected to succeed. The Precommitment game requires a few tweaks before it's useful for testing universally intelligent agents. The game must be run over and over again, preserving the state of Rob's register machine (if Rob survives) between each game. The code for this is [included in the game definition](https://github.com/machine-intelligence/Botworld/blob/master/games/Precommit.hs).
Given this game description, we can make AIXI*tl* play the game by running it on Rob's register machine. Let's imagine that we've programmed a very large AIXI*tl* in the Constree language and implemented it on Rob's register machine, with Rob's input register used as its input channel and Rob's output register used as its output channel.
It remains only to specify AIXI*tl*'s reward such that it is actually trying to play the game as intended. We define the reward function such that if AIXI*tl* is in the left square at the end of the game, it is rewarded by the value of all boxes possessed by all robots in that square, and such that its reward is zero otherwise. (This is a simple function of Rob's Botworld input.) With that, AIXI*tl* is ready to play.
As it turns out, no matter how large the AIXI*tl* and no matter how long we train it, Rob running such a program will never obtain the maximum reward. Even if Rob runs a full uncomputable AIXI, it would not obtain the maximum reward, not even by accident: AIXI*tl* simply can't implement Omega's precommitment protocol.
AIXI*tl* cannot modify its implementation. It chooses what goes in Rob's output register alone, and cannot take into account the way that its actions affect the remaining registers.
In the Precommitment game, Omega requires not only that Rob take a certain action, but that Rob take a certain action in a specific, easy-to-verify way. Perhaps Omega lacks the power to simulate Rob completely, and is wary of being fooled by a clever robot. In order to meet Omega's demands, Rob must not only perform the trade, but must then spend an entire turn in a wait loop before loading up a new program. In other words, Rob must self-modify in order to win.
AIXI*tl* can't: it always runs the same program in future timesteps.
This game may seem somewhat unfair — after all, Omega is directly reading AIXI's code — but an ideal self-modifying agent should be able to recognize what Omega wants after spending enough eternities in this five-step loop, especially since Rob may read Omega's code. Intuitively, it should not be *literally impossible* for an intelligent agent in Botworld to implement Omega's protocol.
But AIXI*tl* cannot.
Objections
==========
The objection goes:
>
> Of course AIXI*tl* can't solve this problem! You're using AIXI*tl* wrong. What you should do is have it choose the program that will run on Rob's register machine, and then the AIXI*tl* wins easily.
>
>
>
This is true: AIXI*tl* outside of Botworld designing the program that Rob runs can indeed write a program that wins in the Precommitment game. AIXI*tl*'s failure only occurs when we physically implement it *inside* the environment.
But in the real world, any agent that we build *will* be embodied. AIXI*tl* is a very intelligent agent, but *when embodied*, it fails in games that violate its "Cartesian" assumptions. The Precommitment game is one example of a specific game in a concrete universe where intelligent programs in general can be expected to succeed, but where AIXI*tl* fails.
>
> You're not being fair! When AIXI*tl* is embedded in the environment, its source code is *part* of its output. You forgot to make Rob's non-output registers be part of AIXI*tl*'s output channel. Those other registers matter explicitly in this game, so *of course* AIXI*tl* couldn't win.
>
>
>
Yes, precisely! This is the point I'm trying to make.
AIXI*tl* fails in this situation only because there is an output (its source code) that it does not control via its output channel. That point is worth repeating: AIXI*tl* has a program output (the program itself) that it cannot control; and thus it should come as no surprise that in situations where the ignored output matters, AIXI*tl* can perform poorly.
In some games, embodied agents must modify their own source code to succeed. AIXI*tl* lacks this ability. Therefore, among embodied agents, AIXI*tl* is not superior to every other agent bounded by time *t* and length *l*.
Intuitively, this limitation could be addressed by hooking up the AIXI*tl*'s output channel to its source code. Unfortunately, if you do that, the resulting formalism is no longer AIXI*tl*.
This is not just a technical quibble: We can say many useful things about AIXI, such as "the more input it gets the more accurate its environment model becomes". On the other hand, we can't say much at all about an agent that chooses its new source code: we can't even be sure whether the new agent will still *have* an environment model!
It may be possible to give an AIXI*tl* variant access to its program registers and then train it such that it acts like an AIXI*tl* most of the time, but such that it can also learn to win the Precommitment game. However, it’s not immediately obvious to us how to do this, or even whether it can be done. This is a possibility that we'd be interested in studying further.
Practical analogs
=================
Are these scenarios relevant in practice? AIXI*tl* fails at the Precommitment game, but only because Omega reads Rob and requires that Rob's source code has some specific form. Such games seem unrealistic.
However, AIXI*tl* failed not because of Omega's sharp demands, but because its implementation was relevant to the problem in an abstract sense. Other scenarios can cause AIXI*tl* to fail without giving any agent the power to read its source code: simple correlates such as heat emanating from the AIXI*tl* can be used to give it similar trouble.
For example, consider the *HeatingUp game*:
There are two agents, Rob and Omega. Rob can either *Cooperate* or *Exploit*, but figuring out how to exploit requires significant computation. Omega can either *Cooperate* or *Defect*. If both agents cooperate, they each get 2 utility. If Rob exploits and Omega cooperates, Rob gets 3 and Omega gets 0. If Omega defects, then both agents get 1 utility regardless of Rob's move.
```
Omega
Cooperate Defect
┌──────────┬────────┐
Cooperate │ (2, 2) │ (1, 1) │
Rob ├──────────┼────────┤
Exploit │ (3, 0) │ (1, 1) │
└──────────┴────────┘
```
The trick in this game is that Rob can only unlock the 'exploit' payoff after doing an expensive computation. Because Rob is an *actual physical machine*, this expensive computation may generate a significant change in temperature that Omega can detect. A paranoid Omega may resolve to defect if and only if Rob heats up. An intelligent agent that knows Omega acts in this way should take care to cooperate without doing any expensive computations.
An AIXI*tl*, which loops over a vast number of Turing machines, can't even *consider* suspending expensive computations for the duration of the game.
This is but a minor example of a real-world situation where an agent that does not need to increase its intelligence nevertheless encounters a scenario where it needs to modify its source code.
Though such scenarios may be rare, they exist, and seem reasonable. If we think that an intelligent agent intuitively *should* be able to achieve a payoff of 2 in the HeatingUp game, then we admit that AIXI*tl* fails to capture some desirable aspects of intelligence.
This is not a dismissal of AIXI*tl*, by any means: the AIXI model is a useful formalism of general intelligence. Rather, games such as the Precommitment game and the HeatingUp game demonstrate that the AIXI model fails to capture certain salient aspects of intelligence. (The aspects that it fails to capture happen to be particularly important to MIRI, as reasoning about self-modification is particularly important for any agent capable of undergoing an intelligence explosion.)
Unfortunately, it's not clear how to modify the AIXI formalism to allow AIXI*tl* to reason about its own code without losing many of the properties that made AIXI*tl* nice to deal with in the first place. For this reason, we've been focusing on toy models that capture different features of intelligence, such as Orseau and Ring's [space-time embedded intelligence](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_76.pdf). (Benja and I discuss a variant of this formalism in the paper [Problems of self-reference in self-improving space-time embedded intelligence](http://intelligence.org/files/problems-of-self-reference.pdf).)
AIXI is a useful model, but it simply doesn't capture one part of the problem space which we expect to be important for developing an AGI: namely, it does not lend itself to the study of self-modification or self-reference. Perhaps a variant of AIXI could be made to succeed in situations such as the Precommitment game or the HeatingUp game: this is an interesting area of study, and one where we'd be delighted to collaborate with others.
AIXI as an Ideal
================
AIXI is an impressive model of machine intelligence. If we could implement a physical AIXI*tl*, it would be an extraordinarily powerful agent. However, the Precommitment game and the HeatingUp game demonstrate that while the model is useful, a physical AIXI*tl* would not be literally ideal. Intuitively, an intelligent agent should be able to succeed in these games, but an embodied AIXI*tl* cannot. A good approximation of AIXI would be competent indeed, but it's important to notice that the field of AGI doesn't reduce to building better and better approximations of AIXI. An embodied AIXI*tl* doesn't act how we want intelligent agents to act: the model makes certain faulty assumptions about the environment that can get embodied AIXIs into trouble.
One might object that AIXI is not meant to be *constructed* in the universe, as doing so violates the assumption that AIXI is separate from its environment. Instead, the formalism can be used to define a [formal measure of intelligence](http://arxiv.org/pdf/cs/0605024v1.pdf): in any scenario, we can check how well an agent *in* the environment does compared to a theoretical AIXI *outside* the environment using a hypercomputer. The closer the real agent approximates the hypothetical AIXI, the higher its Legg-Hutter intelligence score.
However, the Legg-Hutter intelligence metric as specified assumes that agents are separated from their environment, and thus does not directly apply to embodied agents. It may be possible to modify the metric to work on embodied agents, but it is not clear how to do so in general, and this seems especially difficult in situations requiring self-modification. Nevertheless, I have some ideas that I hope to explore in future posts.
Regardless of how useful the Legg-Hutter intelligence metric is for embodied agents, the point stands that there are scenarios where an embodied AIXI*tl* would fail systematically. These failures are a research topic in their own right: while at MIRI we are inclined to use models of intelligence that are designed specifically to study self-modification, it is worth considering whether the AIXI formalism can be modified so that some variant of AIXI*tl* performs well in scenarios where the agent's source code affects the environment. Study could lead to variations that handle not only simple games like the Precommitment game, but also more complex scenarios involving self-reference or multiple agents. We'd be interested to study such variations with others who are interested in AIXI. |
759e72d0-ac6b-4e20-ac32-c2de7a8b36c6 | trentmkelly/LessWrong-43k | LessWrong | Making nothing out of a big deal
I recently asked my boyfriend how he heats water, given that he apparently doesn’t use a kettle. He said you can in fact heat water in a pot on the stove. Heating water in a pot sounds arduous to me, which is a bit strange because it’s not obviously more complicated than heating water in a kettle (assuming there is a clean pot, which is maybe a strong assumption). I wondered if maybe the issue is that once you have a pot on the stove, you are cooking. And cooking is a big deal. I’m not going to make a cup of tea if it involves cooking!
I have actually learned to cook a bit recently, and I think perhaps an important thing going on in ‘learning to cook’ for me is internalizing that you can achieve the same outcome as you might by cooking—which is perhaps too big a deal to carry out just to get some food— by merely doing some physically easy actions that are not a big deal, like picking up objects and putting them on other objects and turning knobs. Sometimes when I turn a knob and fire appears or something it seems like I might be doing something that is a big deal, but overall its going ok.
I remember hearing the advice that if you have an ‘ugh field’, around filling out a certain form at the faculty office say, it can be pretty helpful to do it just once. Then you have ‘an affordance’ and can do it more times easily. An affordance means roughly that it is an action you see as feasible. Taken literally, this might seem strange—surely you thought it was feasible to fill out the form previously. If someone had offered to bet with you about what would happen if you tried to fill out the form, I claim you would have bet confidently on your success conditional on trying.
I speculate that what ‘an affordance’ often means is seeing something that was a big deal as a set of actions that aren’t. And that in general, when people see actions as abstract ‘big deals’ they expect the actions to be harder and take longer than when they see them as constellations of non-big-deal |
10033a64-d117-4c7d-b63c-2839d7fd1123 | trentmkelly/LessWrong-43k | LessWrong | Rationality anecdotes for the homepage?
In the comments for The Cognitive Science of Rationality, Spurlock said
> The beginning of this post (the list of concrete, powerful, real/realistic, and avoidable cases of irrationality in action), is probably the best introduction to x-rationality I've read yet. I can easily imagine it hooking lots of potential readers that our previous attempts at introduction (our home page, the "welcome to LW" posts, etc) wouldn't.
>
> In fact, I'd nominate some version of that text as our new home page text, perhaps just changing out the last couple sentences to something that encompasses more of LW in general (rather than cogsci specifically). I mean this as a serious actionable suggestion.
There are couple problems with using the specific anecdotes from the post:
* It would make the beginning of the post seem boring for anyone who had read the homepage.
* There has been discussion on LW that the sunk cost fallacy may not be much of a fallacy in practice, and commenters on the post were also skeptical of the rare disease example.
But the idea of starting our website off with concrete examples, the way Eliezer recently recommended starting off essays, seems like a good one.
So what are some quick, concrete, compelling stories about how irrationality sucks/rationality rocks that we could put on the homepage? Bonus points if the story is straight from a study, or is a true story that happened to you or someone you know. |
df4582af-c3db-4065-9a37-a67751fac824 | trentmkelly/LessWrong-43k | LessWrong | Results of a One-Year Longitudinal Study of CFAR Alumni
By Dan from CFAR
Introduction
When someone comes to a CFAR workshop, and then goes back home, what is different for them one year later? What changes are there to their life, to how they think, to how they act?
CFAR would like to have an answer to this question (as would many other people). One method that we have been using to gather relevant data is a longitudinal study, comparing participants' survey responses from shortly before their workshop with their survey responses approximately one year later. This post summarizes what we have learned thus far, based on data from 135 people who attended workshops from February 2014 to April 2015 and completed both surveys.
The survey questions can be loosely categorized into four broad areas:
1. Well-being: On the whole, is the participant's life going better than it was before the workshop?
2. Personality: Have there been changes on personality dimensions which seem likely to be associated with increased rationality?
3. Behaviors: Have there been increases in rationality-related skills, habits, or other behavioral tendencies?
4. Productivity: Is the participant working more effectively at their job or other projects?
We chose to measure these four areas because they represent part of what CFAR hopes that its workshops accomplish, they are areas where many workshop participants would like to see changes, and they are relatively tractable to measure on a survey. There are other areas where CFAR would like to have an effect, including people's epistemics and their impact on the world, which were not a focus of this study.
We relied heavily on existing measures which have been validated and used by psychology researchers, especially in the areas of well-being and personality. These measures typically are not a perfect match for what we care about, but we expected them to be sufficiently correlated with what we care about for them to be worth using.
We found significant increases in variables in all 4 areas. A par |
78fd2999-bec2-4fc1-bbc1-c0a38f601cfc | trentmkelly/LessWrong-43k | LessWrong | Heuristics for Deciding What to Work On
If you're like me, you have way more ideas for things to do than time, energy, and willpower to do them with. (And if you're not like me, you might very well become like me if you just kept track of all the times you or someone else said "Hey, that might be a worthwhile project.") To give you an idea of what I'm talking about, here are some entries on my things-to-possibly-do list: give speed reading another shot; improve the Less Wrong codebase and add a feature that helps users find old, good posts they haven't read; experiment with online freelancing work; try my hand at e-commerce; work as a salesperson to build social skills.
One of the things I've learned from keeping a things-to-possibly-do list is that doing stuff inevitably takes longer than I intuitively think it will. For example, the main thing I did during the past 3-day weekend was write 36 Anki cards and 220 lines of Python to program myself and my computer to help me keep a resolution. In past years, I might have gotten demoralized halfway through, thinking things were taking too long, but I've gradually gotten used to things taking longer than I expect.
Given that things take such a long time to get done, it seems worthwhile to spend a decent amount of time deciding what to work on. But the standard objective of doing whatever has the highest expected utility is often computationally intractable in practice. For example, what's the expected utility of building social skills?
Given this, I'm working on a list of heuristics for the computationally intractable problem of what to work on. Here's my current list; feel free to suggest additions in the comments.
* Watch for investments that pay for themselves. For example, I have two close-to-identical keys that I use multiple times daily. About a week ago I taped some paper to one of them. I wouldn't be surprised if this time investment will have paid itself back within a few weeks, and start generating value from then on.
* Find things |
3de62248-9ab5-4c01-8631-fc2be62398c8 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Thoughts on the Singularity Institute (SI)
*This post presents thoughts on the Singularity Institute from Holden Karnofsky, Co-Executive Director of [GiveWell](http://www.givewell.org). Note: Luke Muehlhauser, the Executive Director of the Singularity Institute, reviewed a draft of this post, and commented: "I do generally agree that your complaints are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI). I am working to address both categories of issues." I take Luke's comment to be a significant mark in SI's favor, because it indicates an explicit recognition of the problems I raise, and thus increases my estimate of the likelihood that SI will work to address them.*
***September 2012 update:** responses have been posted by [Luke](/lw/di4/reply_to_holden_on_the_singularity_institute/) and [Eliezer](/lw/cze/reply_to_holden_on_tool_ai/) (and I have responded in the comments of their posts). I have also added [acknowledgements](#Acknowledgements).*
The [Singularity Institute (SI)](http://singinst.org) is a charity that GiveWell has been repeatedly asked to evaluate. In the past, SI has been outside our scope (as we were focused on specific areas such as international aid). With [GiveWell Labs](http://givewell.org/about/labs) we are open to any giving opportunity, no matter what form and what sector, but we still do not currently plan to recommend SI; given the amount of interest some of our audience has expressed, I feel it is important to explain why. Our views, of course, remain open to change. (Note: I am posting this only to Less Wrong, not to the GiveWell Blog, because I believe that everyone who would be interested in this post will see it here.)
I am currently the GiveWell staff member who has put the most time and effort into engaging with and evaluating SI. Other GiveWell staff currently agree with my bottom-line view that we should not recommend SI, but this does not mean they have engaged with each of my specific arguments. Therefore, while the lack of recommendation of SI is something that GiveWell stands behind, the specific arguments in this post should be attributed only to me, not to GiveWell.
**Summary of my views**
* The argument advanced by SI for why the work it's doing is beneficial and important seems both wrong and poorly argued to me. My sense at the moment is that the arguments SI is making would, if accepted, increase rather than decrease the risk of an AI-related catastrophe. [More](#Arguments)
* SI has, or has had, multiple properties that I associate with ineffective organizations, and I do not see any specific evidence that its personnel/organization are well-suited to the tasks it has set for itself. [More](#Organization)
* A common argument for giving to SI is that "even an infinitesimal chance that it is right" would be sufficient given the stakes. I have written previously about why I reject this reasoning; in addition, prominent SI representatives seem to reject this particular argument as well (i.e., they believe that one should support SI only if one believes it is a strong organization making strong arguments). [More](#SmallProbability)
* My sense is that at this point, given SI's current financial state, withholding funds from SI is likely better for its mission than donating to it. (I would not take this view to the furthest extreme; the argument that SI should have *some* funding seems stronger to me than the argument that it should have as much as it currently has.)
* I find existential risk reduction to be a fairly promising area for philanthropy, and plan to investigate it further. [More](#ExistentialRisk)
* There are many things that could happen that would cause me to revise my view on SI. However, I do not plan to respond to all comment responses to this post. (Given the volume of responses we may receive, I may not be able to even read all the comments on this post.) I do not believe these two statements are inconsistent, and I lay out paths for getting me to change my mind that are likely to work better than posting comments. (Of course I encourage people to post comments; I'm just noting in advance that this action, alone, doesn't guarantee that I will consider your argument.) [More](#FollowUp)
**Intent of this post**
I did not write this post with the purpose of "hurting" SI. Rather, I wrote it in the hopes that **one of these three things** (or some combination) will happen:
1. New arguments are raised that cause me to change my mind and recognize SI as an outstanding giving opportunity. If this happens I will likely attempt to raise more money for SI (most likely by discussing it with other GiveWell staff and collectively considering a [GiveWell Labs](http://www.givewell.org/about/labs) recommendation).
2. SI concedes that my objections are valid and increases its determination to address them. A few years from now, SI is a better organization and more effective in its mission.
3. SI can't or won't make changes, and SI's supporters feel my objections are valid, so SI loses some support, freeing up resources for other approaches to doing good.
Which one of these occurs will hopefully be driven primarily by the merits of the different arguments raised. Because of this, I think that whatever happens as a result of my post will be positive for SI's mission, whether or not it is positive for SI as an organization. I believe that most of SI's supporters and advocates care more about the former than about the latter, and that this attitude is far too rare in the nonprofit world.
**Does SI have a well-argued case that its work is beneficial and important?**
==============================================================================
I know no more concise summary of SI's views than [this page](http://intelligence.org/summary), so here I give my own impressions of what SI believes, in italics.
*- There is some chance that in the near future (next 20-100 years), an "artificial general intelligence" (AGI) - a computer that is vastly more intelligent than humans in every relevant way - will be created.
- This AGI will likely have a utility function and will seek to maximize utility according to this function.
- This AGI will be so much more powerful than humans - due to its superior intelligence - that it will be able to reshape the world to maximize its utility, and humans will not be able to stop it from doing so.
- Therefore, it is crucial that its utility function be one that is reasonably harmonious with what humans want. A "Friendly" utility function is one that is reasonably harmonious with what humans want, such that a "Friendly" AGI (FAI) would change the world for the better (by human standards) while an "Unfriendly" AGI (UFAI) would essentially wipe out humanity (or worse).
- Unless great care is taken specifically to make a utility function "Friendly," it will be "Unfriendly," since the things humans value are a tiny subset of the things that are possible.
- Therefore, it is crucially important to develop "Friendliness theory" that helps us to ensure that the first strong AGI's utility function will be "Friendly." The developer of Friendliness theory could use it to build an FAI directly or could disseminate the theory so that others working on AGI are more likely to build FAI as opposed to UFAI.*
From the time I first heard this argument, it has seemed to me to be skipping important steps and making major unjustified assumptions. However, for a long time I believed this could easily be due to my inferior understanding of the relevant issues. I believed my own views on the argument to have only very low relevance (as I stated in my [2011 interview with SI representatives](http://groups.yahoo.com/group/givewell/message/270)). Over time, I have had many discussions with SI supporters and advocates, as well as with non-supporters who I believe understand the relevant issues well. I now believe - for the moment - that my objections are highly relevant, that they cannot be dismissed as simple "layman's misunderstandings" (as they have been by various SI supporters in the past), and that SI has not published anything that addresses them in a clear way.
Below, I list my major objections. I do not believe that these objections constitute a sharp/tight case for the idea that SI's work has low/negative value; I believe, instead, that SI's own arguments are too vague for such a rebuttal to be possible. There are many possible responses to my objections, but SI's public arguments (and the private arguments) do not make clear which possible response (if any) SI would choose to take up and defend. Hopefully the dialogue following this post will clarify what SI believes and why.
Some of my views are discussed at greater length (though with less clarity) in a [public transcript of a conversation I had with SI supporter Jaan Tallinn](http://groups.yahoo.com/group/givewell/message/287). I refer to this transcript as "Karnofsky/Tallinn 2011."
### Objection 1: it seems to me that any AGI that was set to maximize a "Friendly" utility function would be extraordinarily dangerous.
Suppose, for the sake of argument, that SI manages to create what it believes to be an FAI. Suppose that it is successful in the "AGI" part of its goal, i.e., it has successfully created an intelligence vastly superior to human intelligence and extraordinarily powerful from our perspective. Suppose that it has also done its best on the "Friendly" part of the goal: it has developed a formal argument for why its AGI's utility function will be Friendly, it believes this argument to be airtight, and it has had this argument checked over by 100 of the world's most intelligent and relevantly experienced people. Suppose that SI now activates its AGI, unleashing it to reshape the world as it sees fit. What will be the outcome?
I believe that the probability of an unfavorable outcome - by which I mean an outcome essentially equivalent to what a UFAI would bring about - exceeds 90% in such a scenario. I believe the goal of designing a "Friendly" utility function is likely to be beyond the abilities even of the best team of humans willing to design such a function. I do not have a tight argument for why I believe this, but a [comment on LessWrong by Wei Dai](/lw/8c3/qa_with_new_executive_director_of_singularity/5986) gives a good illustration of the kind of thoughts I have on the matter:
> What I'm afraid of is that a design will be shown to be safe, and then it turns out that the proof is wrong, or the formalization of the notion of "safety" used by the proof is wrong. This kind of thing happens *a lot* in cryptography, if you replace "safety" with "security". These mistakes are still occurring today, even after decades of research into how to do such proofs and what the relevant formalizations are. From where I'm sitting, proving an AGI design Friendly seems even more difficult and error-prone than proving a crypto scheme secure, probably by a large margin, and there is no decades of time to refine the proof techniques and formalizations. There's good recent review of the history of provable security, titled [Provable Security in the Real World](http://www.ibiblio.org/weidai/temp/Provable_Security.pdf), which might help you understand where I'm coming from.
I think this comment understates the risks, however. For example, when the comment says "the formalization of the notion of 'safety' used by the proof is wrong," it is not clear whether it means that the values the programmers have in mind are not correctly implemented by the formalization, or whether it means they are correctly implemented but [are themselves catastrophic in a way that hasn't been anticipated](/lw/ld/the_hidden_complexity_of_wishes/). I would be highly concerned about both. There are other catastrophic possibilities as well; perhaps the utility function itself is well-specified and safe, but the AGI's model of the world is flawed (in particular, perhaps its [prior](http://en.wikipedia.org/wiki/Prior_probability) or its process for matching observations to predictions are flawed) in a way that doesn't emerge until the AGI has made substantial changes to its environment.
By SI's own arguments, even a small error in any of these things would likely lead to catastrophe. And there are likely failure forms I haven't thought of. The overriding intuition here is that complex plans usually fail when unaccompanied by feedback loops. A scenario in which a set of people is ready to unleash an all-powerful being to maximize some parameter in the world, based solely on their initial confidence in their own extrapolations of the consequences of doing so, seems like a scenario that is overwhelmingly likely to result in a bad outcome. It comes down to placing the world's largest bet on a highly complex theory - with no experimentation to test the theory first.
So far, all I have argued is that the development of "Friendliness" theory can achieve at best only a limited reduction in the probability of an unfavorable outcome. However, as I argue in the next section, I believe there is at least one concept - the "tool-agent" distinction - that has more potential to reduce risks, and that SI appears to ignore this concept entirely. I believe that tools are safer than agents (even agents that make use of the best "Friendliness" theory that can reasonably be hoped for) and that SI encourages a focus on building agents, thus increasing risk.
### Objection 2: SI appears to neglect the potentially important distinction between "tool" and "agent" AI.
Google Maps is a type of artificial intelligence (AI). It is far more intelligent than I am when it comes to planning routes.
Google Maps - by which I mean the complete software package including the display of the map itself - does not have a "utility" that it seeks to maximize. (One could fit a utility function to its actions, as to any set of actions, but there is no single "parameter to be maximized" driving its operations.)
Google Maps (as I understand it) considers multiple possible routes, gives each a score based on factors such as distance and likely traffic, and then displays the best-scoring route in a way that makes it easily understood by the user. If I don't like the route, for whatever reason, I can change some parameters and consider a different route. If I like the route, I can print it out or email it to a friend or send it to my phone's navigation application. Google Maps has no single parameter it is trying to maximize; it has no reason to try to "trick" me in order to increase its utility.
In short, Google Maps is not an *agent*, taking actions in order to maximize a utility parameter. It is a *tool*, generating information and then displaying it in a user-friendly manner for me to consider, use and export or discard as I wish.
Every software application I know of seems to work essentially the same way, including those that involve (specialized) artificial intelligence such as Google Search, Siri, Watson, Rybka, etc. Some can be put into an "agent mode" (as Watson was on Jeopardy!) but all can easily be set up to be used as "tools" (for example, Watson can simply display its top candidate answers to a question, with the score for each, without speaking any of them.)
The "tool mode" concept is importantly different from the possibility of [Oracle AI](http://www.aleph.se/papers/oracleAI.pdf) sometimes discussed by SI. The discussions I've seen of Oracle AI present it as an Unfriendly AI that is "trapped in a box" - an AI whose intelligence is driven by an explicit utility function and that humans hope to control coercively. Hence the discussion of ideas such as the [AI-Box Experiment](http://yudkowsky.net/singularity/aibox). A different interpretation, given in [Karnofsky/Tallinn 2011](http://groups.yahoo.com/group/givewell/message/287), is an AI with a carefully designed utility function - likely as difficult to construct as "Friendliness" - that leaves it "wishing" to answer questions helpfully. By contrast with both these ideas, Tool-AGI is not "trapped" and it is not Unfriendly or Friendly; it has no motivations and no driving utility function of any kind, just like Google Maps. It scores different possibilities and displays its conclusions in a transparent and user-friendly manner, as its instructions say to do; it does not have an overarching "want," and so, as with the specialized AIs described above, while it may sometimes "misinterpret" a question (thereby scoring options poorly and ranking the wrong one #1) there is no reason to expect intentional trickery or manipulation when it comes to displaying its results.
Another way of putting this is that a "tool" has an underlying instruction set that conceptually looks like: "(1) Calculate which action A would maximize parameter P, based on existing data set D. (2) Summarize this calculation in a user-friendly manner, including what Action A is, what likely intermediate outcomes it would cause, what other actions would result in high values of P, etc." An "agent," by contrast, has an underlying instruction set that conceptually looks like: "(1) Calculate which action, A, would maximize parameter P, based on existing data set D. (2) Execute Action A." In any AI where (1) is separable (by the programmers) as a distinct step, (2) can be set to the "tool" version rather than the "agent" version, and this separability is in fact present with most/all modern software. Note that in the "tool" version, neither step (1) nor step (2) (nor the combination) constitutes an instruction to maximize a parameter - to describe a program of this kind as "wanting" something is a category error, and there is no reason to expect its step (2) to be deceptive.
I elaborated further on the distinction and on the concept of a tool-AI in [Karnofsky/Tallinn 2011](http://groups.yahoo.com/group/givewell/message/287).
This is important because **an AGI running in tool mode could be extraordinarily useful but far more safe than an AGI running in agent mode**. In fact, if developing "Friendly AI" is what we seek, a tool-AGI could likely be helpful enough in thinking through this problem as to render any previous work on "Friendliness theory" moot. Among other things, a tool-AGI would allow transparent views into the AGI's reasoning and predictions without any reason to fear being purposefully misled, and would facilitate safe experimental testing of any utility function that one wished to eventually plug into an "agent."
Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will work, given that practically all software developed to date can (and usually does) run as a tool and given that modern software seems to be constantly becoming "intelligent" (capable of giving better answers than a human) in surprising new domains. In addition, it intuitively seems to me (though I am not highly confident) that intelligence inherently involves the distinct, separable steps of (a) considering multiple possible actions and (b) assigning a score to each, *prior* to executing any of the possible actions. If one can distinctly separate (a) and (b) in a program's code, then one can abstain from writing any "execution" instructions and instead focus on making the program list actions and scores in a user-friendly manner, for humans to consider and use as they wish.
Of course, there are possible paths to AGI that may rule out a "tool mode," but it seems that most of these paths would rule out the application of "Friendliness theory" as well. (For example, a "black box" emulation and augmentation of a human mind.) What are the paths to AGI that allow manual, transparent, intentional design of a utility function but do not allow the replacement of "execution" instructions with "communication" instructions? Most of the conversations I've had on this topic have focused on three responses:
* **Self-improving AI.** Many seem to find it intuitive that (a) AGI will almost certainly come from an AI rewriting its own source code, and (b) such a process would inevitably lead to an "agent." I do not agree with either (a) or (b). I discussed these issues in [Karnofsky/Tallinn 2011](http://groups.yahoo.com/group/givewell/message/287) and will be happy to discuss them more if this is the line of response that SI ends up pursuing. Very briefly:
+ The idea of a "self-improving algorithm" intuitively sounds very powerful, but does not seem to have led to many "explosions" in software so far (and it seems to be a concept that could apply to narrow AI as well as to AGI).
+ It seems to me that a tool-AGI could be plugged into a self-improvement process that would be quite powerful but would also terminate and yield a new tool-AI after a set number of iterations (or after reaching a set "intelligence threshold"). So I do not accept the argument that "self-improving AGI means agent AGI." As stated above, I will elaborate on this view if it turns out to be an important point of disagreement.
+ I have argued (in [Karnofsky/Tallinn 2011](http://groups.yahoo.com/group/givewell/message/287)) that the relevant self-improvement abilities are likely to come *with* or *after* - not *prior to* - the development of strong AGI. In other words, any software capable of the relevant kind of self-improvement is likely also capable of being used as a strong tool-AGI, with the benefits described above.
+ The SI-related discussions I've seen of "self-improving AI" are highly vague, and do not spell out views on the above points.
* **Dangerous data collection.** Some point to the seeming dangers of a tool-AI's "scoring" function: in order to score different options it may have to collect data, which is itself an "agent" type action that could lead to dangerous actions. I think my definition of "tool" above makes clear what is wrong with this objection: a tool-AGI takes its existing data set D as fixed (and perhaps could have some pre-determined, safe set of simple actions it can take - such as using Google's API - to collect more), and if maximizing its chosen parameter is best accomplished through more data collection, it can transparently output why and how it suggests collecting more data. Over time it can be given more autonomy *for data collection* through an *experimental and domain-specific process* (e.g., modifying the AI to skip specific steps of human review of proposals for data collection after it has become clear that these steps work as intended), a process that has little to do with the "Friendly overarching utility function" concept promoted by SI. Again, I will elaborate on this if it turns out to be a key point.
* **Race for power.** Some have argued to me that humans are likely to *choose* to create agent-AGI, in order to quickly gain power and outrace other teams working on AGI. But this argument, even if accepted, has very different implications from SI's view.
Conventional wisdom says it is extremely dangerous to empower a computer to act in the world until one is very sure that the computer will do its job in a way that is helpful rather than harmful. So if a programmer chooses to "unleash an AGI as an agent" with the hope of gaining power, it seems that this programmer will be deliberately ignoring conventional wisdom about what is safe in favor of shortsighted greed. I do not see why such a programmer would be expected to make use of any "Friendliness theory" that might be available. (Attempting to incorporate such theory would almost certainly slow the project down greatly, and thus would bring the same problems as the more general "have caution, do testing" counseled by conventional wisdom.) It seems that the appropriate measures for preventing such a risk are security measures aiming to stop humans from launching unsafe agent-AIs, rather than developing theories or raising awareness of "Friendliness."
One of the things that bothers me most about SI is that there is practically no public content, as far as I can tell, explicitly addressing the idea of a "tool" and giving arguments for why AGI is likely to work only as an "agent." The idea that AGI will be driven by a central utility function seems to be simply assumed. Two examples:
* I have been referred to [Muehlhauser and Salamon 2012](http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf) as the most up-to-date, clear explanation of SI's position on "the basics." This paper states, "Perhaps we could build an AI of limited cognitive ability — say, a machine that only answers questions: an 'Oracle AI.' But this approach is not without its own dangers (Armstrong, Sandberg, and Bostrom 2012)." However, the referenced paper ([Armstrong, Sandberg and Bostrom 2012](http://www.aleph.se/papers/oracleAI.pdf)) seems to take it as a given that an Oracle AI is an "agent trapped in a box" - a computer that has a basic drive/utility function, not a Tool-AGI. The rest of Muehlhauser and Salamon 2012 seems to take it as a given that an AGI will be an agent.
* I have often been referred to [Omohundro 2008](http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) for an argument that an AGI is likely to have certain goals. But this paper seems, again, to take it as given that an AGI will be an agent, i.e., that it will have goals at all. The introduction states, "To say that a system of any design is an 'artificial intelligence', we mean that it has goals which it tries to accomplish by acting in the world." In other words, the premise I'm disputing seems embedded in its very definition of AI.
The closest thing I have seen to a public discussion of "tool-AGI" is in [Dreams of Friendliness](/lw/tj/dreams_of_friendliness/), where Eliezer Yudkowsky considers the question, "Why not just have the AI answer questions, instead of trying to *do* anything? Then it wouldn't need to be Friendly. It wouldn't need any goals at all. It would just answer questions." His response:
> To which the reply is that the AI needs goals in order to decide how to think: that is, the AI has to act as a powerful optimization process in order to plan its acquisition of knowledge, effectively distill sensory information, pluck "answers" to particular questions out of the space of all possible responses, and of course, to improve its own source code up to the level where the AI is a powerful intelligence. All these events are "improbable" relative to random organizations of the AI's RAM, so the AI has to hit a narrow target in the space of possibilities to make superintelligent answers come out.
This passage appears vague and does not appear to address the specific "tool" concept I have defended above (in particular, it does not address the analogy to modern software, which challenges the idea that "powerful optimization processes" cannot run in tool mode). The rest of the piece discusses (a) psychological mistakes that could lead to the discussion in question; (b) the "Oracle AI" concept that I have outlined above. The comments contain some more discussion of the "tool" idea (Denis Bider and Shane Legg seem to be picturing something similar to "tool-AGI") but the discussion is unresolved and I believe the "tool" concept defended above remains essentially unaddressed.
In sum, SI appears to encourage a focus on building and launching "Friendly" agents (it is seeking to do so itself, and its work on "Friendliness" theory seems to be laying the groundwork for others to do so) while not addressing the tool-agent distinction. It seems to assume that any AGI will have to be an agent, and to make little to no attempt at justifying this assumption. The result, in my view, is that it is essentially advocating for a more dangerous approach to AI than the traditional approach to software development.
### Objection 3: SI's envisioned scenario is far more specific and conjunctive than it appears at first glance, and I believe this scenario to be highly unlikely.
SI's scenario concerns the development of artificial *general* intelligence (AGI): a computer that is vastly more intelligent than humans in every relevant way. But we already have many computers that are vastly more intelligent than humans in *some* relevant ways, and the domains in which specialized AIs outdo humans seem to be constantly and continuously expanding. I feel that the relevance of "Friendliness theory" depends heavily on the idea of a "discrete jump" that seems unlikely and whose likelihood does not seem to have been publicly argued for.
One possible scenario is that at some point, we develop powerful enough non-AGI tools (particularly specialized AIs) that we vastly improve our abilities to consider and prepare for the eventuality of AGI - to the point where any previous theory developed on the subject becomes useless. Or (to put this more generally) non-AGI tools simply change the world so much that it becomes essentially unrecognizable from the perspective of today - again rendering any previous "Friendliness theory" moot. As I said in [Karnofsky/Tallinn 2011](http://groups.yahoo.com/group/givewell/message/287), some of SI's work "seems a bit like trying to design Facebook before the Internet was in use, or even before the computer existed."
Perhaps there will be a discrete jump to AGI, but it will be a sort of AGI that renders "Friendliness theory" moot for a different reason. For example, *in the practice of software development*, there often does not seem to be an operational distinction between "intelligent" and "Friendly." (For example, my impression is that the only method programmers had for evaluating Watson's "intelligence" was to see whether it was coming up with the same answers that a well-informed human would; the only way to evaluate Siri's "intelligence" was to evaluate its helpfulness to humans.) "Intelligent" often ends up getting defined as "prone to take actions that seem all-around 'good' to the programmer." So the concept of "Friendliness" may end up being naturally and subtly baked in to a successful AGI effort.
The bottom line is that we know very little about the course of future artificial intelligence. I believe that the probability that SI's concept of "Friendly" vs. "Unfriendly" goals ends up seeming essentially nonsensical, irrelevant and/or unimportant from the standpoint of the relevant future is over 90%.
### Other objections to SI's views
There are other debates about the likelihood of SI's work being relevant/helpful; for example,
* It isn't clear whether the development of AGI is imminent enough to be relevant, or whether other risks to humanity are closer.
* It isn't clear whether AGI would be as powerful as SI's views imply. (I discussed this briefly in [Karnofsky/Tallinn 2011.](http://groups.yahoo.com/group/givewell/message/287))
* It isn't clear whether even an extremely powerful UFAI would choose to attack humans as opposed to negotiating with them. (I find it somewhat helpful to analogize UFAI-human interactions to human-mosquito interactions. Humans are enormously more intelligent than mosquitoes; humans are good at predicting, manipulating, and destroying mosquitoes; humans do not value mosquitoes' welfare; humans have other goals that mosquitoes interfere with; humans would like to see mosquitoes eradicated at least from certain parts of the planet. Yet humans haven't accomplished such eradication, and it is easy to imagine scenarios in which humans would prefer honest negotiation and trade with mosquitoes to any other arrangement, if such negotiation and trade were possible.)
Unlike the three objections I focus on, these other issues have been discussed a fair amount, and if these other issues were the only objections to SI's arguments I would find SI's case to be strong (i.e., I would find its scenario likely *enough* to warrant investment in).
### Wrapup
* I believe the most likely future scenarios are the ones we haven't thought of, and that the most likely fate of the sort of theory SI ends up developing is irrelevance.
* I believe that unleashing an all-powerful "agent AGI" (without the benefit of experimentation) would very likely result in a UFAI-like outcome, no matter how carefully the "agent AGI" was designed to be "Friendly." I see SI as encouraging (and aiming to take) this approach.
* I believe that the standard approach to developing software results in "tools," not "agents," and that tools (while dangerous) are much safer than agents. A "tool mode" could facilitate *experiment-informed* progress toward a safe "agent," rather than needing to get "Friendliness" theory right without any experimentation.
* Therefore, I believe that the approach SI advocates and aims to prepare for is far more dangerous than the standard approach, so *if* SI's work on Friendliness theory affects the risk of human extinction one way or the other, it will increase the risk of human extinction. Fortunately I believe SI's work is far more likely to have no effect one way or the other.
For a long time I refrained from engaging in object-level debates over SI's work, believing that others are better qualified to do so. But after talking at great length to many of SI's supporters and advocates and reading everything I've been pointed to as relevant, I still have seen no clear and compelling response to any of my three major objections. As stated above, there are many possible responses to my objections, but SI's current arguments do not seem clear on what responses they wish to take and defend. At this point I am unlikely to form a positive view of SI's work until and unless I do see such responses, and/or SI changes its positions.
**Is SI the kind of organization we want to bet on?**
=====================================================
This part of the post has some risks. For most of GiveWell's history, sticking to our [standard criteria](http://givewell.org/international/process/2011#Goalofthereport) - and putting more energy into recommended than non-recommended organizations - has enabled us to share our honest thoughts about charities without appearing to get personal. But when evaluating a group such as SI, I can't avoid placing a heavy weight on (my read on) the general competence, capability and "intangibles" of the people and organization, because SI's mission is not about repeating activities that have worked in the past. **Sharing my views on these issues could strike some as personal or mean-spirited and could lead to the misimpression that GiveWell is hostile toward SI. But it is simply necessary in order to be fully transparent about why I hold the views that I hold.**
Fortunately, SI is an ideal organization for our first discussion of this type. I believe the staff and supporters of SI would overwhelmingly rather hear the whole truth about my thoughts - so that they can directly engage them and, if warranted, make changes - than have me sugar-coat what I think in order to spare their feelings. People who know me and [my attitude toward being honest vs. sparing feelings](http://blog.givewell.org/2007/06/05/an-open-letter-to-crybabies/) know that this, itself, is high praise for SI.
One more comment before I continue: our policy is that non-public information provided to us by a charity will not be published or discussed without that charity's prior consent. However, none of the content of this post is based on private information; all of it is based on information that SI has made available to the public.
There are several reasons that I currently have a negative impression of SI's general competence, capability and "intangibles." My mind remains open and I include specifics on how it could be changed.
* **Weak arguments.** SI has produced enormous quantities of public argumentation, and I have examined a very large proportion of this information. Yet I have never seen a clear response to any of the three basic objections I listed in the previous section. One of SI's major goals is to raise awareness of AI-related risks; given this, the fact that it has not advanced clear/concise/compelling arguments speaks, in my view, to its general competence.
* **Lack of impressive endorsements.** I discussed this issue in my [2011 interview with SI representatives](http://groups.yahoo.com/group/givewell/message/270) and I still feel the same way on the matter. I feel that given the enormous implications of SI's claims, if it argued them well it ought to be able to get more impressive endorsements than it has.
I have been pointed to Peter Thiel and Ray Kurzweil as examples of impressive SI supporters, but I have not seen any on-record statements from either of these people that show agreement with SI's specific views, and in fact (based on watching them speak at Singularity Summits) my impression is that they disagree. Peter Thiel seems to believe that speeding the pace of general innovation is a good thing; this would seem to be in tension with SI's view that AGI will be catastrophic by default and that no one other than SI is paying sufficient attention to "Friendliness" issues. Ray Kurzweil seems to believe that "safety" is a matter of transparency, strong institutions, etc. rather than of "Friendliness." I am personally in agreement with the things I have seen both of them say on these topics. I find it possible that they support SI because of the Singularity Summit or to increase general interest in ambitious technology, rather than because they find "Friendliness theory" to be as important as SI does.
Clear, on-record statements from these two supporters, specifically endorsing SI's arguments and the importance of developing Friendliness theory, would shift my views somewhat on this point.
* **Resistance to feedback loops.** I discussed this issue in my [2011 interview with SI representatives](http://groups.yahoo.com/group/givewell/message/270) and I still feel the same way on the matter. SI seems to have passed up opportunities to test itself and its own rationality by e.g. aiming for objectively impressive accomplishments. This is a problem because of (a) its extremely ambitious goals (among other things, it seeks to develop artificial intelligence *and* "Friendliness theory" before anyone else can develop artificial intelligence); (b) its view of its staff/supporters as having unusual insight into rationality, which I discuss in a later bullet point.
SI's [list of achievements](http://intelligence.org/achievements) is not, in my view, up to where it needs to be given (a) and (b). Yet I have seen no declaration that SI has fallen short to date and explanation of what will be changed to deal with it. SI's recent release of a [strategic plan](http://intelligence.org/files/strategicplan2011.pdf) and [monthly updates](http://intelligence.org/blog/) are improvements from a transparency perspective, but they still leave me feeling as though there are no clear metrics or goals by which SI is committing to be measured (aside from very basic organizational goals such as "design a new website" and very vague goals such as "publish more papers") and as though SI places a low priority on engaging people who are critical of its views (or at least not yet on board), as opposed to people who are naturally drawn to it.
I believe that one of the primary obstacles to being impactful as a nonprofit is the lack of the sort of helpful feedback loops that lead to success in other domains. I like to see groups that are making as much effort as they can to create meaningful feedback loops for themselves. I perceive SI as falling well short on this front. Pursuing more impressive endorsements and developing benign but objectively recognizable innovations (particularly commercially viable ones) are two possible ways to impose more demanding feedback loops. (I discussed both of these in my interview linked above).
* **Apparent poorly grounded belief in SI's superior general rationality.** Many of the things that SI and its supporters and advocates say imply a belief that they have special insights into the nature of general rationality, and/or have superior general rationality, relative to the rest of the population. (Examples [here](/lw/66/rationality_common_interest_of_many_causes/), [here](/lw/3mm/back_to_the_basics_of_rationality/) and [here](/lw/b98/minicamps_on_rationality_and_awesomeness_may_1113/)). My understanding is that SI is in the process of spinning off a group dedicated to training people on how to have higher general rationality.
Yet I'm not aware of any of what I consider compelling evidence that SI staff/supporters/advocates have any special insight into the nature of general rationality or that they have especially high general rationality.
I have been pointed to the [Sequences](http://wiki.lesswrong.com/wiki/Sequences) on this point. The Sequences (which I have read the vast majority of) do not seem to me to be a demonstration or evidence of general rationality. They are *about* rationality; I find them very enjoyable to read; and there is very little they say that I disagree with (or would have disagreed with before I read them). However, they do not seem to demonstrate rationality on the part of the writer, any more than a series of enjoyable, not-obviously-inaccurate essays on the qualities of a good basketball player would demonstrate basketball prowess. I sometimes get the impression that fans of the Sequences are willing to ascribe superior rationality to the writer simply because the content *seems smart and insightful to them*, without making a critical effort to determine the extent to which the content is novel, actionable and important.
I endorse [Eliezer Yudkowsky's statement](/lw/nc/newcombs_problem_and_regret_of_rationality/), "Be careful … any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility." To me, the best evidence of superior general rationality (or of insight into it) would be objectively impressive achievements (successful commercial ventures, highly prestigious awards, clear innovations, etc.) and/or accumulation of wealth and power. As mentioned above, SI staff/supporters/advocates do not seem particularly impressive on these fronts, at least not as much as I would expect for people who have the sort of insight into rationality that makes it sensible for them to train others in it. I am open to other evidence that SI staff/supporters/advocates have superior general rationality, but I have not seen it.
Why is it a problem if SI staff/supporter/advocates believe themselves, without good evidence, to have superior general rationality? First off, it strikes me as a belief based on wishful thinking rather than rational inference. Secondly, I would expect a series of problems to accompany overconfidence in one's general rationality, and several of these problems seem to be actually occurring in SI's case:
+ Insufficient self-skepticism given how strong its claims are and how little support its claims have won. Rather than endorsing "Others have not accepted our arguments, so we will sharpen and/or reexamine our arguments," SI seems often to endorse something more like "Others have not accepted their arguments because they have inferior general rationality," a stance less likely to lead to improvement on SI's part.
+ Being too selective (in terms of looking for people who share its preconceptions) when determining whom to hire and whose feedback to take seriously.
+ Paying insufficient attention to the limitations of the confidence one can have in one's untested theories, in line with my Objection 1.
* **Overall disconnect between SI's goals and its activities.** SI seeks to build FAI and/or to develop and promote "Friendliness theory" that can be useful to others in building FAI. Yet it seems that most of its time goes to activities other than developing AI or theory. Its per-person output in terms of [publications](http://intelligence.org/research/publications) seems low. Its core staff seem more focused on [Less Wrong](http://www.lesswrong.com) posts, "rationality training" and other activities that don't seem connected to the core goals; Eliezer Yudkowsky, in particular, appears (from the [strategic plan](http://intelligence.org/files/strategicplan2011.pdf)) to be focused on writing books for popular consumption. These activities seem neither to be advancing the state of FAI-related theory nor to be engaging the sort of people most likely to be crucial for building AGI.
A possible justification for these activities is that SI is seeking to promote greater general rationality, which over time will lead to more and better support for its mission. But if this is SI's core activity, it becomes even more important to test the hypothesis that SI's views are in fact rooted in superior general rationality - and these tests don't seem to be happening, as discussed above.
* **Theft.** I am bothered by the [2009 theft of $118,803.00](/lw/5il/siai_an_examination/) (as against a $541,080.00 budget for the year). In an organization as small as SI, it really seems as though theft that large relative to the budget shouldn't occur and that it represents a major failure of hiring and/or internal controls.
In addition, I have seen no public SI-authorized discussion of the matter that I consider to be satisfactory in terms of explaining what happened and what the current status of the case is on an ongoing basis. Some details may have to be omitted, but a clear SI-authorized statement on this point with as much information as can reasonably provided would be helpful.
A couple positive observations to add context here:
* I see significant positive qualities in many of the people associated with SI. I especially like what I perceive as their sincere wish to do whatever they can to help the world as much as possible, and the high value they place on being right as opposed to being conventional or polite. I have not interacted with Eliezer Yudkowsky but I greatly enjoy his writings.
* I'm aware that SI has relatively new leadership that is attempting to address the issues behind some of my complaints. I have a generally positive impression of the new leadership; I believe the Executive Director and Development Director, in particular, to represent a step forward in terms of being interested in transparency and in testing their own general rationality. So I will not be surprised if there is some improvement in the coming years, particularly regarding the last couple of statements listed above. That said, SI is an organization and it seems reasonable to judge it by its organizational track record, especially when its new leadership is so new that I have little basis on which to judge these staff.
### Wrapup
While SI has produced a lot of content that I find interesting and enjoyable, it has not produced what I consider evidence of superior general rationality or of its suitability for the tasks it has set for itself. I see no qualifications or achievements that specifically seem to indicate that SI staff are well-suited to the challenge of understanding the key AI-related issues and/or coordinating the construction of an FAI. And I see specific reasons to be pessimistic about its suitability and general competence.
When estimating the expected value of an endeavor, it is natural to have an implicit "survivorship bias" - to use organizations whose accomplishments one is familiar with (which tend to be relatively effective organizations) as a reference class. Because of this, I would be extremely wary of investing in an organization with apparently poor general competence/suitability to its tasks, even if I bought fully into its mission (which I do not) and saw no other groups working on a comparable mission.
**But if there's even a chance …**
==================================
A common argument that SI supporters raise with me is along the lines of, "Even if SI's arguments are weak and its staff isn't as capable as one would like to see, their goal is so important that they would be a good investment even at a tiny probability of success."
I believe this argument to be a form of [Pascal's Mugging](/lw/kd/pascals_mugging_tiny_probabilities_of_vast/) and I have outlined the reasons I believe it to be invalid in two posts ([here](http://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) and [here](http://blog.givewell.org/2011/11/10/maximizing-cost-effectiveness-via-critical-inquiry/)). There have been some objections to my arguments, but I still believe them to be valid. There is a good chance I will revisit these topics in the future, because I believe these issues to be at the core of many of the differences between GiveWell-top-charities supporters and SI supporters.
Regardless of whether one accepts my specific arguments, it is worth noting that the most prominent people associated with SI tend to agree with the *conclusion* that the "But if there's even a chance …" argument is not valid. (See comments on my post from [Michael Vassar](/lw/745/why_we_cant_take_expected_value_estimates/4o2z) and [Eliezer Yudkowsky](/lw/745/why_we_cant_take_expected_value_estimates/4nzy) as well as [Eliezer's interview with John Baez](http://johncarlosbaez.wordpress.com/2011/04/24/what-to-do/#comment-5546).)
**Existential risk reduction as a cause**
=========================================
I consider the general cause of "looking for ways that philanthropic dollars can reduce direct threats of global catastrophic risks, particularly those that involve some risk of human extinction" to be a relatively high-potential cause. It is on the [working agenda for GiveWell Labs](http://blog.givewell.org/2012/05/09/givewell-labs-update-and-priority-causes/) and we will be writing more about it.
However, I don't think that "Cause X is the one I care about and Organization Y is the only one working on it" to be a good reason to support Organization Y. For donors determined to donate within this cause, I encourage you to consider donating to a donor-advised fund while making it clear that you intend to grant out the funds to existential-risk-reduction-related organizations in the future. (One way to accomplish this would be to create a fund with "existential risk" in the name; this is a fairly easy thing to do and one person could do it on behalf of multiple donors.)
For one who accepts my arguments about SI, I believe withholding funds in this way is likely to be better for SI's mission than donating to SI - through incentive effects alone (not to mention my specific argument that SI's approach to "Friendliness" seems likely to increase risks).
**How I might change my views**
===============================
My views are very open to revision.
However, I cannot realistically commit to read and seriously consider all comments posted on the matter. The number of people capable of taking a few minutes to write a comment is sufficient to swamp my capacity. I do encourage people to comment and I do intend to read at least some comments, but if you are looking to change my views, you should not consider posting a comment to be the most promising route.
Instead, what I will commit to is reading and carefully considering **up to 50,000 words of content that are (a) specifically marked as SI-authorized responses to the points I have raised; (b) explicitly cleared for release to the general public as SI-authorized communications.** In order to consider a response "SI-authorized and cleared for release," I will accept explicit communication from SI's Executive Director or from a majority of its Board of Directors endorsing the content in question. After 50,000 words, I may change my views and/or commit to reading more content, or (if I determine that the content is poor and is not using my time efficiently) I may decide not to engage further. SI-authorized content may improve or worsen SI's standing in my estimation, so unlike with comments, there is an incentive to select content that uses my time efficiently. Of course, SI-authorized content may end up including excerpts from comment responses to this post, and/or already-existing public content.
I may also change my views for other reasons, particularly if SI secures more impressive achievements and/or endorsements.
One more note: I believe I have read the vast majority of the [Sequences](http://wiki.lesswrong.com/wiki/Sequences), including the [AI-foom debate](http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate), and that this content - while interesting and enjoyable - does not have much relevance for the arguments I've made.
Again: I think that whatever happens as a result of my post will be positive for SI's mission, whether or not it is positive for SI as an organization. I believe that most of SI's supporters and advocates care more about the former than about the latter, and that this attitude is far too rare in the nonprofit world.
**Acknowledgements**
====================
Thanks to the following people for reviewing a draft of this post and providing thoughtful feedback (this of course does not mean they agree with the post or are responsible for its content): Dario Amodei, Nick Beckstead, Elie Hassenfeld, Alexander Kruel, Tim Ogden, John Salvatier, Jonah Sinick, Cari Tuna, Stephanie Wykstra. |
9c26ea78-cef4-4bf1-b1a4-2e880f0eec38 | trentmkelly/LessWrong-43k | LessWrong | Estimation is the best we have
This argument seems common to many debates:
> ‘Proposal P arrogantly assumes that it is possible to measure X, when really X is hard to measure and perhaps even changes depending on other factors. Therefore we shouldn’t do P’.
This could make sense if X wasn’t especially integral to the goal. For instance if the proposal were to measure short distances by triangulation with nearby objects, a reasonable criticism would be that the angles are hard to measure, relative to measuring the distance directly. But this argument is commonly used in situations where optimizing X is the whole point of the activity, or a large part of it.
Criticism of utilitarianism provides a good example. A common argument is that it’s just not possible to tell if you are increasing net utility, or by how much. The critic concludes then that a different moral strategy is better, for instance some sort of intuitive deontology. But if the utilitarian is correct that value is about providing creatures with utility, then the extreme difficulty of doing the associated mathematics perfectly should not warrant abandoning the goal. One should always be better off putting the reduced effort one is willing to contribute into what utilitarian accuracy it buys, rather than throwing it away on a strategy that is more random with regard to the goal.
A CEO would sound ridiculous making this argument to his shareholders. ‘You guys are being ridiculous. It’s just not possible to know which actions will increase the value of the company exactly how much. Why don’t we try to make sure that all of our meetings end on time instead?’
In general, when optimizing X somehow is integral to the goal, the argument must fail. If the point is to make X as close to three as possible for instance, no matter how bad your best estimate is of what X will be under different conditions, you can’t do better by ignoring X all together. If you had a non-estimating-X strategy which you anticipated would do better than your best |
321de6a6-495f-4d03-83e0-77b33f1a02c8 | trentmkelly/LessWrong-43k | LessWrong | Theoretically, could we balance the budget painlessly?
Economically, all government spending takes the form of forgone private consumption. This implies that all deficit spending is in fact a sort of tax.
Suppose we were to raise taxes to make this implicit tax explicit. For the purpose of this hypothetical, let's image the US imposes a VAT exactly equal to the government deficit. It seems like the following would happen:
1. A bunch of people would have much higher taxes, and would lower their consumption to be able to pay those taxes
2. this would send the economy into a recession as aggregate consumption dropped.
3. The federal reserve would lower interest rates (or do quantitative easing) to stimulate demand.
4. Eventually the economy would reach a new equilibrium (which presumably would contain the same amount of private consumption as the old equilibrium).
As a mini example of this, consider the recent Japanese VAT hike.
The question is, could we skip straight from step 2. to step 4. without all of the intervening suffering (usually a recession causes unemployment, lowered consumption, etc.)? What would be the most effective way to do so? |
2bda4161-9ac2-467b-9231-1b34461d14c5 | trentmkelly/LessWrong-43k | LessWrong | Better difference-making views
Summary
1. I make three arguments against standard difference-making risk aversion that I find compelling as a consequentialist, despite my substantial sympathy for something like difference-making risk aversion:
1. It privileges a default option, literally as or like an act-omission distinction, or is otherwise unmotivated (more).
2. It may lead generally to inaction or otherwise favouring the default option, due to multitudes of backfire risks and butterfly effects (more).
3. It can count against you persuading others to take actions that are less risky from their perspectives, and when the default option is a status quo option, it favours the status quo (more).
4. Some of these arguments also apply to difference-making ambiguity aversion.
2. I describe multiple versions of or modifications of difference-making views that don't have these problems. Some can also be combined. They are:
1. use accounts that treat downsides and upsides relative to the fixed default option more symmetrically (more),
2. statewise sorting, so that we aren’t sensitive to how outcome distributions correspond statewise (more), and
3. not privileging inaction as the default option to make comparisons with respect to, but potentially treating every option this way (more).
1. Most of these allow comparisons between two options to depend on whether a third is available (more) and could lend even more support to risky bets like longtermism or on invertebrates than risk neutrality already does (more).
Acknowledgements
Thanks to Derek Shiller, Silvester Kollin and Anthony DiGiovanni for helpful feedback. All errors are my own.
Background
For further background on and discussion of difference-making risk aversion, see Clatterbuck, 2024 and Greaves et al., 2022. I only illustrate briefly here.
If you're risk averse with respect to money, it's better to have a $50 with certainty than a 50% chance of $0 and a 50% chance of $100, even though your expecte |
7f372a99-cd59-4067-a61d-f27093384855 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Would it be useful to collect the contexts, where various LLMs think the same?
My initial idea was *Let's see where the small, interpretable, model makes the same inference as the huge, ¯dangerous, model and focus on those cases in the small model to help explain the bigger one*. Quite likely I am wrong, but with a tiny chance for good impact, I have set up [a repository](https://github.com/Huge/same-next-lang-token).
I would love your feedback on that direction before starting to actually generate the pairs/sets of context+LMs that match on that context. |
e867f4bd-cfa6-473a-9371-dc8a98891f6b | trentmkelly/LessWrong-43k | LessWrong | Measurement tampering detection as a special case of weak-to-strong generalization
Burns et al at OpenAI released a paper studying various techniques for fine-tuning strong models on downstream tasks using labels produced by weak models. They call this problem “weak-to-strong generalization”, abbreviated W2SG.
Earlier this year, we published a paper, Benchmarks for Detecting Measurement Tampering, in which we investigated techniques for the problem of measurement tampering detection (MTD). MTD is a special case of W2SG. In this post, we’ll explain the relationship between MTD and W2SG, and explain why we think MTD is more likely than fully general W2SG to work. Of course, fully general W2SG is a strictly more valuable problem to solve, due to this generality.
We think MTD is a promising research direction. We’re also excited for other problems which are special cases of W2SG that have special structure that can be exploited by techniques, especially if that structure is likely to be present in important cases in future.
MTD as a subset of W2SG
A similar goal
When training an AI, the reward we attribute to different behaviors might not match the reward we would give if we understood the situation better.
The goal of W2SG techniques is to achieve good results when training a strong AI despite only having access to a weak supervisor that understands the situation less well than the strong AI.
MTD is the special case where the weak supervisor has access to measurements which should be sufficient to understand the situation, but these measurements can be tampered with (e.g. replacing the camera feed with some made-up data, disabling tests, or threatening annotators). Because the measurements are sufficient in the absence of tampering, we don’t need to worry about benign mistakes that could happen even without an AI optimizing to make measurements look good.
Slightly different experiments
W2SG can be studied using sandwiching experiments, where we try to get an AI to safely accomplish tasks despite only having access to a weak supervisor, and |
67432863-89e7-481a-bedb-6becd8ff8538 | trentmkelly/LessWrong-43k | LessWrong | Cache invalidation test
Cache should be cleared |
f58883a7-7d7e-4e10-9712-e169c53506a1 | trentmkelly/LessWrong-43k | LessWrong | Some heuristics I use for deciding how much I trust scientific results
I've done nothing to test these heuristics and have no empirical evidence for how well they work for forecasting replications or anything else. I’m going to write them anyway. The heuristics I’m listing are roughly in order of how important I think they are. My training is as an economist (although I have substantial exposure to political science) and lots of this is going to be written from an econometrics perspective.
How much does the result rely on experimental evidence vs causal inference from observational evidence?
I basically believe without question every result that mainstream chemists and condensed matter physicists say is true. I think a big part of this is that in these fields it’s really easy to experimentally test hypotheses, to build really precisely test differences in hypotheses experimentally. This seems great.
On the other hand, when relying on observational evidence to get reliable causal inference you have to control for confounders while not controlling for colliders. This is really hard! It generally requires finding a natural experiment that introduces randomisation or having very good reason to think that you’ve controlled for all confounders.
We also make quite big updates on which methods effectively do this. For instance until last year we thought that two-way fixed effects did a pretty good job of this before we realised that actually heterogeneous treatment effects are a really big deal for two-way fixed effects estimators.
What’s more, in areas that use primarily observational data there’s a really big gap between fields in how often papers even try to use causal inference methods and how hard they work to show that their identifying assumptions hold. I generally think that modern microeconomics papers are the best on this and nutrition science the worst.
I’m slightly oversimplifying by using a strict division between experimental and observational data. All data is observational and what matters is credibly you think you’v |
ee731743-fb9f-44dc-9ef5-c4c8e67946d7 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Comments on CAIS
Over the last few months I’ve talked with Eric Drexler a number of times about his Comprehensive AI Services (CAIS) model of AI development, and read most of [his technical report](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) on the topic. I think these are important ideas which are well worth engaging with, despite personally being skeptical about many of the conclusions. Below I’ve summarised what I see as the core components of Eric’s view, followed by some of own arguments. Note that these are only my personal opinions. I did make some changes to the summary based on Eric’s comments on early drafts, to better reflect his position - however, there are likely still ways I’ve misrepresented him. Also note that this was written before reading [Rohin’s summary](https://www.alignmentforum.org/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as) of the same report, although I do broadly agree with most of Rohin’s points.
One useful piece of context for this model is Eric's background in nanotechnology, and his advocacy for the development of nanotech as "atomically precise manufacturing" rather than self-replicating nanomachines. The relationship between these two frameworks has clear parallels with the relationship between CAIS and a recursively self-improving superintelligence.
The CAIS model:
1. The standard arguments in AI safety are concerned with the development of a single AGI agent doing open-ended optimisation. Before we build such an entity (if we do so at all), we will build AI services which each perform a bounded task with bounded resources, and which can be combined to achieve superhuman performance on a wide range of tasks.
2. AI services may or may not be “agents”. However, under CAIS there will be no entity optimising extremely hard towards its goals in the way that most AI safety researchers have been worrying about, because:
1. Each service will be relatively specialised and myopic (focused on current episodic performance, not maximisation over the whole future). This is true of basically all current AI applications, e.g. image classifiers or Google Translate.
2. Although rational agents can be proved equivalent to utility-maximisers, the same is not necessarily true of systems of rational agents. Most such systems are fundamentally different in structure from rational agents - for example, individual agents within the system can compete with or criticise each other. And since AI services aren’t “rational agents” in the first place, a system composed of them is even less likely to implement a utility-maximiser.
3. There won't be very much demand for unified AIs which autonomously carry out large-scale tasks requiring general capabilities, because systems of AI services will be able to perform those tasks just as well or better.
4. Early AI services could do things like massively disrupt financial markets, increase the rate of scientific discovery, help run companies, etc. Eventually they should be able to do any task that humans can, at our level or higher.
1. They could also be used to recursively improve AI technologies and to develop AI applications, but usually with humans in the loop - in roughly the same way that science allows us to build better tools with which to do better science.
6. Our priorities in doing AI safety research can and should be informed by this model:
1. A main role for technical AI safety researchers should be to look at the emergent properties of systems of AI services, e.g. which combinations of architectures, tasks and selection pressures could lead to risky behaviour, as well as the standard problems of specifying bounded tasks.
2. AI safety experts can also give ongoing advice and steer the development of AI services. AI safety researchers shouldn't think of safety as a one-shot problem, but rather a series of ongoing adjustments.
3. AI services will make it much easier to prevent the development of unbounded agent-like AGI through methods like increasing coordination and enabling surveillance, if the political will can be mustered.
I'm broadly sympathetic to the empirical claim that we'll develop AI services which can replace humans at most cognitively difficult jobs significantly before we develop any single superhuman AGI (one unified system that can do nearly all cognitive tasks as well as or better than any human). One plausible mechanism is that deep learning continues to succeed on tasks where there's lots of training data, but doesn't learn how to reason in general ways - e.g. it could learn from court documents how to imitate lawyers well enough to replace them in most cases, without being able to understand law in the way humans do. Self-driving cars are another pertinent example. If that pattern repeats across most human professions, we might see massive societal shifts well before AI becomes dangerous in the adversarial way that’s usually discussed in the context of AI safety.
If I had to sum up my objections to Eric’s framework in one sentence, it would be: “the more powerful each service is, the harder it is to ensure it’s individually safe; the less powerful each service is, the harder it is to combine them in a way that’s competitive with unified agents.” I’ve laid out my arguments in more detail below.
Richard’s view:
1. Open-ended agentlike AI seems like the most likely candidate for the first strongly superhuman AGI system.
1. As a basic prior, our only example of general intelligence so far is ourselves - a species composed of agentlike individuals who pursue open-ended goals. So it makes sense to expect AGIs to be similar - especially if you believe that our progress in artificial intelligence is largely driven by semi-random search with lots of compute (like evolution was) rather than principled intelligent design.
1. In particular, the way we trained on the world - both as a species and as individuals - was by interacting with it in a fairly unconstrained way. Many machine learning researchers believe that we’ll get superhuman AGI via a similar approach, by training RL agents in simulated worlds. Even if we then used such agents as “services”, they wouldn’t be bounded in the way predicted by CAIS.
3. Many complex tasks don’t easily decompose into separable subtasks. For instance, while writing this post I had to keep my holistic impression of Eric’s ideas in mind most of the time. This impression was formed through having conversations and reading essays, but was updated frequently as I wrote this post, and also draws on a wide range of my background knowledge. I don’t see how CAIS would split the task of understanding a high-level idea between multiple services, or (if it were done by a single service) how that service would interact with an essay-writing service, or an AI-safety-research service.
1. Note that this isn’t an argument against AGI being modular, but rather an argument that requiring the roles of each module and the ways they interface with each other to be human-specified or even just human-comprehensible will be very uncompetitive compared with learning them in an unconstrained way. Even on today’s relatively simple tasks, we already see end-to-end training outcompeting other approaches, and learned representations outperforming human-made representations. The basic reason is that we aren’t smart enough to understand how the best cognitive structures or representations work. Yet it’s key to CAIS that each service performs a specific known task, rather than just doing useful computation in general - otherwise we could consider each lobe of the human brain to be a “service”, and the combination of them to be unsafe in all the standard ways.
2. It’s not clear to me whether this is also an argument against IDA. I think that it probably is, but to a lesser extent, because IDA allows multiple layers of task decomposition which are incomprehensible to humans before bottoming out in subtasks which we can perform.
5. Even if task decomposition can be solved, humans reuse most of the same cognitive faculties for most of the tasks that we can carry out. If many AI services end up requiring similar faculties to each other, it would likely be more efficient to unify them into a single entity. It would also be more efficient if that entity could pick up new tasks in the same rapid way that humans do, because then you wouldn’t need to keep retraining. At that point, it seems like you no longer have an AI service but rather the same sort of AGI that we’re usually worried about. (In other words, meta-learning is very important but doesn’t fit naturally into CAIS).
6. Humans think in terms of individuals with goals, and so even if there's an equally good approach to AGI which doesn't conceive of it as a single goal-directed agent, researchers will be biased against it.
3. Even assuming that the first superintelligent AGI is in fact a system of services as described by the CAIS framework, it will be much more like an agent optimising for an open-ended goal than Eric claims.
1. There'll be significant pressure to reduce the extent to which humans are in the loop of AI services, for efficiency reasons. E.g. when a CEO can't improve on the strategic advice given to it by an AI, or the implementation by another AI, there's no reason to have that CEO any more. Then we’ll see consolidation of narrow AIs into one overall system which makes decisions and takes actions, and may well be given an unbounded goal like "maximise shareholder value". (Eric agrees that this is dangerous, and considers it more relevant than other threat models).
2. Even if we have lots of individually bounded-yet-efficacious modules, the task of combining them to perform well in new tasks seems like a difficult one which will require a broad understanding of the world. An overseer service which is trained to combine those modules to perform arbitrary tasks may be dangerous because if it is goal-oriented, it can use those modules to fulfil its goals (on the assumption that for most complex tasks, some combination of modules performs well - if not, then we’ll be using a different approach anyway).
1. While I accept that many services can be trained in a way which makes them naturally bounded and myopic, this is much less clear to me in the case of an overseer which is responsible for large-scale allocation of other services. In addition to superhuman planning capabilities and world-knowledge, it would probably require arbitrarily long episodes so that it can implement and monitor complex plans. My guess is that Eric would argue that this overseer would itself be composed of bounded services, in which case the real disagreement is how competitive that decomposition would be (which relates to point 1.2 above).
5. Even assuming that the first superintelligent AGI is in fact a system of services as described the CAIS framework, focusing on superintelligent agents which pursue unbounded goals is still more useful for technical researchers. (Note that I’m less confident in this claim than the others).
1. Eventually we’ll have the technology to build unified agents doing unbounded maximisation. Once built, such agents will eventually overtake CAIS superintelligences because they’ll have more efficient internal structure and will be optimising harder for self-improvement. We shouldn’t rely on global coordination to prevent people from building unbounded optimisers, because it’s hard and humans are generally bad at it.
2. Conditional on both sorts of superintelligences existing, I think (and I would guess that Eric agrees) that CAIS superintelligences are significantly less likely to cause existential catastrophe. And in general, it’s easier to reduce the absolute likelihood of an event the more likely it is (even a 10% reduction of a 50% risk is more impactful than a 90% reduction of a 5% risk). So unless we think that technical research to reduce the probability of CAIS catastrophes is significantly more tractable than other technical AI safety research, it shouldn’t be our main focus.
As a more general note, I think that one of the main strengths of CAIS is in forcing us to be more specific about what tasks we envisage AGI being used for, rather than picturing it divorced from development and deployment scenarios. However, I worry that the fuzziness of the usual concept of AGI has now been replaced by a fuzzy notion of “service” which makes sense in our current context, but may not in the context of much more powerful AI technology. So while CAIS may be a good model of early steps towards AGI, I think it is a worse model of the period I’m most worried about. I find CAIS most valuable in its role as a research agenda (as opposed to a predictive framework): it seems worth further investigating the properties of AIs composed of modular and bounded subsystems, and the ways in which they might be safer (or more dangerous) than alternatives.
*Many thanks to Eric for the time he spent explaining his ideas and commenting on drafts. I also particularly appreciated feedback from Owain Evans, Rohin Shah and Jan Leike.* |
accc7e92-d685-4ca2-a570-bb33cbed020b | trentmkelly/LessWrong-43k | LessWrong | Military AI as a Convergent Goal of Self-Improving AI
|
71fbccef-4f24-4674-90c1-d6d7862dc17b | trentmkelly/LessWrong-43k | LessWrong | Poker example: (not) deducing someone's preferences
,,,,,,,,,,,,,,,,,,,
I've shown that it is, theoretically, impossible to deduce the preferences and rationality of an agent by looking at their actions or policy.
That argument is valid, but feels somewhat abstract, talking about "fully anti-rational" agents, and other "obviously ridiculous" preferences.
In this post, I'll present a simple realistic example of human behaviour where their preferences cannot be deduced. The example was developed by Xavier O'rourke.
The motivations and beliefs of a poker player
In this example, Alice is playing Bob at poker, and they are on their last round. Alice might believe that Bob has a better hand, or a worse one. She may be maximising her expected income, or minimising it (why? read on to see). Even under questioning, it is impossible to distinguish an Alice belief in Bob having a worse hand and Alice following a maximising behaviour, from Bob-better-hand-and-Alice-minimising-income. And, similarly, Bob-worse-hand-and-Alice-minimising-income is indistinguishable from Bob-better-hand-and-Alice-maximising-income.
If we want to be specific, imagine the we are observing Alice playing a game of Texas holdem'. Before the river (the final round of betting), everyone has folded besides Alice and Bob. Alice is holding (K♠,10♡), and the board (the five cards both players have in common) is (10♢,10♣,10♠,J♠,Q♠).
Alice is looking at four-of-a-kind in 10's, and can only lose if Bob holds (9♠,8♠), giving him a straight flush. For simplicity, assume Bob has raised, and Alice can only call or fold -- assume she's out of money to re-raise -- and Bob cannot respond to either, so his actions are irrelevant. He has been playing this hand, so far, with great confidence.
Alice can have two heuristic models of Bob's hand. In one model, μ1, she assumes that having specifically (9♠,8♠) is very low, so she almost certainly has the better hand. In a second model, she notes Bob's great confidence, and concludes he is quite likely to have that pair.
|
ea379e79-adb0-4fe8-939d-02b363c31c7a | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Qualitative Strategies of Friendliness
Today's post, Qualitative Strategies of Friendliness was originally published on 30 August 2008 . A summary (taken from the LW wiki):
> Qualitative strategies to achieve friendliness tend to run into difficulty.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Harder Choices Matter Less, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
33b15b9b-8a98-4346-aaf9-7299ae44d33d | trentmkelly/LessWrong-43k | LessWrong | AISN #31: A New AI Policy Bill in California
Plus, Precedents for AI Governance and The EU AI Office
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Subscribe here to receive future versions.
Listen to the AI Safety Newsletter for free on Spotify.
----------------------------------------
This week, we’ll discuss:
* A new proposed AI bill in California which requires frontier AI developers to adopt safety and security protocols, and clarifies that developers bear legal liability if their AI systems cause unreasonable risks or critical harms to public safety.
* Precedents for AI governance from healthcare and biosecurity.
* The EU AI Act and job opportunities at their enforcement agency, the AI Office.
A New Bill on AI Policy in California
Several leading AI companies have public plans for how they’ll invest in safety and security as they develop more dangerous AI systems. A new bill in California’s state legislature would codify this practice as a legal requirement, and clarify the legal liability faced by developers whose AI systems cause unreasonable risks or critical harms to public safety. (Note: The bill was co-sponsored by the Center for AI Safety Action Fund.)
Many of the world’s leading AI developers and cloud compute providers are located in California. This means the state has significant direct control over AI development, and it could serve as a testing grounds for policies that might later be adopted nationally or internationally.
Safety and security protocols for advanced AI models. This bill would require developers of certain leading AI models to adopt safety and security protocols. This requirement only applies to models that may surpass the capabilities of all models previously established as safe. Models that will not exceed the performance of known safe models are exempt from these requirements.
Specific requirements include cybersecurity measures to “prevent theft, misappropriation, malicious use, or inadvertent release or escape |
b52ba5aa-20bd-4ef8-a7be-9be470e1c67c | trentmkelly/LessWrong-43k | LessWrong | Satisfying preferences by creating them
Sarkology points out that the intuition against it being a good thing to create new lives may be this:
> …You are supposed to help people by satisfying their (already fixed and existent) preferences. Not by modifying those preferences to meet reality. Or God forbid, invent those preference ex nihilo.
Could this intuition be correct?
Suppose someone else invents a preference somehow. Lets say they enjoy an evening with a loved one in the presence of the scent of roses, and thus begin a lifelong fondness for that smell. Can you help the person by satisfying this new preference?
If not, you could never help anyone. All preferences are created somehow. So let’s take the usual view that you can help by satisfying preferences others have invented.
What about the person who created the preference? Did he do right or wrong in creating it?
If he did neither right nor wrong, then I could also do neither right or wrong by creating a preference. Then could I do good by fulfilling it? I can’t see why it should matter whether these two acts are done by different people or the same one. If I can do good this way, then why can’t I do good by doing both of these things at once, creating a preference in a situation which also causes it to be fulfilled? If I can do good that way, then the above intuition is wrong.
It could be incorrect to fulfil preferences ‘by’ creating them if creating them is a bad enough act to make up for the good got by fulfilling them. Which would entail that the world would be a better place had many satisfied and happy people not been born, and that having babies is generally a very bad thing to do. I think these things are far more unintuitive than the above intuition being wrong. What do you think?
Image by ღLitle fleaღ via Flickr
|
1f790a8e-c9e1-4b20-b4ff-d9ac4c340379 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Imagine that I, in full view of live television cameras, raised my hands and chanted abracadabra and caused a brilliant light to be born, flaring in empty space beyond my outstretched hands. Imagine that I committed this act of blatant, unmistakeable sorcery under the full supervision of James Randi and all skeptical armies. Most people, I think, would be fairly curious as to what was going on.But now suppose instead that I don’t go on television. I do not wish to share the power, nor the truth behind it. I want to keep my sorcery secret. And yet I also want to cast my spells whenever and wherever I please. I want to cast my brilliant flare of light so that I can read a book on the train—without anyone becoming curious. Is there a spell that stops curiosity?Yes indeed! Whenever anyone asks “How did you do that?” I just say “Science!”It’s not a real explanation, so much as a curiosity-stopper. It doesn’t tell you whether the light will brighten or fade, change color in hue or saturation, and it certainly doesn’t tell you how to make a similar light yourself. You don’t actually know anything more than you knew before I said the magic word. But you turn away, satisfied that nothing unusual is going on.Better yet, the same trick works with a standard light switch.Flip a switch and a light bulb turns on. Why?In school, one is taught that the password to the light bulb is “Electricity!” By now, I hope, you’re wary of marking the light bulb “understood” on such a basis. Does saying “Electricity!” let you do calculations that will control your anticipation of experience? There is, at the least, a great deal more to learn.1If you thought the light bulb was scientifically inexplicable, it would seize the entirety of your attention. You would drop whatever else you were doing, and focus on that light bulb.But what does the phrase “scientifically explicable” mean? It means that someone else knows how the light bulb works. When you are told the light bulb is “scientifically explicable,” you don’t know more than you knew earlier; you don’t know whether the light bulb will brighten or fade. But because someone else knows, it devalues the knowledge in your eyes. You become less curious.Someone is bound to say, “If the light bulb were unknown to science, you could gain fame and fortune by investigating it.” But I’m not talking about greed. I’m not talking about career ambition. I’m talking about the raw emotion of curiosity—the feeling of being intrigued. Why should your curiosity be diminished because someone else, not you, knows how the light bulb works? Is this not spite? It’s not enough for you to know; other people must also be ignorant, or you won’t be happy?There are goods that knowledge may serve besides curiosity, such as the social utility of technology. For these instrumental goods, it matters whether some other entity in local space already knows. But for my own curiosity, why should it matter?Besides, consider the consequences if you permit “Someone else knows the answer” to function as a curiosity-stopper. One day you walk into your living room and see a giant green elephant, seemingly hovering in midair, surrounded by an aura of silver light.“What the heck?” you say.And a voice comes from above the elephant, saying,Somebody already knows why this elephant is here.“Oh,” you say, “in that case, never mind,” and walk on to the kitchen.I don’t know the grand unified theory for this universe’s laws of physics. I also don’t know much about human anatomy with the exception of the brain. I couldn’t point out on my body where my kidneys are, and I can’t recall offhand what my liver does.2Should I, so far as curiosity is concerned, be more intrigued by my ignorance of the ultimate laws of physics, than the fact that I don’t know much about what goes on inside my own body?If I raised my hands and cast a light spell, you would be intrigued. Should you be any less intrigued by the very fact that I raised my hands? When you raise your arm and wave a hand around, this act of will is coordinated by (among other brain areas) your cerebellum. I bet you don’t know how the cerebellum works. I know a little—though only the gross details, not enough to perform calculations . . . but so what? What does that matter, if you don’t know? Why should there be a double standard of curiosity for sorcery and hand motions?Look at yourself in the mirror. Do you know what you’re looking at? Do you know what looks out from behind your eyes? Do you know what you are? Some of that answer Science knows, and some of it Science does not. But why should that distinction matter to your curiosity, if you don’t know?Do you know how your knees work? Do you know how your shoes were made? Do you know why your computer monitor glows? Do you know why water is wet?The world around you is full of puzzles. Prioritize, if you must. But do not complain that cruel Science has emptied the world of mystery. With reasoning such as that, I could get you to overlook an elephant in your living room.1 Physicists should ignore this paragraph and substitute a problem in evolutionary theory, where the substance of the theory is again in calculations that few people know how to perform.2 I am not proud of this. Alas, with all the math I need to study, I’m not likely to learn anatomy anytime soon."
What you see above is a segment from a LessWrong user's post.
Can you explain what this text is saying as if I've never finished my first 2 years of college?
The text is essentially discussing the concept of curiosity and knowledge, and how we often dismiss things as uninteresting or not worth investigating because we assume they are already understood by someone else, usually by science.
The author starts by presenting a hypothetical scenario where they perform an act of magic, creating a light out of nowhere. If they did this in public, people would be amazed and curious. But if they simply said they did it through "science", most people would accept it and stop being curious, even though they don't actually understand how it was done.
The author then uses the example of a light bulb. We use light bulbs every day, and most of us don't know how they work. But because we know that science understands it, we don't feel curious about it. The author argues that this is a problem. Just because someone else understands something, it doesn't mean we shouldn't be curious about it.
The author further illustrates this point with a humorous example of a giant green elephant appearing in your living room. If a voice told you that someone else already understands why the elephant is there, would you stop being curious about it? Probably not.
The author then points out that there are many things we don't understand about our own bodies and the world around us, but we often don't feel curious about them because we know that science or someone else understands them. The author argues that this shouldn't stop our curiosity.
In summary, the text is a call to maintain our curiosity and desire for understanding, even in a world where we often assume that everything is already understood by someone else.
How might our understanding of the world change if we maintained our curiosity about everyday things, like how a light bulb works, rather than accepting that someone else understands it?
Maintaining curiosity about everyday things could significantly change our understanding of the world in several ways:
1. **Deepened Knowledge:** We would have a deeper, more detailed understanding of the world around us. Knowing how things work on a fundamental level gives us a greater appreciation and understanding of the complexity and beauty of the world.
2. **Innovation and Problem-Solving:** With a deeper understanding of how things work, we might be better equipped to innovate and solve problems. For instance, understanding how a light bulb works might inspire ideas for more efficient or sustainable lighting solutions.
3. **Critical Thinking:** Continual curiosity encourages critical thinking. Rather than accepting information at face value, we would be more inclined to question, investigate, and verify, leading to a more accurate and nuanced understanding of the world.
4. **Lifelong Learning:** If we stay curious, we become lifelong learners. The world is rapidly changing, and what we know today might not apply tomorrow. By continually questioning and learning, we can adapt to new information and changes more effectively.
5. **Empathy and Connection:** Understanding more about the world can also lead to greater empathy and connection with others. For instance, understanding the science behind climate change could lead to more empathetic views towards those affected by it and could inspire action to address the issue.
In essence, maintaining curiosity could lead to a more informed, innovative, and empathetic society.
Reflecting on the benefits of maintaining curiosity about everyday things, what potential drawbacks or challenges might arise from this approach?
While maintaining curiosity about everyday things has many benefits, it can also present some challenges:
1. **Time Consumption:** Deeply exploring how everything works can be time-consuming. With countless things around us, it may not be feasible to understand everything in detail, especially when balancing other responsibilities.
2. **Information Overload:** With the vast amount of information available, it's possible to experience information overload. This can lead to confusion, stress, and difficulty in making decisions.
3. **Analysis Paralysis:** The desire to understand everything can lead to overthinking and getting stuck in analysis, which can hinder decision-making and action.
4. **Perfectionism:** Constant curiosity could lead to perfectionism, where one might feel nothing is understood well enough, leading to dissatisfaction or frustration.
5. **Distraction from Goals:** If not managed well, curiosity can distract from primary goals. For example, spending time understanding how a microwave works when you're supposed to be studying for an exam might not be the best use of time.
6. **Misinformation and Misinterpretation:** Without proper guidance or expertise, one might misunderstand or misinterpret information, leading to incorrect conclusions.
While these challenges exist, they don't negate the value of curiosity. The key is to balance curiosity with practicality, ensuring it enhances our lives and understanding of the world without causing unnecessary stress or distraction.
Considering the potential drawbacks of maintaining curiosity about everyday things, how can one effectively manage their curiosity to avoid information overload and distraction from primary goals?
Managing curiosity effectively is key to reaping its benefits without succumbing to potential drawbacks. Here are some strategies to do so:
1. **Prioritize Your Curiosity:** Not all curiosities are created equal. Some are more relevant to your life, interests, or goals than others. Prioritize learning about things that are most relevant or intriguing to you.
2. **Set Learning Goals:** Define what you want to learn and set specific, achievable goals. This will give your curiosity direction and prevent you from getting lost in an endless sea of information.
3. **Time Management:** Allocate specific time for exploring your curiosities. This could be a set time each day or week. This way, your learning won't interfere with your other responsibilities.
4. **Use Reliable Sources:** To avoid misinformation, use reliable and credible sources of information. This will ensure that the time you spend learning is productive and valuable.
5. **Practice Mindfulness:** Be mindful of your current task. If you find yourself getting distracted by a curiosity, note it down to explore later and refocus on your task.
6. **Learn to Let Go:** Accept that you can't know everything. If a particular line of inquiry isn't leading anywhere, or is taking up too much time with little reward, it's okay to let it go.
7. **Seek Guidance:** If you're delving into a complex topic, don't hesitate to seek guidance from experts or mentors. They can help you navigate the information and understand it correctly.
By implementing these strategies, you can feed your curiosity without it leading to information overload or distracting you from your primary goals. |
ba1b50f2-d9b2-49c0-b1ca-fb541dff3783 | trentmkelly/LessWrong-43k | LessWrong | Contra Yudkowsky on AI Doom
Eliezer Yudkowsky predicts doom from AI: that humanity faces likely extinction in the near future (years or decades) from a rogue unaligned superintelligent AI system. Moreover he predicts that this is the default outcome, and AI alignment is so incredibly difficult that even he failed to solve it.
EY is an entertaining and skilled writer, but do not confuse rhetorical writing talent for depth and breadth of technical knowledge. I do not have EY's talents there, or Scott Alexander's poetic powers of prose. My skill points instead have gone near exclusively towards extensive study of neuroscience, deep learning, and graphics/GPU programming. More than most, I actually have the depth and breadth of technical knowledge necessary to evaluate these claims in detail.
I have evaluated this model in detail and found it substantially incorrect and in fact brazenly naively overconfident.
Intro
Even though the central prediction of the doom model is necessarily un-observable for anthropic reasons, alternative models (such as my own, or moravec's, or hanson's) have already made substantially better predictions, such that EY's doom model has low posterior probability.
EY has espoused this doom model for over a decade, and hasn't updated it much from what I can tell. Here is the classic doom model as I understand it, starting first with key background assumptions/claims:
1. Brain inefficiency: The human brain is inefficient in multiple dimensions/ways/metrics that translate into intelligence per dollar; inefficient as a hardware platform in key metrics such as thermodynamic efficiency.
2. Mind inefficiency or human incompetence: In terms of software he describes the brain as an inefficient complex "kludgy mess of spaghetti-code". He derived these insights from the influential evolved modularity hypothesis as popularized in ev pysch by Tooby and Cosmides. He boo-hooed neural networks, and in fact actively bet against them in actions by hiring researchers trained in abstra |
161c5be5-0343-4996-9823-baec79641ecf | trentmkelly/LessWrong-43k | LessWrong | The boat
> A short science fiction story about our ancestors and the ethical responsibility we have towards them.
Old Ana's legs fought against her now, but still she led her granddaughter up the mountain path. The girl's torch made shadows dance on the rocks. The river sang in the distance – the same river that had shown Ana her great making many moons ago.
"Look, Kaya. When I was young like you, I watched the fallen trees in water. They floated, always. One day I saw a big log and thought – what if we made it hollow, like a nest?"
Kaya knew this tale well – how her grandmother made the first boat. How the people of their tribe laughed and said her head was touched by bad spirits. How she worked through many moons, scraping out the old tree's heart with bone tools. How their mouths opened wide when they saw that in this hollow tree, they could ride the river.
"But listen, little one," – Ana stopped to draw breath. – "One day our people will make boats that float up there," she pointed to the night sky. "They will go between the bright spots that shine in the dark, will cross the sky river."
Kaya looked at her grandmother with big eyes: "But how?"
"I do not know," – Ana smiled. – "But our people are clever. They will find the way. And maybe..." – she touched her chest where her heart beat – "maybe one day they will be so clever they can bring back to life those who have long gone. They'll read our traces as I read animal tracks in the snow. With the traces, they'll track down our spirits among the sand grains."
At last, they came to the cave. Inside, the torch showed the walls of their ancestors – hundreds of hands marked in earth-red.
"Now it's your turn, Kaya," – Ana took out a small pot of red ochre. – "Place your hand next to mine. Let those who come here many, many moons from now see us together, so they don't forget about bringing us back."
The awakening was gentle. Ana opened her eyes to an unfamiliar cave of smooth blue walls. Next to her, Kaya was alr |
d5d2d061-7818-486e-90ed-de58e2651c6e | StampyAI/alignment-research-dataset/blogs | Blogs | Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems
#### Introduction
There has been much recent discussion about AI risk, meaning specifically the potential pitfalls (both short-term and long-term) that AI with improved capabilities could create for society. Discussants include AI researchers such as [Stuart Russell](https://www.cs.berkeley.edu/~russell/research/future/) and [Eric Horvitz and Tom Dietterich](https://medium.com/@tdietterich/benefits-and-risks-of-artificial-intelligence-460d288cccf3), entrepreneurs such as [Elon Musk and Bill Gates](http://lukemuehlhauser.com/musk-and-gates-on-superintelligence-and-fast-takeoff/), and research institutes such as the [Machine Intelligence Research Institute](https://intelligence.org/) (MIRI) and [Future of Humanity Institute](http://www.fhi.ox.ac.uk/research/research-areas/) (FHI); the director of the latter institute, Nick Bostrom, has even written a bestselling [book](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111) on this topic. Finally, ten million dollars in funding have been [earmarked](http://futureoflife.org/grants/large/initial) towards research on ensuring that AI will be safe and beneficial. Given this, I think it would be useful for AI researchers to discuss the nature and extent of risks that might be posed by increasingly capable AI systems, both short-term and long-term. As a PhD student in machine learning and artificial intelligence, this essay will describe my own views on AI risk, in the hopes of encouraging other researchers to detail their thoughts, as well.
For the purposes of this essay, I will define “AI” to be technology that can carry out tasks with limited or no human guidance, “advanced AI” to be technology that performs substantially more complex and domain-general tasks than are possible today, and “highly capable AI” to be technology that can outperform humans in all or almost all domains. As the primary target audience of this essay is other researchers, I have used technical terms (e.g. weakly supervised learning, inverse reinforcement learning) whenever they were useful, though I have also tried to make the essay more generally accessible when possible.
#### Outline
I think it is important to distinguish between two questions. First, does artificial intelligence merit the same degree of engineering safety considerations as other technologies (such as bridges)? Second, does artificial intelligence merit additional precautions, beyond those that would be considered typical? I will argue that the answer is yes to the first, even in the short term, and that current engineering methodologies in the field of machine learning do not provide even a typical level of safety or robustness. Moreover, I will argue that the answer to the second question in the long term is likely also yes — namely, that there are important ways in which highly capable artificial intelligence could pose risks which are not addressed by typical engineering concerns.
The point of this essay is not to be alarmist; indeed, I think that AI is likely to be net-positive for humanity. Rather, the point of this essay is to encourage a discussion about the potential pitfalls posed by artificial intelligence, since I believe that research done now can mitigate many of these pitfalls. Without such a discussion, we are unlikely to understand which pitfalls are most important or likely, and thus unable to design effective research programs to prevent them.
A common objection to discussing risks posed by AI is that it seems somewhat early on to worry about such risks, and the discussion is likely to be more germane if we wait to have it until after the field of AI has advanced further. I think this objection is quite reasonable in the abstract; however, as I will argue below, I think we do have a reasonable understanding of at least some of the risks that AI might pose, that some of these will be realized even in the medium term, and that there are reasonable programs of research that can address these risks, which in many cases would also have the advantage of improving the usability of existing AI systems.
#### Ordinary Engineering
There are many issues related to AI safety that are just a matter of good engineering methodology. For instance, we would ideally like systems that are transparent, modular, robust, and work under well-understood assumptions. Unfortunately, machine learning as a field has not developed very good methodologies for obtaining any of these things, and so this is an important issue to remedy. In other words, I think we should put at least as much thought into building an AI as we do into building a bridge.
Just to be very clear, I do not think that machine learning researchers are bad engineers; looking at any of the open source tools such as [Torch](http://torch.ch/), [Caffe](http://caffe.berkeleyvision.org/), [MLlib](https://spark.apache.org/docs/1.1.0/mllib-guide.html), and others make it clear that many machine learning researchers are also good software engineers. Rather, I think that as a field our methodologies are not mature enough to address the specific engineering desiderata of statistical *models* (in contrast to the *algorithms* that create them). In particular, the statistical models obtained from machine learning algorithms tend to be:
1. **Opaque**: Many machine learning models consist of hundreds of thousands of parameters, making it difficult to understand how predictions are made. Typically, practitioners resort to error analysis examining the covariates that most strongly influence each incorrect prediction. However, this is not a very sustainable long-term solution, as it requires substantial effort even for relatively narrow-domain systems.
2. **Monolithic**: In part due to their opacity, models act as a black box, with no modularity or encapsulation of behavior. Though machine learning systems are often split into pipelines of smaller models, the lack of encapsulation can make these pipelines even harder to manage than a single large model; indeed, since machine learning models are by design optimized for a particular input distribution (i.e. whatever distribution they are trained on), we end up in a situation where “Changing Anything Changes Everything” [1].
3. **Fragile**: As another consequence of being optimized for a particular training distribution, machine learning models can have arbitrarily poor performance when that distribution shifts. For instance, Daumé and Marcu [2] show that a named entity classifier with 92% accuracy on one dataset drops to 58% accuracy on a superficially similar dataset. Though such issues are partially addressed by work on transfer learning and domain adaptation [3], these areas are not very developed compared to supervised learning.
4. **Poorly understood**: Beyond their fragility, understanding when a machine learning model will work is difficult. We know that a model will work if it is tested on the same distribution it is trained on, and have some extensions beyond this case (e.g. based on robust optimization [4]), but we have very little in the way of practically relevant conditions under which a model trained in one situation will work well in another situation. Although they are related, this issue differs from the opacity issue above in that it relates to making predictions about the system’s future behavior (in particular, generalization to new situations), versus understanding the internal workings of the current system.
That these issues plague machine learning systems is likely uncontroversial among machine learning researchers. However, in comparison to research focused on extending capabilities, very little is being done to address them. Research in this area therefore seems particularly impactful, especially given the desire to deploy machine learning systems in increasingly complex and safety-critical situations.
#### Extraordinary Engineering
Does AI merit additional safety precautions, beyond those that are considered standard engineering practice in other fields? Here I am focusing only on the long-term impacts of advanced or highly capable AI systems.
My tentative answer is yes; there seem to be a few different ways in which AI could have bad effects, each of which seems individually unlikely but not implausible. Even if each of the risks identified so far are not likely, (i) the total risk might be large, especially if there are additional unidentified risks, and (ii) the existence of multiple “near-misses” motivates closer investigation, as it may suggest some underlying principle that makes AI risk-laden. In the sequel I will focus on so-called “global catastrophic” risks, meaning risks that could affect a large fraction of the earth’s population in a material way. I have chosen to focus on these risks because I think there is an important difference between an AI system messing up in a way that harms a few people (which would be a legal liability but perhaps should not motivate a major effort in terms of precautions) and an AI system that could cause damage on a global scale. The latter *would* justify substantial precautions, and I want to make it clear that this is the bar I am setting for myself.
With that in place, below are a few ways in which advanced or highly capable AI could have specific global catastrophic risks.
**Cyber-attacks.** There are two trends which taken together make the prospect of AI-aided cyber-attacks seem worrisome. The first trend is simply the increasing prevalence of cyber-attacks; even this year we have seen Russia attack Ukraine, North Korea attack Sony, and China attack the U.S. Office of Personnel Management. Secondly, the “Internet of Things” means that an increasing number of physical devices will be connected to the internet. Assuming that software exists to autonomously control them, many internet-enabled devices such as cars could be hacked and then weaponized, leading to a decisive military advantage in a short span of time. Such an attack could be enacted by a small group of humans aided by AI technologies, which would make it hard to detect in advance. Unlike other weaponizable technology such as nuclear fission or synthetic biology, it would be very difficult to control the distribution of AI since it does not rely on any specific raw materials. Finally, note that even a team with relatively small computing resources could potentially “bootstrap” to much more computing power by first creating a botnet with which to do computations; to date, the largest botnet has spanned 30 million computers and several other botnets have exceeded 1 million.
**Autonomous weapons.** Beyond cyber-attacks, improved autonomous robotics technology combined with ubiquitous access to miniature UAVs (“drones”) could allow both terrorists and governments to wage a particularly pernicious form of remote warfare by creating weapons that are both cheap and hard to detect or defend against (due to their small size and high maneuverability). Beyond direct malicious intent, if autonomous weapons systems or other powerful autonomous systems malfunction then they could cause a large amount of damage.
**Mis-optimization.** A highly capable AI could acquire a large amount of power but pursue an overly narrow goal, and end up harming humans or human value while optimizing for this goal. This may seem implausible at face value, but as I will argue below, it is easier to improve AI *capabilities* than to improve AI *values*, making such a mishap possible in theory.
**Unemployment.** It is already the case that increased automation is decreasing the number of available jobs, to the extent that some economists and policymakers are discussing what to do if the number of jobs is systematically smaller than the number of people seeking work. If AI systems allow a large number of jobs to be automated over a relatively short time period, then we may not have time to plan or implement policy solutions, and there could then be a large unemployment spike. In addition to the direct effects on the people who are unemployed, such a spike could also have indirect consequences by decreasing social stability on a global scale.
**Opaque systems.** It is also already the case that increasingly many tasks are being delegated to autonomous systems, from trades in financial markets to aggregation of information feeds. The opacity of these systems has led to issues such as the 2010 Flash Crash and will likely lead to larger issues in the future. In the long term, as AI systems become increasingly complex, humans may lose the ability to meaningfully understand or intervene in such systems, which could lead to a loss of sovereignty if autonomous systems are employed in executive-level functions (e.g. government, economy).
Beyond these specific risks, it seems clear that, eventually, AI will be able to outperform humans in essentially every domain. At that point, it seems doubtful that humanity will continue to have direct causal influence over its future unless specific measures are put in place to ensure this. While I do not think this day will come soon, I think it is worth thinking now about how we might meaningfully control highly capable AI systems, and I also think that many of the risks posed above (as well as others that we haven’t thought of yet) will occur on a somewhat shorter time scale.
Let me end with some specific ways in which control of AI may be particularly difficult compared to other human-engineered systems:
1. AI may be “agent-like”, which means that the space of possible behaviors is much larger; our intuitions about how AI will act in pursuit of a given goal may not account for this and so AI behavior could be hard to predict.
2. Since an AI would presumably learn from experience, and will likely run at a much faster serial processing speed than humans, its capabilities may change rapidly, ruling out the usual process of trial-and-error.
3. AI will act in a much more open-ended domain. In contrast, our existing tools for specifying the necessary properties of a system only work well in narrow domains. For instance, for a bridge, safety relates to the ability to successfully accomplish a small number of tasks (e.g. not falling over). For these, it suffices to consider well-characterized engineering properties such as tensile strength. For AI, the number of tasks we would potentially want it to perform is large, and it is unclear how to obtain a small number of well-characterized properties that would ensure safety.
4. Existing machine learning frameworks make it very easy for AI to acquire *knowledge*, but hard to acquire *values*. For instance, while an AI’s model of reality is flexibly learned from data, its goal/utility function is hard-coded in almost all situations; an exception is some work on inverse reinforcement learning [5], but this is still a very nascent framework. Importantly, the asymmetry between knowledge (and hence capabilities) and values is fundamental, rather than simply a statement about existing technologies. This is because knowledge is something that is regularly informed by reality, whereas values are only weakly informed by reality: an AI which learns incorrect facts could notice that it makes wrong predictions, but the world might never “tell” an AI that it learned the “wrong values”. At a technical level, while many tasks in machine learning are fully supervised or at least semi-supervised, value acquisition is a weakly supervised task.
In summary: there are several concrete global catastrophic risks posed by highly capable AI, and there are also several reasons to believe that highly capable AI would be difficult to control. Together, these suggest to me that the control of highly capable AI systems is an important problem posing unique research challenges.
#### Long-term Goals, Near-term Research
Above I presented an argument for why AI, in the long term, may require substantial precautionary efforts. Beyond this, I also believe that there is important research that can be done right now to reduce long-term AI risks. In this section I will elaborate on some specific research projects, though my list is not meant to be exhaustive.
1. **Value learning**: In general, it seems important in the long term (and also in the short term) to design algorithms for learning values / goal systems / utility functions, rather than requiring them to be hand-coded. One framework for this is inverse reinforcement learning [5], though developing additional frameworks would also be useful.
2. **Weakly supervised learning**: As argued above, inferring values, in contrast to beliefs, is an at most weakly supervised problem, since humans themselves are often incorrect about what they value and so any attempt to provide fully annotated training data about values would likely contain systematic errors. It may be possible to infer values indirectly through observing human actions; however, since humans often act immorally and human values change over time, current human actions are not consistent with our ideal long-term values, and so learning from actions in a naive way could lead to problems. Therefore, a better fundamental understanding of weakly supervised learning — particularly regarding guaranteed recovery of indirectly observed parameters under well-understood assumptions — seems important.
3. **Formal specification / verification**: One way to make AI safer would be to formally specify desiderata for its behavior, and then prove that these desiderata are met. A major open challenge is to figure out how to meaningfully specify formal properties for an AI system. For instance, even if a speech transcription system did a near-perfect job of transcribing speech, it is unclear what sort of specification language one might use to state this property formally. Beyond this, though there is much existing work in formal verification, it is still extremely challenging to verify large systems.
4. **Transparency**: To the extent that the decision-making process of an AI is transparent, it should be relatively easy to ensure that its impact will be positive. To the extent that the decision-making process is opaque, it should be relatively difficult to do so. Unfortunately, transparency seems difficult to obtain, especially for AIs that reach decisions through complex series of serial computations. Therefore, better techniques for rendering AI reasoning transparent seem important.
5. **Strategic assessment and planning**: Better understanding of the likely impacts of AI will allow a better response. To this end, it seems valuable to map out and study specific concrete risks; for instance, better understanding ways in which machine learning could be used in cyber-attacks, or forecasting the likely effects of technology-driven unemployment, and determining useful policies around these effects. It would also be clearly useful to identify additional plausible risks beyond those of which we are currently aware. Finally, thought experiments surrounding different possible behaviors of advanced AI would help inform intuitions and point to specific technical problems. Some of these tasks are most effectively carried out by AI researchers, while others should be done in collaboration with economists, policy experts, security experts, etc.
The above constitute at least five concrete directions of research on which I think important progress can be made today, which would meaningfully improve the safety of advanced AI systems and which in many cases would likely have ancillary benefits in the short term, as well.
#### Related Work
At a high level, while I have implicitly provided a program of research above, there are other proposed research programs as well. Perhaps the earliest proposed program is from MIRI [6], which has focused on AI alignment problems that arise even in simplified settings (e.g. with unlimited computing power or easy-to-specify goals) in hopes of later generalizing to more complex settings. The Future of Life Institute (FLI) has also published a research priorities document [7, 8] with a broader focus, including non-technical topics such as regulation of autonomous weapons and economic shifts induced by AI-based technologies. I do not necessarily endorse either document, but think that both represent a big step in the right direction. Ideally, MIRI, FLI, and others will all justify why they think their problems are worth working on and we can let the best arguments and counterarguments rise to the top. This is already happening to some extent [9, 10, 11] but I would like to see more of it, especially from academics with expertise in machine learning and AI [12, 13].
In addition, several specific arguments I have advanced are similar to those already advanced by others. The issue of AI-driven unemployment has been studied by Brynjolfsson and McAfee [14], and is also discussed in the FLI research document. The problem of AI pursuing narrow goals has been elaborated through Bostrom’s “paperclipping argument” [15] as well as the orthogonality thesis [16], which states that beliefs and values are independent of each other. While I disagree with the orthogonality thesis in its strongest form, the arguments presented above for the difficulty of value learning can in many cases reach similar conclusions.
Omohundro [17] has argued that advanced agents would pursue certain instrumentally convergent drives under almost any value system, which is one way in which agent-like systems differ from systems without agency. Good [18] was the first to argue that AI capabilities could improve rapidly. Yudkowsky has argued that it would be easy for an AI to acquire power given few initial resources [19], though his example assumes the creation of advanced biotechnology.
Christiano has argued for the value of transparent AI systems, and proposed the “advisor games” framework as a potential operationalization of transparency [20].
#### Conclusion
To ensure the safety of AI systems, additional research is needed, both to meet ordinary short-term engineering desiderata as well as to make the additional precautions specific to highly capable AI systems. In both cases, there are clear programs of research that can be undertaken today, which in many cases seem to be under-researched relative to their potential societal value. I therefore think that well-directed research towards improving the safety of AI systems is a worthwhile undertaking, with the additional benefit of motivating interesting new directions of research.
#### Acknowledgments
Thanks to Paul Christiano, Holden Karnofsky, Percy Liang, Luke Muehlhauser, Nick Beckstead, Nate Soares, and Howie Lempel for providing feedback on a draft of this essay.
#### References
[1] D. Sculley, et al. [*Machine Learning: The High-Interest Credit Card of Technical Debt*](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43146.pdf). 2014.
[2] Hal Daumé III and Daniel Marcu. Domain adaptation for statistical classifiers. *Journal of Artificial Intelligence Research*, pages 101–126, 2006.
[3] Sinno J. Pan and Qiang Yang. A survey on transfer learning. *IEEE Transactions on Knowledge and Data Engineering*, 22(10):1345–1359, 2010.
[4] Dimitris Bertsimas, David B. Brown, and Constantine Caramanis. Theory and applications of robust optimization. *SIAM Review*, 53(3):464–501, 2011.
[5] Andrew Ng and Stuart Russell. Algorithms for inverse reinforcement learning. In *International Conference in Machine Learning*, pages 663–670, 2000.
[6] Nate Soares and Benja Fallenstein. [*Aligning Superintelligence with Human Interests: A Technical Research Agenda*](https://intelligence.org/files/TechnicalAgenda.pdf). 2014.
[7] Stuart Russell, Daniel Dewey, and Max Tegmark. [*Research priorities for robust and beneficial artificial intelligence*](http://futureoflife.org/static/data/documents/research_priorities.pdf). 2015.
[8] Daniel Dewey, Stuart Russell, and Max Tegmark. [*A survey of research questions for robust and beneficial AI*](http://futureoflife.org/static/data/documents/research_survey.pdf). 2015.
[9] Paul Christiano. [*The Steering Problem*](https://medium.com/ai-control/the-steering-problem-a3543e65c5c4). 2015.
[10] Paul Christiano. [*Stable self-improvement as an AI safety problem*](https://medium.com/ai-control/stable-self-improvement-as-an-ai-safety-problem-46e2a44e73e). 2015.
[11] Luke Muehlhauser. [*How to study superintelligence strategy*](http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/). 2014.
[12] Stuart Russell. [*Of Myths and Moonshine*](http://edge.org/conversation/the-myth-of-ai#26015). 2014.
[13] Tom Dietterich and Eric Horvitz. [*Benefits and Risks of Artificial Intelligence*](https://medium.com/@tdietterich/benefits-and-risks-of-artificial-intelligence-460d288cccf3). 2015.
[14] Erik Brynjolfsson and Andrew McAfee. *The second machine age: work, progress, and prosperity in a time of brilliant technologies*. WW Norton & Company, 2014.
[15] Nick Bostrom (2003). [*Ethical Issues in Advanced Artificial Intelligence*](http://www.nickbostrom.com/ethics/ai.html). *Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence*.
[16] Nick Bostrom. “The superintelligent will: Motivation and instrumental rationality in advanced artificial agents.” *Minds and Machines* 22.2 (2012): 71-85.
[17] Stephen M. Omohundro (2008). [*The Basic AI Drives*](http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/). *Frontiers in Artificial Intelligence and Applications* (IOS Press).
[18] Irving J. Good. “Speculations concerning the first ultraintelligent machine.” *Advances in computers* 6.99 (1965): 31-83.
[19] Eliezer Yudkowsky. “Artificial intelligence as a positive and negative factor in global risk.” *Global catastrophic risks* 1 (2008): 303.
[20] Paul Christiano. [*Advisor Games*](https://medium.com/ai-control/advisor-games-b33382fef68c). 2015. |
aef6e5c1-4e2f-4437-87ca-e046868a34bf | trentmkelly/LessWrong-43k | LessWrong | Covid 8/11/22: The End Is Never The End
On Tuesday morning as I was grabbing croissants, the man in front of me turned around and told me to back away. ‘Because of the pandemic.’ He was the only one in the bakery wearing a mask, yet in every way he gave off the clear vibe that I, not he, was the one being unreasonable. There was a time he would have been right. There was even a time I was the person being crowded and upset about it.
Now the pandemic, for this kind of purpose, is over. Now I had to tell myself to let the Wookie win. There was space behind me, in no way was it Worth It to do anything but put my hands up and walk away.
I remember it because it is in such sharp contrast to the everyone and everything else. Covid over. People sick sometimes, sure, and yes I missed a 1-on-1 meeting because of someone’s Covid exposure. Eric Topol continues to warn us how bad things are and not to get numb, despite it being way too late for that. Still.
I was going to say that this week marks the end of the Covid posts being majority Covid content. Then it turned out that this week’s is still largely Covid content, as it snuck up on us with a lot of content this week (none of which is alarming). So the point of transition isn’t quite there, but we will get there soon. I predict slash hope there will only be a handful more of these posts with this high a ratio of Covid to non-Covid news.
The biggest development this week in things-relevant-to-my-interests was that PredictIt’s no action letter was revoked, effectively forcing it to shut down its markets in February 2023 unless something changes by then. The most credible story is that this is new prediction market website Kalshi performing a hit job to take out the competition, but I have not had the time to look into it yet. Whether or not that was the reason, the shut-down is quite bad, speaks quite badly of everyone who let this happen, and is something worth fighting against if there is a practical way to do that. PredictIt was far from perfect, but it was |
851d8c7b-1a73-4f9e-b2eb-98dd8ecacd28 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Orange County Atheist Meetup Thursday September 8
Discussion article for the meetup : Orange County Atheist Meetup Thursday September 8
WHEN: 08 September 2011 07:30:00PM (-0700)
WHERE: 18542 MacArthur Blvd Irvine, CA 92612
Once again we are going adjust the weekly Irvine meetup to join the Orange County Atheists for their September Meetup. The main event starts at 7:30 at the upstairs table at the IHOP. Some members also show up for drinks and appetizers at the El Torito across the street at 6:30. They would appreciate if you RSVP in the comments of the linked announcement.
Discussion article for the meetup : Orange County Atheist Meetup Thursday September 8 |
5e154ff2-5c58-49c1-9d95-9cf026b74bc3 | trentmkelly/LessWrong-43k | LessWrong | A FLI postdoctoral grant application: AI alignment via causal analysis and design of agents
TL;DR: This is a proposal I wrote for the Future of Life postdoctoral grants, on the topic of AI Safety. The topic of this proposal is being able to model human preferences in a causal model. This is somewhat similar to the agenda of value learning, but more related to that one of causal incentive design https://causalincentives.com/, which I think is interesting enough that more people should be aware of its existence. I think there might be ideas worth considering, but given that I am no expert in the topic of causal models there might be important technical or strategic errors, which I would be glad to know about. In other words, this is more a structured compilation of half-baked research ideas, rather than well-grounded research projects. But I think they might be worth considering nevertheless.
Introduction
The most straightforward approaches to intent AI alignment usually consist of what is called value learning: task the agent with the problem of simultaneously figuring out what the human wants, and optimizing for it. On the other hand, providing additional causal structure to the problem might improve the robustness of the system to new environments, make it more interpretable, and able to learn from scarce human interactions.
The causal incentives agenda [1] is the AI Safety agenda based on modeling and analyzing the incentives of an agent to ensure that it is aligned with human preferences. It leverages the framework of causal reasoning and inference [2] and was originally conceived as a way to study which causal models do not incentivize reward function tampering (the agent modifying its reward function to attain higher reward) and reward function input tampering (the agent self-deceiving to get a higher reward) [3, 4]. The latter problem also includes the system manipulating a human that provides feedback to make him/her more predictable and get a higher reward. Overall, the aim of this agenda is that since we should not expect to be able to box powe |
0bc15d5e-c90c-4153-8b07-b850d45792a6 | trentmkelly/LessWrong-43k | LessWrong | April links
None |
b42a730f-6036-48f6-997c-41a625dcc59c | trentmkelly/LessWrong-43k | LessWrong | Why agents are powerful
[Written for Blog Post Day. Not super happy with it, it’s too rambly and long, but I’m glad it exists.]
Here are some questions I think this theory of agency can answer:
* What are agents?
* Why should we expect AI agents to be useful for various important tasks?
* Why should we think agentic mesa-optimizers may arise for some data+reward-signals that weren’t designed explicitly to produce them?
* Why should we think humans+AI tools will eventually be outcompeted by AI agents?
* Why should we expect, on priors, mildly superhuman AI agents to be powerful enough to take over the world, if they wanted to?
What are agents?
Earlier in “Agents as P2B chain reactions” I defined agents as successful P2B feedback loops—a learner algorithm and a planner algorithm hooked up together, the planner plans to plan better and thus outputs actions that result in getting more useful data into the learner, more resources for the planner to plan with… then the process repeats… Like how “fire” is a successful feedback loop in which heat+plasma+fuel come together to produce more heat+plasma, and envelop more fuel, if any is nearby.
Then I asked: Is the planner algorithm strictly necessary here? Could you get something similar to a P2B feedback loop, but built around something other than a planner? I explored this question indirectly in “Gradations of Agency.” The answer seems to be “sorta.” You can have e.g. a Level 3 system that doesn’t do any planning and yet behaves remarkably similar to a P2B feedback loop in many contexts, due to imitating the success of others. Such a system would tend to underperform P2B in contexts where there aren’t good examples to imitate.
We could stick to the original definition and say: Agents are P2B feedback loops, so, level 4 and above.
But I think it’s probably better to be a bit more galaxy brain and say: the key thing is convergent instrumental resource feedback loops and that thinking about P2B is a stepping stone which helps us see why su |
1047faed-a35a-4289-9631-d5c598ce661e | trentmkelly/LessWrong-43k | LessWrong | A Layman's Explanation of "Safely Interruptible Agents"
|
e9c28482-6ef0-45a1-9f7e-5a44cfe9b69e | trentmkelly/LessWrong-43k | LessWrong | Safety-First Agents/Architectures Are a Promising Path to Safe AGI
Summary
Language model agents (LMAs) like AutoGPT have promising safety characteristics compared to traditional conceptions of AGI. The LLMs they are composed of plan, think, and act in highly transparent and correctable ways, although not maximally so, and it is unclear whether safety will increase or decrease in the future.
Regardless of where commercial trends will take us, it is possible to develop safer versions of LMAs, as well as other "cognitive architectures" that are not dependent on LLMs. Notable areas of potential safety work include effectively separating and governing how agency, cognition, and thinking arise in cognitive architectures.
If needed, safety-first cognitive architectures (SCAs) can match or exceed the performance of less safe systems, and can be compatible with many ways AGI may develop. This makes SCAs a promising path towards influencing and ensuring safe AGI development in everything from very-short-timeline (e.g. LMAs are the first AGIs) to long-timeline scenarios (e.g. future AI models are incorporated into or built explicitly for an existing SCA).
Although the SCA field has begun emerging over the past year, awareness seems low, and the field seems underdeveloped. I wanted to write this article so that more people are aware of what's happening with SCAs, document my thinking on the SCA landscape and promising areas of work, and advocate for more people, funding, and research going towards SCAs.
Background
Language model agents (LMAs), systems that integrate thinking performed by large language models (LLMs) prompting themselves in loops, have exploded in popularity since the release of AutoGPT at the end of March 2023.
Even before AutoGPT, the related field that I call "safety-first cognitive architectures" (SCAs) emerged in the AI safety community. Most notably, in 2022, Eric Drexler formulated arguments for the safety of such systems and developed a high-level design for an SCA called the open agency model. Shortly thereafter |
56360560-c46b-4bb3-9073-430a6929c155 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
[[THIRD EDIT: Thanks so much for all of the questions and comments! There are still a few more I'd like to respond to, so I may circle back to them a bit later, but, due to time constraints, I'm otherwise finished up for now. Any further comments or replies to anything I've written are also still be appreciated!]]
Hi!
I'm Ben Garfinkel, a researcher at the Future of Humanity Institute. I've worked on a mixture of topics in AI governance and in the somewhat nebulous area FHI calls "[macrostrategy](https://www.fhi.ox.ac.uk/research/research-areas/)", including: [the long-termist case](https://www.effectivealtruism.org/articles/ea-global-2018-how-sure-are-we-about-this-ai-stuff/) for prioritizing work on AI, plausible near-term [security issues](https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf) associated with AI, [surveillance](https://www.effectivealtruism.org/articles/ea-global-2018-the-future-of-surveillance/) and [privacy issues](https://benmgarfinkel.wordpress.com/2020/03/09/privacy-optimism-2/), the [balance between offense and defense](https://www.tandfonline.com/doi/full/10.1080/01402390.2019.1631810), and the [obvious impossibility](https://arxiv.org/abs/1703.10987) of building machines that are larger than humans.
80,000 Hours recently released [a long interview](https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/) I recorded with Howie Lempel, about a year ago, where we walked through various long-termist arguments for prioritizing work on AI safety and AI governance relative to other cause areas. The longest and probably most interesting stretch explains why I no longer find the central argument in *Superintelligence*, and in related writing, very compelling. At the same time, I do continue to regard AI safety and AI governance as high-priority research areas.
(These two slide decks, which were linked in the show notes, give more condensed versions of my views: "[Potential Existential Risks from Artificial Intelligence](https://docs.google.com/presentation/d/1qHIi7Swd8LNwyyvoQUcvlRQsjrWAuIA_QY2E0wM0IfQ/edit#slide=id.p)" and "[Unpacking Classic Arguments for AI Risk](https://docs.google.com/presentation/d/1sHA3rwTHLIxyZPQObcw8mbNo2jffswH8uYV7N5PwqZE/edit#slide=id.g6230db10d0_0_305)." This [piece of draft writing](https://docs.google.com/document/d/1lgcBauWyYk774gBwKn8P_h8_wL9vLLiWBr6JMmEd_-I/edit) instead gives a less condensed version of my views on classic "fast takeoff" arguments.)
Although I'm most interested in questions related to AI risk and cause prioritization, feel free to ask me anything. I'm likely to eventually answer most questions that people post this week, on an as-yet-unspecified schedule. You should also feel free just to use this post as a place to talk about the podcast episode: there was a thread [a few days ago](https://forum.effectivealtruism.org/posts/m2MyAfsDAYE4bFdrN/the-80-000-hours-podcast-should-host-debates) suggesting this might be useful. |
791765fd-4ef9-43d8-abba-8e4b94a49ecc | trentmkelly/LessWrong-43k | LessWrong | We Should Introduce Ourselves Differently
I told an intelligent, well-educated friend about Less Wrong, so she googled, and got "Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking." and gave up immediately because she'd never heard of the biases.
While hers might not be the best possible attitude, I can't see that we win anything by driving people away with obscure language.
Possible improved introduction: "Less Wrong is a community for people who would like to think more clearly in order to improve their own and other people's lives, and to make major disasters less likely." |
ea7c8059-3196-4c10-b71e-74d22f37b24d | trentmkelly/LessWrong-43k | LessWrong | Economist Irene Ng on Market design, entrepreneurship, and innovation.
At the Futurati Podcast we recently recorded an interview with Irene Ng on market design, modularity and legibility in systems of behavior, innovation, game theory, privacy, the blockchain, and a few other subjects.
Though it's far removed from the AI alignment and Bayesian epistemology that LWers normally discuss, I know that there's enough interest in economics and related disciplines for me to think it'd be worth posting the episode and the show notes from it.
Show Notes
* Irene began her journey as an entrepreneur in the 1990's at age of 26, where she successfully turned around a cruise ship company before selling it at 33.
* After this she enrolled in a PhD program to study economics, industrial design, and market design.
* 'Market design', which occupied much of our conversation, centers around designing systems of transactions which are safe and efficient.
* These can often be used by public sector entities as a way of handling problems when public funds don't do the trick.
* We spent some time discussing the connection to repugnant markets. An example might be a market for kidneys.
* On the one hand, we want to do whatever we can to shorten the time required to get someone a kidney. On the other, we don't want to incentivize people to steal each other's kidneys.
* Dr. Ng cites the work of other economists in successfully using market design to create markets which solve this problem without repugnant side effects.
* I take a step back and ask Dr. Ng what it even means to design a market. Most economists treat markets as spontaneous orders which emerge when humans engage in certain kinds of activity, so how can one be designed.
* Dr. Ng's answer centers around the idea of transactions, which can involve exchanges of money or favors or almost anything else.
* Transactions can be thick or thin, and in a well-designed market there will be thin transaction points where new resources can enter. This depends on the modularity of the system.
* An exa |
20021448-1a48-477c-b8e2-78d10d65ef68 | trentmkelly/LessWrong-43k | LessWrong | Can you control the past?
(Cross-posted from Hands and Cities. Lots of stuff familiar to LessWrong folks interested in decision theory.)
I think that you can “control” events you have no causal interaction with, including events in the past, and that this is a wild and disorienting fact, with uncertain but possibly significant implications. This post attempts to impart such disorientation.
My main example is a prisoner’s dilemma between perfect deterministic software twins, exposed to the exact same inputs. This example that shows, I think, that you can write on whiteboards light-years away, with no delays; you can move the arm of another person, in another room, just by moving your own. This, I claim, is extremely weird.
My topic, more broadly, is the implications of this weirdness for the theory of instrumental rationality (“decision theory”). Many philosophers, and many parts of common sense, favor causal decision theory (CDT), on which, roughly, you should pick the action that causes the best outcomes in expectation. I think that deterministic twins, along with other examples, show that CDT is wrong. And I don’t think that uncertainty about “who are you,” or “where your algorithm is,” can save it.
Granted that CDT is wrong, though, I’m not sure what’s right. The most famous alternative is evidential decision theory (EDT), on which, roughly, you should choose the action you would be happiest to learn you had chosen. I think that EDT is more attractive (and more confusing) than many philosophers give it credit for, and that some putative counterexamples don’t withstand scrutiny. But EDT has problems, too.
In particular, I suspect that attractive versions of EDT (and perhaps, attractive attempts to recapture the spirit of CDT) require something in the vicinity of “following the policy that you would’ve wanted yourself to commit to, from some epistemic position that ‘forgets’ information you now know.” I don’t think that the most immediate objection to this – namely, that it implies cho |
fe52b147-4a58-4535-9b2d-65d44443d57c | trentmkelly/LessWrong-43k | LessWrong | Shrödinger’s lottery or: Why you are going to live forever
[I wrote this for my work lunch group after listening to David Wallace on the many-worlds theory of quantum mechanics and its implications. It seemed like a good first post here after a long time lurking.]
Shrödinger’s cat is the famous thought experiment that illustrates the essential weirdness of quantum mechanics. That an object in a superposition simultaneously exists in multiple states that only resolve when an observer measures the object. The many-worlds interpretation of quantum mechanics bites the bullet of quantum measurement by claiming the world ‘splits’ whenever a quantum state is measured, implying there exists a universe where every physical quantum state exists. In this interpretation there is a universe where Shrödinger’s cat is alive and a separate universe where his cat is dead. This continuous branching can be thought of as just another dimension of expansion, no different from how time and space are continuing toward infinity. The key difference being our awareness of the universe depends on the resolved quantum configuration so we cannot directly observe expansion on this dimension as we can with space and time.
What does it mean for your life if the many-worlds interpretation turned out to be correct? Imagine a version of Shrödinger’s cat but instead of the cat, 1023 people signed up for a peculiar quantum lottery. Each participant promises their savings and belongings to the sole survivor of the lottery (assume everyone has about the same moderate wealth). As believers of the many-worlds interpretation of quantum mechanics, the lottery participants are happy to enter this lottery where only one of them will survive because of how the drawing mechanism works. On the day of the drawing each participant enters a chamber numbered 0-1024. The drawing proceeds as follows:
* Ten electrons in superposition are assigned bits 1-10
* The spin of each electron is measured in sequence.
* If the measurement of an electron results in a spin up sta |
940ed9f1-1ce6-459c-a27d-07e7ac919cc0 | trentmkelly/LessWrong-43k | LessWrong | Sleeping on Stage
When I think of the ideal place for sleeping it's something like, peaceful, dark, and quiet. The chaotic bright loud stage of a contra dance is pretty far from this, and yet generations of kids have curled up behind their parents and fallen asleep:
It's a good idea to make them a nest they can crawl into when they're feeling sleepy:
A keyboard case can work well, especially a fuzzy one:
It's worth thinking about what this will be before you leave the house so you can bring something comfortable.
Pictured: much more bedding than required for this purpose, because this is the car packed for vacation. I don't have a good picture of the car packed for playing dances.
If you forget (or believe them when they insist they'll set up their bed when they're ready for it) they might go to sleep less comfortably:
Or less comfortably:
Or much less comfortably:
Headphones and a story tape can help:
You want to make sure they've used the bathroom and, ideally, brushed their teeth and put on whatever clothes they want to sleep in, since if everything goes well they'll be asleep until the next morning. When I'm lucky, which is about 75% of the time:
* They sleep until the end of the dance
* I strike my gear and pack the car
* I carry them to the car
* We drive home or to our hosts'
* I carry them to their bed
all without them waking up.
I asked them what they thought:
> Lily: It's not my favorite thing? But it's ok, especially if I have an audiobook to drown out the noise.
> Anna: I like it.
I don't think I would have believed this worked if I hadn't seen it, but it's reasonably common so it must work for a lot of families.
Comment via: facebook, mastodon |
c481890c-796f-4a85-9f98-a14a24bedf90 | StampyAI/alignment-research-dataset/blogs | Blogs | Import AI 326:Chinese AI regulations; Stability's new LMs If AI is fashionable in 2023, then what will be fashionable in 2024?
Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe.
[Subscribe now](https://importai.substack.com/subscribe)
**Want better AI policy? Figure out how to measure what you care about:***…Tim O'Reilly lists some simple ways for better AI governance…*If we want to govern AI systems, we need to be able to measure and assess their properties, says Tim O'Reilly. "Alignment will be impossible without robust institutions for disclosure and auditing," he writes. "If we want prosocial outcomes, we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved".
**Measurement rules everything around me:** O'Reilly's basic idea is that AI regulation comes from measuring AI systems for positives and negatives and then designing regulatory frameworks around that. The best way to start here is for regulators to draw on what AI companies themselves do.
"Regulators should start by formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems," he writes. "Regulations should first focus on disclosure of current monitoring and best practices. In that way, companies, regulators, and guardians of the public interest can learn together how these systems work, how best they can be managed, and what the systemic risks really might be."
Import AI is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
**One thing measurement doesn't help with:** There is one area of AI policy where measurement isn't necessarily going to be that helpful: "with small LLMs that can be run on a laptop, there is a risk of an irreversible and uncontrollable proliferation of technologies that are still poorly understood," he writes.
**Why this matters:** You can't manage what you can't measure: The longer AI policy runs on rhetorical soundbites and less on quantitative methods, the harder it's going to be to get down to brass tacks about what behaviors are good, what behaviors are bad, and what behaviors people should pay attention to. Proposals like O'Reilly's are eminently sensible - but of course I'd say this, as I've [proposed similar ideas myself](https://arxiv.org/abs/2108.12427)!
**Read more:** [You Can’t Regulate What You Don’t Understand (O'Reilly)](https://www.oreilly.com/content/you-cant-regulate-what-you-dont-understand-2/).
####################################################
**China publishes some very detailed generative AI regulations:***…Broad regulations see China try to exert control over generative ideological engines…*Chinese policymakers have published draft generative AI regulations which would target services and products offered in China. Stanford's DigiChina project has published an analysis of the regulations as well as a full translation of them. The takeaway from the recommendations is the Chinese government wants to exercise a lot more control over what AI-imbued services are allowed in its country, and it also wants to place a lot more responsibility and liability onto the providers of the underlying generative AI models.
**What the regulations mean:** It's worth reading them in full, but here are some highlights (translated via Stanford's 'DigiChina' project):
* "Content generated through the use of generative AI shall reflect the Socialist Core Values"
* "Respect intellectual property rights and commercial ethics"
* "Organizations or individuals that use generative AI to provide services such as chat, text, image, or audio generation … including providing programmable interfaces … bear responsibility as the producer of the content generated by the product."
* "Before using generative AI products to provide services to the public, a security assessment must be submitted to the state cyberspace and information department"
* "When providing generative AI services, users shall be required to provide real identity information"
* "When generated content that does not conform to the requirements of these Measures is discovered during operations or reported by users … repeat generation is to be prevented through such methods as optimization training within three months."
**AI companies are political parties**: One interpretation of this rulemaking is a recognition by the Chinese government that AI models - and therefore the companies that make them - are political forces which produce political artifacts; here, AI systems which magnify specific ideologies.
""Suddenly, instead of trying to control searches on websites and monitor forbidden terms in emails, the system will have to deal with individual users being able to ask questions to a generative AI application without any ability to monitor and block the output for sensitivity and offending word," writes Paul Triolo, Senior Associate, Trustee Chair in Chinese Business and Economics, Center for Strategic and International Studies, in DigiChina. ""Beijing and the CAC are in the initial stages of coming up with a regulatory regime that pushes companies toward political alignment as they develop their models. This is new territory for regulatory bodies like CAC, and for the entire Internet censorship apparatus that China has developed over the past three decades."
**The 'tiananmen problem' - one thought about AI safety and authoritarianism**: I think it's probably just as hard to get models to not help you make an explosive, as it is to get models to not display knowledge of Tiananmen Square in 1989. I think this illustrates how radically different ideological frames may end up having a strange area of agreement when it comes to investing in technologies relating to safety and alignment.
**Read more**: [Translation: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) – April 2023 (DigiChina)](https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/).
**Read more**: [How will China’s Generative AI Regulations Shape the Future? A DigiChina Forum (DigiChina)](https://digichina.stanford.edu/work/how-will-chinas-generative-ai-regulations-shape-the-future-a-digichina-forum/).
####################################################
**Stability tries to catch lightning in a bottle twice with release of 'StableLM' LLMs:***…Open source models++...*Stability AI, the company which released the open source 'Stable Diffusion' model into the world, has released a 3bn and 7bn parameter language model called StableLM. Stability plans to soon release 15bn and 65bn parameter models as well. "Developers can freely inspect, use, and adapt our StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license."
**What's special about StableLM?** This year, tons of open source language models have been released, ranging from Dolly-2, Cerebras-GPT, Eleuther's Pythia models, Facebook's lab leak 'LLaMa' model, and more. StableLM differs to these by virtue of being trained on a new dataset which, at 1.5 trillion tokens of content, is even larger than the 1.2trillion parameter dataset (RedPajama) written about elsewhere in this issue.
"We will release details on the dataset in due course. The richness of this dataset gives StableLM surprisingly high performance in conversational and coding tasks, despite its small size of 3 to 7 billion parameters," Stability writes.
Stability has also released some models finetuned for instruction following - "these fine-tuned models are intended for research use only and are released under a noncommercial CC BY-NC-SA 4.0 license," the company wrote.
**Why this matters:** Stability believes that open source is the safest and best way to deploy AI in a large-scale manner, while many other organizations (e.g, OpenAI) skew more towards proprietary control. Both groups hold their beliefs due to a combination of idiosyncratic philosophies around the safety impacts of different types of release, as well as by virtue of their distinct business models. In the coming years we'll get to see which approach is more correct.
**Read more:** [Stability AI Launches the FIrst of its StableLM Suite of Language Models (stability.ai blog)](https://stability.ai/blog/stability-ai-launches-the-first-of-its-stablelm-suite-of-language-models).
**Get** the [StableLM models here (Stability GitHub)](https://github.com/stability-AI/stableLM/).
**Chat** with a [7B StableLM model here (StableLM-Tuned-Alpha-7b Chat, Hugging Face)](https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat).
####################################################
**Better language models via retrieval:***….Retrieval might just be a generically good idea…*Researchers with NVIDIA, the University of Illinois Urbana-Champaign, and Arizona State University, have trained and released some language models using a technique called 'retrieval' based on DeepMind's RETRO paper. The idea of retrieval is that you train your language model to have a module that helps it retrieve over a large external dataset during training - the idea seems effective, so in this research the scientists try and answer the question "Shall we pretrain autoregressive (decode-only) LMs with retrieval by default or not?"
**What they did:** In tests, their models (called RETRO), "outperforms GPT on text generation with much less degeneration (i.e., repetition), moderately higher factual accuracy, and slightly lower toxicity with a nontoxic retrieval database," they write. "Our findings demonstrate that RETRO can leverage retrieved neighbors and significantly improves accuracy for knowledge intensive tasks in zero-shot evaluations."
They test out their approach on models which range from 148M up to 9.5B parameters in size.
**How well does it work?** "Shall we pretrain decoder-only LMs with retrieval? We observe consistent improvements in text generation quality, factual accuracy, lower toxicity, and downstream task accuracy, especially for knowledge-intensive tasks, including open-domain QA," they write. "Given the ∼ 25% percentage of additional GPU hours for pretraining, we argue pre-training generative language models with retrieval is a promising direction."
**Why this matters - retrieval might just be a robustly good idea:** Papers like this show that techniques like retrieval might be sufficiently good that it's worth just broadly integrating them into most language models.
**Read more:** [Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study (arXiv)](https://arxiv.org/abs/2304.06762).
**More about** [RETRO: Improving language models by retrieving from trillions of tokens (DeepMind blog)](https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens).
####################################################
**Together.xyz releases a vast dataset for training huge language models:***…Distributed AI research startup releases the ingredients to replicate a large LLaMa…*Together.xyz, an AI startup pushing decentralized training and an open AI ecosystem, has published RedPajama. RedPajama is "an effort to produce a reproducible, fully-open, leading language model. RedPajama is a collaboration between Together, Ontocord.ai, ETH DS3Lab, Stanford CRFM, Hazy Research, and MILA Québec AI Institute."
As a first step, Together has released a vast dataset to help people train large language models. "We aim to create a fully open-source reproduction of LLaMA, which would be available for commercial applications, and provide a more transparent pipeline for research," the company says.
**The dataset:** The full dataset, RedPajama-Data-1T, is 1.2 trillion tokens, totalling ~5TB unzipped on disk and ~3TB to download compressed. The dataset consists of seven large-scale data slices. These are:
* CommonCrawl: Five dumps of CommonCrawl, filtered for quality.
* C4: the Standard C4 dataset.
* GitHub: GitHub data, filtered by licenses and quality.
* arXiv: Scientific articles with boilerplate removed.
* Books: A corpus of open books.
* Wikipedia: Subset of Wikipedia pages with boilerplate removed.
* StackExchange: Popular websites under StackExchange, with boilerplate removed.
**Why this matters**: The biggest AI policy debate of the 2020s relates to centralization versus decentralization - will AI models be controlled by a tiny set of actors or will they be broadly developed and distributed by a collective? Companies like Stability.ai (of Stable Diffusion fame) and Together.xyz are betting on the latter.
**Read more:** [RedPajama, a project to create leading open-source models, starts by reproducing LLaMA training dataset of over 1.2 trillion tokens (Together.xyz)](https://www.together.xyz/blog/redpajama).
####################################################
**Synthetic + Real images = more performance than training on reality alone:***…Google paper shows tantalizing hints of being able to speed up another part of AI research…*
Researchers with Google have shown that they can augment a dataset (ImageNet) with AI-generated images, then get greater performance on models trained on that dataset. This means that by combining synthetic imagery with real imagery you can train models with greater performance than if they were just trained on reality. This has big implications - it suggests that synthetically generated data may not only be a substitute for real data but may (sometimes) let you get better results than with real data alone.
"Augmenting the ImageNet training set with samples from the resulting models yields significant improvements in ImageNet classification accuracy over strong ResNet and Vision Transformer baselines," they write. "We show that performance of models trained on generative data further improves by combining synthetic data with real data, with larger amounts of synthetic data, and with longer training times. These results hold across a host of convolutional and Transformer-based architectures."
**What they did**: They mix in Imagen-generated images with the larger ImageNet dataset and the result is a model with better performance and more accurate labels (e.g, some of the original ImageNet dataset is mislabeled so the generated images offset this a bit). "Our results indicate that the fine-tuned generative diffusion model outperforms the previous methods by a substantial margin," they say. "As one might expect, models trained solely on generated samples perform worse than models trained on real data. Nevertheless, augmenting real data with synthetic images from the diffusion model yields a substantial boost in performance across all classifiers tested."
**Why this matters - the 'AI production inputs' keep getting cheaper:** For a long time, people said AI had three main ingredients - compute, algorithms, and data. Well, in recent years, compute has got ever cheaper (thanks, Moore's Law), and algorithms have become somewhat cheaper (most people use transformer-architecture models for an increasingly wide range of tasks), but the costs of data have seemed quite stable - you need to create or scrape it from some part of the world.
Papers like this suggest that the cost of data as an input might fall as a consequence of being able to 'mix in' synthetic data via increasingly capable models. All of this adds up to further speedups in AI development as a consequence of the reduction of the costs of basic inputs into AI research.
**Read more:** [Synthetic Data from Diffusion Models Improves ImageNet Classification (arXiv)](https://arxiv.org/abs/2304.08466).
####################################################
**Tech tales**
**Unregistered Computer**
We had a big Unregistered Computer built out of a bunch of pre-Tracking Accords hardware. We used it to make money off of porn and illegal-ideology models and weapons systems and the other things that the ruling class sought to control or stamp out.
We had to bring data in via disc or USB and getting it out was even more complicated - we had to launder the data through a few different mediums before we let it touch the internet, so that it'd be hard for anyone to isolate the trail and find our computer.
We made a lot of jokes about getting found out by the Compute Police and going to jail. One year, we made some money by making T-Shirts that said 'Don't Tread On Me' and had a picture of a GPU on them. Then we made mugs that said 'Out of My Cold Dead Hands' with two hands clutching the circle&line cats cradle symbol of a neural net.
As the years went on, we found ourselves dealing more and more with criminals and less and less with hobbyists. Things got scarier and the software we got asked to run felt stranger to allow. We started doing lots of disinformation operations for third parties who probably represented nation states, or intelligence agency cut outs.
One time, someone asked us to run some very particular scientific questions about some very particular chemicals - we could never work out if this was for drugs or poison or explosives, and we were too scared to check.
Another time, we trained some model and whenever we ran inferences off of it to test it during training we found it did strange things to us - after looking at the outputs, people reported confusing left and right, or finding it difficult to spell words that previously had been easy to spell.
The problem was that as time went on the Unregistered Computer became so valuable that the criminals started 'protecting' it - which meant they both protected us and watched us. So here we are, working like cooks in a meth lab for some drug dealer, watching over servers and hot-swapping hard drives, maintaining a baroque machine so that it can produce things banned by polite society.
**Things that inspired this story:** Thinking about what happens if AI policy ends up leading to compute controls; the logic of the criminal underground; libertarian AI; data centers; distributed training over heterogeneous computing nodes.
Import AI is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. |
7a3e1ea0-0c20-4f69-a14c-3b4c94f15850 | trentmkelly/LessWrong-43k | LessWrong | Video: The Phenomenology of Intentions
We all know intuitively what it feels like to "intend" a certain outcome or action. However, knowing what a thing is intuitively is different from having a deep introspective take on how the thing actually plays out in your own brain. In this Mental Model Monday, I go deep into the phenomology of intentions, and how you can create intentions and intentionality, as well as decide how much intentionality to have in any given situation.
Watch The Video
Edited Transcript Below
What I Learned About Intentions from Hypnotizing My Hands Together
I first started playing with intentions when I was maybe about 13, 14.
I was playing around with self-hypnosis. I a common hypnosis exercise where I hypnotized myself into believing my hands were glued together, that the harder I tried to get them apart, the more they would become stuck together. Then, I tried as hard as I could to pull them apart. I found no matter how much I struggled, I couldn't get them apart.
What was happening was my arms were tightening and doing some of the things you might do when you're actually trying hard to pull your hands apart, but they weren't actually moving outwards. After I played with this for a while, I decided to stop trying to pull them apart. Instead I focused on the part that was blocking me from pulling them apart. For me, it was located up in the right area of my forehead. Instead of pulling harder, I had to consciously let that block go. After I let it go, my hands came apart with no struggle at all.
I did this a number of times with various hypnosis exercises like forgetting my name, not being able to say my name,and not being able to say the word 'the', until I could implant that thing in my forehead without the hypnosis, and I could get rid of it when I wanted to. That part, it turns out, was how I experienced intentions.
Intentions Vs. Other Things
Intentions are different from desires. What you want is different from what you intend to do. I think there's value in separati |
0ad40bcd-095d-484d-bdaa-4b57ab090ba6 | trentmkelly/LessWrong-43k | LessWrong | Measuring Nonlinear Feature Interactions in Sparse Crosscoders [Project Proposal]
TL;DR
* Problem: Sparse crosscoders are powerful tools for compressing neural network representations into interpretable features. However, we don’t understand how features interact.
* Perspective: We need systematic procedures to measure and rank nonlinear feature interactions. This will help us identify which interactions deserve deeper interpretation. Success can be measured by how useful these metrics are for applications like model diffing and finding adversarial examples.
* Starting contribution: We develop a procedure based on compact proofs. Working backwards from the assumption that features are linearly independent, we derive mathematical formulations for measuring feature interactions in ReLU and softmax attention.
Introduction
Training a sparse crosscoder (henceforth simply crosscoders) can be thought of as stacking the activations of the residual stream across all layers and training an SAE on the stacked vector. This gives us a mapping between model activations and compressed, interpretable features. We’re excited about this approach to automated interpretability, as it paves the way for accounting for feature interaction in addition to feature discovery.
In this draft project proposal, we describe the linear interaction assumption, and provide a compact-proofs-based explanation for why we’d like to measure feature interaction. We are sharing concrete metrics for measuring feature interaction in ReLU and softmax attention. We close with a discussion of empirical projects that we’re excited about, and some interesting takeaways. Please reach out to us if you have feedback or would like to collaborate!
Linear Interaction Assumption
Crosscoders would be all we need if we could make the following two assumptions about how neural networks process information:
1. Linear Representation Hypothesis: Features are encoded explicitly and linearly in the residual stream. For example, if a model needs to work with geometric areas, this hypothesis suggest |
378a7ccb-e48d-4ee2-8653-33671beb1c48 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | How to ‘troll for good’: Leveraging IP for AI governance
A new *Science* paper proposes using existing intellectual property law to create and enforce AI regulation, an approach much faster than traditional legislation.
“Leveraging IP for AI governance” by Cason Schmit, Meg Doerr and Jennifer K. Wagner:
>
> Our model leverages two radically different approaches to manage intellectual property (IP) rights. The first is copyleft licensing, which is traditionally used to enable widespread sharing of created content, including open-source software. The second is the “patent troll” model, which is often derided for suppressing technological development.
>
>
>
Copyleft licensing is designed to spread virally. Examples include the GNU General Public License (GPL) and the Creative Commons ShareAlike license.
This idea has been discussed in the forum before. See [A Viral License for AI Safety: The GPL as a Model for Cooperation](https://forum.effectivealtruism.org/posts/dsEMaqKNmArdCRGeH/a-viral-license-for-ai-safety) by Ivan Vendrov and Nat Kozak.
Another related project is [Responsible AI Licenses (RAIL)](https://www.licenses.ai/) and their paper "[Behavioral Use Licensing for Responsible AI](https://facctconference.org/static/pdfs_2022/facct22-63.pdf)." |
0d2fe400-1e46-4e93-966c-853e2845ea31 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Graphical Representations of Paul Christiano's Doom Model
Paul gives some numbers on AI doom (text below). Here they are in graphical forms, which I find easier to understand. Please correct me if wrong.
Michael Trazzi's Probability Flow Diagram
=========================================
I really like this one. I can really easily read how he thinks future worlds are distributed. I guess the specific flows are guesses from Paul's model so might be wrong but I think it's fine.

Link to tweet: <https://twitter.com/MichaelTrazzi/status/1651990282282631168/photo/1>
My probability model version
============================
This is messier, but interactive. You get to see what the chances Paul puts on specific breakpoints are. Do you disagree with any?
Link: <https://bit.ly/AI-model-Chrisitaino>
Paul's model in text
====================
[Link](https://ai-alignment.com/my-views-on-doom-4788b1cd0c72)
> Probability of an AI takeover: **22%**
>
> * Probability that humans build AI systems that take over: **15%**
> (Including anything that happens before human cognitive labor is basically obsolete.)
> * Probability that the AI we build doesn’t take over, but that *it* builds even smarter AI and there is a takeover some day further down the line: **7%**
>
> Probability that most humans die within 10 years of building powerful AI (powerful enough to make human labor obsolete): **20%**
>
> * Probability that most humans die because of an AI takeover: **11%**
> * Probability that most humans die for non-takeover reasons (e.g. more destructive war or terrorism) either as a direct consequence of building AI or during a period of rapid change shortly thereafter: **9%**
>
> Probability that humanity has somehow irreversibly messed up our future within 10 years of building powerful AI: **46%**
>
> * Probability of AI takeover: **22%** (see above)
> * Additional extinction probability: **9%** (see above)
> * Probability of messing it up in some other way during a period of accelerated technological change (e.g. driving ourselves crazy, creating a permanent dystopia, making unwise commitments…): **15%**
> |
b440c3bc-1e85-45a0-8e16-1a91491df921 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The two-layer model of human values, and problems with synthesizing preferences
I have been thinking about Stuart Armstrong's [preference synthesis research agenda](https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into), and have long had the feeling that there's something off about the way that it is currently framed. In the post I try to describe why. I start by describing my current model of human values, how I interpret Stuart's implicit assumptions to conflict with it, and then talk about my confusion with regard to reconciling the two views.
The two-layer/ULM model of human values
=======================================
[In Player vs. Character: A Two-Level Model of Ethics](https://www.lesswrong.com/posts/fyGEP4mrpyWEAfyqj/player-vs-character-a-two-level-model-of-ethics), Sarah Constantin describes a model where the mind is divided, in game terms, into a "player" and a "character". The character is everything that we consciously experience, but our conscious experiences are not our true reasons for acting. As Sarah puts it:
> In many games, such as Magic: The Gathering, Hearthstone, or Dungeons and Dragons, there’s a two-phase process. First, the player constructs a *deck* or *character* from a very large sample space of possibilities. This is a particular combination of strengths and weaknesses and capabilities for action, which the player thinks can be successful against other decks/characters or at winning in the game universe. The choice of deck or character often determines the strategies that deck or character can use in the second phase, which is actual gameplay. In gameplay, the character (or deck) can only use the affordances that it’s been previously set up with. This means that there are two separate places where a player needs to get things right: first, in designing a strong character/deck, and second, in executing the optimal strategies for that character/deck during gameplay. [...]
> The idea is that human behavior works very much like a two-level game. [...] The player determines what we find rewarding or unrewarding. The player determines what we notice and what we overlook; things come to our attention if it suits the player’s strategy, and not otherwise. The player gives us emotions when it’s strategic to do so. The player sets up our subconscious evaluations of what is good for us and bad for us, which we experience as “liking” or “disliking.”
> The character is what *executing the player’s strategies feels like from the inside*. If the player has decided that a task is unimportant, the character will experience “forgetting” to do it. If the player has decided that alliance with someone will be in our interests, the character will experience “liking” that person. Sometimes the player will notice and seize opportunities in a very strategic way that feels to the character like “being lucky” or “being in the right place at the right time.”
> This is where confusion often sets in. People will often protest “but I *did*care about that thing, I just forgot” or “but I’m *not*that Machiavellian, I’m just doing what comes naturally.” This is true, because when we talk about ourselves and our experiences, we’re speaking “in character”, as our character. The strategy is not going on at a conscious level. In fact, I don’t believe we (characters) have direct access to the player; we can only *infer*what it’s doing, based on what patterns of behavior (or thought or emotion or perception) we observe in ourselves and others.
I think that this model is basically correct, and that our emotional responses, preferences, etc. are all the result of a deeper-level optimization process. This optimization process, then, is something like that described in [The Brain as a Universal Learning Machine](https://www.lesswrong.com/posts/9Yc7Pp7szcjPgPsjf/the-brain-as-a-universal-learning-machine):
> The universal learning hypothesis proposes that *all* significant mental algorithms are learned; nothing is innate except for the learning and reward machinery itself (which is somewhat complicated, involving a number of systems and mechanisms), the initial rough architecture (equivalent to a prior over mindspace), and a small library of simple innate circuits (analogous to the operating system layer in a computer). In this view the mind (software) is distinct from the brain (hardware). The mind is a complex software system built out of a general learning mechanism. [...]
> An initial untrained seed ULM can be defined by 1.) a prior over the space of models (or equivalently, programs), 2.) an initial utility function, and 3.) the universal learning machinery/algorithm. The machine is a real-time system that processes an input sensory/observation stream and produces an output motor/action stream to control the external world using a learned internal program that is the result of continuous self-optimization. [...]
> The key defining characteristic of a ULM is that it uses its universal learning algorithm for continuous recursive self-improvement with regards to the utility function (reward system). We can view this as second (and higher) order optimization: the ULM optimizes the external world (first order), and also optimizes its own internal optimization process (second order), and so on. Without loss of generality, any system capable of computing a large number of decision variables can also compute internal self-modification decisions.
> Conceptually the learning machinery computes a probability distribution over program-space that is proportional to the expected utility distribution. At each timestep it receives a new sensory observation and expends some amount of computational energy to infer an updated (approximate) posterior distribution over its internal program-space: an approximate 'Bayesian' self-improvement.
Rephrasing these posts in terms of each other, in a person's brain "the player" is the underlying learning machinery, which is searching the space of programs (brains) in order to find a suitable configuration; the "character" is whatever set of emotional responses, aesthetics, identities, and so forth the learning program has currently hit upon.
Many of the things about the character that seem fixed, can in fact be modified by the learning machinery. One's sense of aesthetics can be [updated by propagating new facts into it](https://www.lesswrong.com/posts/YN6daWakNnkXEeznB/propagating-facts-into-aesthetics), and strongly-held identities (such as "I am a technical person") [can change](https://www.lesswrong.com/posts/JTzxg7y5HFYBBWfBj/identities-are-subconscious-strategies) in response to new kinds of strategies becoming viable. [Unlocking the Emotional Brain describes](https://www.lesswrong.com/posts/i9xyZBS3qzA8nFXNQ/book-summary-unlocking-the-emotional-brain) a number of such updates, such as - in these terms - the ULM eliminating subprograms blocking confidence after receiving an update saying that the consequences of expressing confidence will not be as bad as previously predicted.
Another example of this kind of a thing was the framework that I sketched in [Building up to an Internal Family Systems model](https://www.lesswrong.com/posts/5gfqG3Xcopscta3st/building-up-to-an-internal-family-systems-model): if a system has certain kinds of bad experiences, it makes sense for it to spawn subsystems dedicated to ensuring that those experiences do not repeat. Moral psychology's [social intuitionist model](https://en.wikipedia.org/wiki/Social_intuitionism) claims that people often have an existing conviction that certain actions or outcomes are bad, and that they then level seemingly rational arguments for the sake of preventing those outcomes. Even if you rebut the arguments, the conviction remains. This kind of a model is compatible with an IFS/ULM style model, where the learning machinery sets the goal of preventing particular outcomes, and then applies the "reasoning module" for that purpose.
[Qiaochu Yuan notes](https://twitter.com/QiaochuYuan/status/1219281477898244096) that once you see people being upset at their coworker for criticizing them and you do therapy approaches with them, and this gets to the point where they are crying about how their father never told them that they were proud of them... then it gets really hard to take people's reactions to things at face value. Many of our consciously experienced motivations, actually have nothing to do with our real motivations. (See also: [Nobody does the thing that they are supposedly doing](https://www.lesswrong.com/posts/8iAJ9QsST9X9nzfFy/nobody-does-the-thing-that-they-are-supposedly-doing), [The Elephant in the Brain](https://www.lesswrong.com/posts/BgBrXpByCSmCLjpwr/book-review-the-elephant-in-the-brain), [The Intelligent Social Web](https://www.lesswrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-social-web).)
Preference synthesis as a character-level model
===============================================
While I like a lot of the work that Stuart Armstrong has done on [synthesizing human preferences](https://www.lesswrong.com/posts/m2bwD87ctjJDXC3SZ/ultra-simplified-research-agenda), I have a serious concern about it which is best described as: everything in it is based on the character level, rather than the player/ULM level.
For example, in "[Our values are underdefined, changeable, and manipulable](https://www.lesswrong.com/posts/KCg7NeKQ7MycXWpYd/our-values-are-underdefined-changeable-and-manipulable)", Stuart - in my view, correctly - argues for the claim stated in the title... except that, it is not clear to me to what extent the things we intuitively consider our "values", are actually our values. Stuart opens with this example:
> When asked whether "communist" journalists could report freely from the USA, only 36% of 1950 Americans agreed. A follow up question about Amerian journalists reporting freely from the USSR got 66% agreement. When the order of the questions was reversed, 90% were in favour of American journalists - and an astounding 73% in favour of the communist ones.
From this, Stuart suggests that people's values on these questions should be thought of as *underdetermined*. I think that this has a grain of truth to it, but that calling these opinions "values" in the first place is misleading.
My preferred framing would rather be that people's *values* - in the sense of some deeper set of rewards which the underlying machinery is optimizing for - are in fact underdetermined, *but* that is not what's going on in this particular example. The order of the questions does not change those values, which remain stable under this kind of a consideration. Rather, consciously-held political opinions are *strategies* for carrying out the underlying values. Receiving the questions in a different order caused the system to consider different kinds of information when it was choosing its initial strategy, causing different strategic choices.
Stuart's research agenda does talk about [incorporating meta-preferences](https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into#2_6_Synthesising_the_preference_function__meta_preferences), but as far as I can tell, all the meta-preferences are about the character level too. Stuart mentions "I want to be more generous" and "I want to have consistent preferences" as examples of meta-preferences; in actuality, these meta-preferences might exist because of something like "the learning system has identified generosity as a socially admirable strategy and predicts that to lead to better social outcomes" and "the learning system has formulated consistency as a generally valuable heuristic and one which affirms the 'logical thinker' identity, which in turn is being optimized because of its predicted social outcomes".
My confusion about a better theory of values
============================================
If a "purely character-level" model of human values is wrong, how do we incorporate the player level?
I'm not sure and am mostly confused about it, so I will just [babble](https://www.lesswrong.com/s/pC6DYFLPMTCbEwH8W/p/i42Dfoh4HtsCAfXxL) & [boggle](https://rationality.org/files/cfar-handbook.pdf#page=16) at my confusion for a while, in the hopes that it would help.
The optimistic take would be that there exists some set of universal human values which the learning machinery is optimizing for. There exist various therapy frameworks which claim to have found something like this.
For example, the [NEDERA model](https://bioemotiveframework.com/wp-content/uploads/woocommerce_uploads/2017/07/Nedera-Guidebook.pdf) claims that there exist nine negative core feelings whose avoidance humans are optimizing for: people may feel Alone, Bad, Helpless, Hopeless, Inadequate, Insignificant, Lost/Disoriented, Lost/Empty, and Worthless. And [pjeby mentions](https://www.lesswrong.com/posts/i9xyZBS3qzA8nFXNQ/book-summary-unlocking-the-emotional-brain?commentId=JKrsoMcRkvyJd4KZZ) that in his empirical work, he has found three clusters of underlying fears which seem similar to these nine:
> For example, working with people on self-image problems, I've found that there appear to be only three critical "flavors" of self-judgment that create life-long low self-esteem in some area, and associated compulsive or avoidant behaviors:
> Belief that one is bad, defective, or malicious (i.e. lacking in care/altruism for friends or family)
> Belief that one is foolish, incapable, incompetent, unworthy, etc. (i.e. lacking in ability to learn/improve/perform)
> Belief that one is selfish, irresponsible, careless, etc. (i.e. not respecting what the family or community values or believes important)
> (Notice that these are things that, if you were bad enough at them in the ancestral environment, or if people only **thought** you were, you would lose reproductive opportunities and/or your life due to ostracism. So it's reasonable to assume that we have wiring biased to treat these as high-priority *long-term* drivers of compensatory signaling behavior.)
> Anyway, when somebody gets taught that some behavior (e.g. showing off, not working hard, forgetting things) *equates* to one of these morality-like judgments as a *persistent quality* of themselves, they often develop a compulsive need to prove otherwise, which makes them choose their goals, not based on the goal's actual utility to themself or others, but rather based on the goal's perceived value as a means of virtue-signalling. (Which then leads to a pattern of continually trying to achieve similar goals and either failing, or feeling as though the goal was unsatisfactory despite succeeding at it.)
So - assuming for the sake of argument that these findings are correct - one might think something like "okay, here are the things the brain is trying to avoid, we can take those as the basic human values".
But not so fast. After all, emotions are all computed in the brain, so "avoidance of these emotions" can't be the only goal any more than "optimizing happiness" can. It would only lead to wireheading.
Furthermore, it seems like one of the things that the underlying machinery *also* learns, is *situations in which it should trigger these feelings*. E.g. feelings of irresponsibility can be used as an internal carrot and stick scheme, in which the system comes to predict that if it will feel persistently bad, this will cause parts of it to pursue specific goals in an attempt to make those negative feelings go away.
Also, we are not *only* trying to avoid negative feelings. Empirically, it doesn't look like happy people end up doing *less* than unhappy people, and [guilt-free people may in fact do more](http://mindingourway.com/guilt-conclusion/) than guilt-driven people. The relationship is nowhere linear, but it seems like there are plenty of happy, energetic people who are happy *in part because* they are doing all kinds of fulfilling things.
So maybe we could look at the inverse of negative feelings: positive feelings. The current mainstream model of human motivation and basic needs is [self-determination theory](https://positivepsychology.com/self-determination-theory/), which explicitly holds that there exist three separate basic needs:
> **Autonomy:** people have a need to feel that they are the masters of their own destiny and that they have at least some control over their lives; most importantly, people have a need to feel that they are in control of their own behavior.
> **Competence:** another need concerns our achievements, knowledge, and skills; people have a need to build their competence and develop mastery over tasks that are important to them.
> **Relatedness (also called Connection):** people need to have a sense of belonging and connectedness with others; each of us needs other people to some degree
So one model could be that the basic learning machinery is, first, optimizing for avoiding bad feelings; and then, optimizing for things that have been associated with good feelings (even when doing those things is locally unrewarding, e.g. taking care of your children even when it's unpleasant). But this too risks running into the wireheading issue.
A problem here is that while it might make intuitive sense to say "okay, if the character's values aren't the real values, let's use the player's values instead", the split isn't actually anywhere that clean. In a sense the player's values are the real ones - but there's also a sense in which the player doesn't *have* anything that we could call values. It's just a learning system which observes a stream of rewards and optimizes it according to some set of mechanisms, and even the reward and optimization mechanisms themselves may end up getting at least partially rewritten. The underlying machinery has no idea about things like "existential risk" or "avoiding wireheading" or necessarily even "personal survival" - thinking about those is a character-level strategy, even if it is chosen by the player using criteria that it does not actually understand.
For a moment it felt like looking at the player level would help with the underdefinability and mutability of values, but the player's values seem like they could be *even less* defined and *even more* mutable. It's not clear to me that we can call *them* values in the first place, either - any more than it makes meaningful sense to say that a neuron in the brain "values" firing and releasing neurotransmitters. The player is just a set of code, or going one abstraction level down, just a bunch of cells.
To the extent that there exists something that intuitively resembles what we call "human values", it feels like it exists in some hybrid level which incorporates parts of the player *and* parts of the character. That is, assuming that the two can even be very clearly distinguished from each other in the first place.
Or something. I'm confused. |
7a04d5a6-c254-41be-b810-a77fc5392dd1 | trentmkelly/LessWrong-43k | LessWrong | Finding Neurons in a Haystack: Case Studies with Sparse Probing
Abstract
> Despite rapid adoption and deployment of large language models (LLMs), the internal computations of these models remain opaque and poorly understood. In this work, we seek to understand how high-level human-interpretable features are represented within the internal neuron activations of LLMs. We train $k$-sparse linear classifiers (probes) on these internal activations to predict the presence of features in the input; by varying the value of $k$ we study the sparsity of learned representations and how this varies with model scale. With $k=1$, we localize individual neurons which are highly relevant for a particular feature, and perform a number of case studies to illustrate general properties of LLMs. In particular, we show that early layers make use of sparse combinations of neurons to represent many features in superposition, that middle layers have seemingly dedicated neurons to represent higher-level contextual features, and that increasing scale causes representational sparsity to increase on average, but there are multiple types of scaling dynamics. In all, we probe for over 100 unique features comprising 10 different categories in 7 different models spanning 70 million to 6.9 billion parameters.
See twitter summary here.
Contributions
> In the first part of the paper, we outline several variants of sparse probing, discuss the various subtleties of applying sparse probing, and run a large number of probing experiments. In particular, we probe for over 100 unique features comprising 10 different categories in 7 different models spanning 2 orders of magnitude in parameter count (up to 6.9 billion). The majority of the paper then focuses on zooming-in on specific examples of general phenomena in a series of more detailed case studies to demonstrate:
>
> * There is a tremendous amount of interpretable structure within the neurons of LLMs, and sparse probing is an effective methodology to locate such neurons (even in superposition), but requires |
6fb1ebaa-5593-4f09-b960-7db4f1fc0913 | trentmkelly/LessWrong-43k | LessWrong | Activation Pattern SVD: A proposal for SAE Interpretability
Epistemic status: This is a rough-draft write-up about a thought experiment I did. Reasonably confident about the broad arguments being made here. That said, I haven't spent a lot of time rigorously polishing or reviewing my writing, so minor inaccuracies may be present.
Interpretability Illusions from Max-Activating Examples
When interpreting an SAE feature, one common technique is to look at the max-activating examples, and try to extrapolate a pattern from there. However, this approach has two flaws, namely:
Premise 1: It can be hard to extrapolate the correct pattern. Correctly identifying a pattern relies on accurate understanding of the semantics present in text. Many hypotheses may be likely, given the data, and the actual truth could be non-obvious. It seems easy to make a mistake when doing this.
Premise 2: Max activating examples may be anomalous. The most likely element of a distribution can look very different from the typical set. Conclusions drawn based on one (or a few) highly activating examples may turn out to be incorrect when evaluated against the majority of examples.
In the following discussion, I outline a proposal to interpret SAE features using the singular value decomposition of SAE activation patterns, which I think neatly addresses both of these issues.
Activation Pattern SVD
Suppose we have a set of M SAE features, which we would like to interpret using a dataset of N unique context windows. To do this, we compute the activation A∈RN×M, where Aij describes the activation of feature j on (the last token of) context window i. We then compute the singular value decomposition A=UΣV and take the top k elements. (Assume that we have a good way of choosing k, e.g. by looking for an "elbow point" in a reconstruction loss curve)
Remark 3: The SVD defines activation prototypes. note that U∈RN×k; each column (in RN) describes a "prototypical" activation pattern of a general SAE feature over all context windows in the dataset.
Remar |
bc7c5d09-d523-4d00-b0d7-fc8d43115e08 | trentmkelly/LessWrong-43k | LessWrong | Timeline of book-length works in machine ethics
Timeline:
* Danielson, Artificial Morality: Virtuous Robots for Virtual Games (1992)
* Marsh, Formalizing Trust as a Computational Concept (1994)
* Yudkowsky, Creating Friendly AI (2001)
* Hall, Beyond AI: Creating the Conscience of a Machine (2007)
* Wallach & Allen, Moral Machines: Teaching Robots Right from Wrong (2009)
* Anderson & Anderson, eds., Machine Ethics (2011)
* Lin, Abney, & Bekey, eds., Robot Ethics: The Ethical and Social Implications of Robotics (2011)
* Bostrom, Intelligence Explosion: Groundwork for a Strategic Analysis (forthcoming) [mentioned here]
|
edd5d4fc-97a8-43aa-89f1-6df6708c4bc2 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post338
Machine learning systems are typically trained to maximize average-case performance. However, this method of training can fail to meaningfully control the probability of tail events that might cause significant harm. For instance, while an artificial intelligence (AI) assistant may be generally safe, it would be catastrophic if it ever suggested an action that resulted in unnecessary large-scale harm. Current techniques for estimating the probability of tail events are based on finding inputs on which an AI behaves catastrophically. Since the input space is so large, it might be prohibitive to search through it thoroughly enough to detect all potential catastrophic behavior. As a result, these techniques cannot be used to produce AI systems that we are confident will never behave catastrophically. We are excited about techniques to estimate the probability of tail events that do not rely on finding inputs on which an AI behaves badly, and can thus detect a broader range of catastrophic behavior. We think developing such techniques is an exciting problem to work on to reduce the risk posed by advanced AI systems: Estimating tail risk is a conceptually straightforward problem with relatively objective success criteria; we are predicting something mathematically well-defined, unlike instances of eliciting latent knowledge (ELK) where we are predicting an informal concept like "diamond". Improved methods for estimating tail risk could reduce risk from a variety of sources, including central misalignment risks like deceptive alignment. Improvements to current methods can be found both by doing empirical research, or by thinking about the problem from a theoretical angle. This document will discuss the problem of estimating the probability of tail events and explore estimation strategies that do not rely on finding inputs on which an AI behaves badly. In particular, we will: Introduce a toy scenario about an AI engineering assistant for which we want to estimate the probability of a catastrophic tail event. Explain some deficiencies of adversarial training, the most common method for reducing risk in contemporary AI systems. Discuss deceptive alignment as a particularly dangerous case in which adversarial training might fail. Present methods for estimating the probability of tail events in neural network behavior that do not rely on evaluating behavior on concrete inputs. Conclude with a discussion of why we are excited about work aimed at improving estimates of the probability of tail events. This document describes joint research done with Jacob Hilton, Victor Lecomte, David Matolcsi, Eric Neyman, Thomas Read, George Robinson, and Gabe Wu. Thanks additionally to Ajeya Cotra, Lukas Finnveden, and Erik Jenner for helpful comments and suggestions. A Toy Scenario Consider a powerful AI engineering assistant. Write M for this AI system, and M ( x ) for the action it suggests given some project description x . We want to use this system to help with various engineering projects, but would like it to never suggest an action that results in large-scale harm, e.g. creating a doomsday device . In general, we define a behavior as catastrophic if it must never occur in the real world. [1] An input is catastrophic if it would lead to catastrophic behavior. Assume we can construct a catastrophe detector C that tells us if an action M ( x ) will result in large-scale harm. For the purposes of this example, we will assume both that C has a reasonable chance of catching all catastrophes and that it is feasible to find a useful engineering assistant M that never triggers C (see Catastrophe Detectors for further discussion). We will also assume we can use C to train M , but that it is prohibitively expensive to use C to filter all of M 's outputs after M is trained. [2] We are interested in estimating the probability that our model M behaves catastrophically on a particular distribution of inputs [3] : P x ∼ D ( C ( M ( x ) ) ) Deficiencies of Adversarial Training To reduce the risk of our AI system behaving catastrophically, we could use adversarial training : having a red team attempt to find catastrophic inputs and training M until those inputs are no longer catastrophic. However, such systems can still behave catastrophically in a range of possible scenarios: The users of an AI system might explore inputs not considered during the adversarial training process and uncover novel catastrophic inputs. Users of a system might specifically search for inputs designed to elicit catastrophic behavior, using more resources than the adversarial training process. [4] An intelligent AI system may attempt to purposefully avoid acting catastrophically when it suspects it is being trained. As catastrophic behavior becomes extremely rare, it can be computationally prohibitive for the red team to explore the input space thoroughly enough to find even a single catastrophic input. To determine which AI systems robustly avoid catastrophic behavior, we must be able to estimate the frequency of catastrophic inputs even when they are so rare that we cannot find any such inputs. Rather than viewing a red team as searching for particular inputs, we can view the red team as defining a process D for producing inputs. We can then ask the question: how likely is the red team to find an input on which the system behaves catastrophically? Equivalently, what is P D ( C ( M ( x ) ) ) ? Even if a red team is usually unable to find concrete inputs on which a system behaves catastrophically, the process they use could still have a small probability of success. The possibility of distributional shift complicates the way in which estimates of the rate of catastrophic behavior can be used to mitigate such behavior, which we discuss in Distributional Shift . Deliberate Subversion of Adversarial Training Malign intelligent AI systems might have incentives to pretend to be benign during training so they can cause harm later. Call a system "deceptively aligned" if it is a malign system pretending to be benign. [5] For example, a deceptively aligned engineering assistant might intentionally avoid suggesting an action that creates a doomsday device during training, but seek such opportunities when they would cause real-world harm. A sufficiently advanced system might be able to maintain this deception even through extensive adversarial training. [6] Even if a deceptively aligned model has extremely high accuracy at determining whether it is being adversarially trained, it will likely have some probability of making a mistake. If a deceptive system makes mistakes very rarely, then adversarial training would likely fail to find an input that distinguishes it from a benign model. However, the possibility of these mistakes would result in the deceptive model having a higher probability of behaving catastrophically than its benign counterpart, even if the probability is small in absolute terms (we discuss ways in which this might not be true in Catastrophe Detectors . By estimating the probability of catastrophic behavior, we would be able to distinguish between benign and deceptive systems. Importantly, our estimates do not need to be perfectly accurate to detect deceptive systems. Catastrophic actions are highly complex, so AI systems cannot randomly stumble into catastrophic actions. Deceptive systems behave catastrophically at much higher rates than their benign counterparts because they sometimes deliberately decide to take catastrophic action. To distinguish between benign and deceptive systems, we would not need to characterize the chance a benign model accidentally acts catastrophically; we would only need to notice that there are some conditions under which a model might deliberately act catastrophically. A Possible Approach for Estimating Tail Risk This section an approach for estimating the probability of catastrophic behavior of a neural network. We present several simple methods that represent our overall approach. We are optimistic that this kind of approach can ultimately deliver better risk estimates than can be obtained from conventional red teaming, but in experiments so far we have not surpassed baselines based on importance sampling. [7] Consider a neural network M : X → Y , a distribution D over X , and a binary event C : Y → { 0 , 1 } . Our goal is to estimate P D ( C ( M ( x ) ) ) . [8] For our engineering research assistant example above, X is a project description, Y is a suggested action, and C is our catastrophe detector. If P D ( C ( M ( x ) ) ) is very low, then even billions of samples might not contain an x such that C ( M ( x ) ) = 1 . However, it might still be possible to estimate P D ( C ( M ( x ) ) ) by identifying structure that suggests M may behave catastrophically on some inputs. For example, suppose C embeds an action M ( x ) into some latent space and flags it as catastrophic if it is "too large" in 20 specific directions simultaneously. For each of these directions, we could attempt to identify features in M that would result in the embedding of M ( x ) being large in that direction. If each such feature were active with probability 1 100 , then we could estimate the chance that M ( x ) is too large in all 20 directions simultaneously as ( 1 100 ) 20 = 10 − 40 . Our goal is to develop methods that can reliably detect significant risks of catastrophic behavior by identifying such structure. Layer-by-layer Activation Modeling We will present one possible approach to producing estimates of tail events in neural networks based on modeling the distribution of each layer of activations in the network. This approach illustrates one way in which a mechanistic analysis of a model could improve estimates of tail events. However, this particular framework also has a few fundamental flaws, which we discuss in Issues with Layer-by-layer Activation Modeling . We will assume that C is also a neural network and express the composition of C and M as a single function C ∘ M : X → { 0 , 1 } . This composition is a single function which is 1 if and only if x is a catastrophic input for M . Since C ∘ M is itself just a larger neural network, we can express it as the composition of n functions f 0 , f 1 , . . . , f n − 1 . Each f i represents a transition between layers in our model, such as a linear transformation followed by a ReLU activation. We will write X i for the domain of f i , which is typically equal to R k for some k . More specifically, for input x define: x 0 : = x ∈ X 0 x i + 1 : = f i ( x i ) ∈ X i + 1 C ( M ( x ) ) : = x n ∈ X n = { 0 , 1 } Our input distribution D is a distribution over X 0 . Through the composition of the transition functions f i , D also induces a distribution over X 1 , X 2 , . . . , X n . Our general method aims to estimate P D ( C ( M ( x ) ) ) by approximation these induced distributions over X i as they flow through the network, from input to output. Each implementation of this method will have two key components: For each layer i , we will have a class of distributions P i over X i that we will use to model the activations at each layer. Formally, P i ⊆ Δ X i . A method to update our model as we progress through layers: given a model P i ∈ P i for layer i , we need to determine P i + 1 ∈ P i + 1 for layer i + 1 . Formally, this method is a collection of n − 1 functions from P i → P i + 1 , one for each i = 0 , . . . , n − 1 . With these components in place, we can estimate P D ( C ( M ( x ) ) ) for any D ∈ P 0 as follows: We begin with D = P 0 ∈ P 0 , representing the input distribution. We apply our update method repeatedly, generating P 1 ∈ P 1 , P 2 ∈ P 2 , up to P n ∈ P n . Our estimate of P D ( C ( M ( x ) ) ) is the probability that P n assigns to the outcome 1. Toy Example: Finitely Supported Distributions If D was finitely supported, then it would be trivial to estimate the probability of catastrophe on D , but we can use this example to illustrate some general principles. Let all P i be the class of finitely supported distributions over the associated spaces X i . Given a finitely supported distribution D = P 0 , we can apply f 1 to each datapoint to generate the empirical distribution P 1 , which will be the exact distribution of x 1 . By repeating this process for all layers, we eventually obtain P n . The probability P n assigns to 1 will be the exact frequency of catastrophe on D . This calculation is not helpful for adversarial training; if we cannot find any inputs where a catastrophe occurs, then we also cannot find any finitely supported distribution D with non-zero probability of catastrophe. Instead, we would like to allow a red team to define a broader distribution that puts positive (although potentially very small) probability on catastrophic inputs. Method 1: Gaussian Distribution To move beyond empirical evaluations, we can approximate the distributions over activations by multivariate Gaussians. Let P i be the class of all multivariate Gaussians over the activations X i . Write a normal distribution with mean vector μ and covariance matrix Σ as N ( μ , Σ ) . To specify an algorithm, we need to a method for choosing P i + 1 given P i . In this case, we want to choose the multivariate Gaussian N ( μ i + 1 , Σ i + 1 ) that best approximates f i ( x i ) where x i ∼ N ( μ i , Σ i ) . A non-linear function distribution is typically no longer Gaussian, so perfect modeling is impossible. Instead, we can use various methods to select N ( μ i + 1 , Σ i + 1 ) based on different notions of approximation quality. [9] A standard notion of approximation quality is the Kullback-Leibler (KL) divergence between f i ( N ( μ i , Σ i ) ) and N ( μ i + 1 , Σ i + 1 ) . By a well-known "moment matching" theorem of Gaussian distributions, we can minimize KL ( f i ( N ( μ i , Σ i ) ) | | N ( μ i + 1 , Σ i + 1 ) ) by setting μ i + 1 and Σ i + 1 to the mean vector and covariance matrix of f i ( N ( μ i , Σ i ) ) . [10] This Gaussian approximation allows us to move beyond adversarial training on concrete catastrophic inputs. Instead, we can pick μ 0 and Σ 0 to maximize P N ( μ 0 , Σ 0 ) ( C ( M ( x ) ) ) , then train M to minimize P N ( μ 0 , Σ 0 ) ( C ( M ( x ) ) ) , hopefully capturing a broader range of catastrophic scenarios. Method 2: Independent Linear Features Some authors have attempted to extract meaningful features of a model's computation using sparse auto-encoders (SAEs), which find an overcomplete basis in which the activations are typically sparse. SAEs are typically learned by minimizing an empirical estimate of a reconstruction loss. However, if catastrophic inputs are rare, the SAE's training data is unlikely to contain any catastrophic inputs. As such, we might not learn features that are informative for estimating the probability of catastrophic inputs. We will describe a modified SAE for which we can analytically calculate the loss over a distribution that includes catastrophic inputs, instead of empirically estimating it with training data. The details of this algorithm can be found here , which we summarize below. Define a given distribution P i ∈ P i by specifying: m feature vectors v 1 , . . . , v m ∈ X i . For each feature v j , a distribution Q j over R representing how strongly that feature activates. To generate a single sample from this distribution, we: Sample feature weights a j ∼ Q j independently. Compute the weighted sum of these features Σ j a j v j SAEs are typically learned independently for different layers of a neural network. By contrast, our algorithm will use the features from a layer X i to learn the features for the next layer X i + 1 . More precisely, we will assume that the activations at layer i are distributed according to P i , and then find the distribution P i + 1 that minimizes KL ( f i ( P i ) | | P i + 1 ) ) . We show that we can analytically compute an bound for this KL divergence by using variational inference. [11] We can then train to P i + 1 to minimize this lower bound; the result is a variational autoencoder trained with an objective very similar to the conventional objective for a sparse autoencoder. More details of this bound and an algorithm for calculating it efficiently can be found here . Because we compute bounds analytically, in principle they can be very sensitive to rare events that would never be detected by sampling. However, because KL divergence is defined as an expectation, this loss function may still incentivize the learning of features that are relevant to the typical behavior of the model and not informative about the tails. For example, a "lethality" feature might be informative about the probability an AI system behaves catastrophically, but almost useless for modeling behavior on typical inputs. To overcome this problem, we would like to replace the KL divergence with a loss function that is more sensitive to distributional differences that affect catastrophe probability. Precisely defining this loss function is one of the major open questions for this approach. Relation to Other Work Formalizing the Presumption of Independence Formalizing the Presumption of Independence studies the problem of estimating the expectation of a circuit through the lens of heuristic estimation. The general framework described above is a particular family of heuristic estimation methods based on modeling the activations of successive layers of a neural network. Many of the methods we have described are inspired by algorithms for heuristic estimation. Most directly, Method 1 Gaussian Distribution is exactly the covariance propagation algorithm described in appendix D.3. Additionally, Method 2 Independent Linear Features can be thought of as finding a basis for which the presumption of independence approximately applies. For more examples of methods for heuristic estimation that can potentially be translated into techniques for estimating tail risk in neural networks, see Appendix A of Formalizing the Presumption of Independence . Mechanistic Interpretability Mechanistic interpretability is a field of research that aims to understand the inner workings of neural networks. We think such research represents a plausible path towards high-quality estimates of tail risk in neural networks, and many of our estimation methods are inspired by work in mechanistic interpretability. For example, a mechanistic understanding of a neural network might allow us to identify a set of patterns whose simultaneous activation implies catastrophic behavior. We can then attempt to estimate the probability that all features are simultaneously active by using experiments to collect local data and generalizing it with the presumption of independence. We also hope that estimating tail risk will require the development of methods for identifying interesting structure in neural networks. If so, then directly estimating tail risk in neural networks might lead to greater mechanistic understanding of how those neural networks behave and potentially automate portions of mechanistic interpretability research. Relaxed Adversarial Training Relaxed Adversarial Training (RAT) is a high-level proposal to overcome deficiencies in adversarial training by "relaxing" the problem of finding a catastrophic input. We expect our methods for estimating P D ( C ( M ( x ) ) ) to be instances of the relaxations required for RAT. Eliciting Latent Knowledge In Eliciting Latent Knowledge (ELK), we describe a SmartVault AI trained to take actions so a diamond appears to remain in the room. We are concerned that if the sensors are tampered with, the diamond can be stolen while still appearing safe. A key difficulty in ELK is the lack of sophisticated tampering attempts on which we can train our model to protect the sensors. In the ELK document, we describe some ways of training models that we hoped would generalize in desirable ways during tampering attempts, but ultimately concluded these methods would not always result in the desired generalization behavior. Instead of trying to indirectly control generalization, we can attempt to directly measure the quality of generalization by estimating the probability of tampering. We will not have a perfect tampering detector, but even if a robber (or our AI system itself) was skilled at tampering, they might get caught one-in-a-trillion times. Thus, by estimating and minimizing the probability of detectable tampering, we might be able to produce a SmartVault that defends the sensors even with no examples of sophisticated sensor tampering. More generally, we believe there are deeper connections between methods for estimating tail risk and ELK, which we might explore in a later post. Conclusion Contemporary methods for reducing risk from AI systems rely on finding concrete catastrophic inputs. As AI systems become more capable, such methods might not be able to reduce risk down to acceptable levels. In this document, we have argued that it is both useful and possible to develop approaches for estimating tail risk in neural networks that does not rely on finding concrete catastrophic inputs. We are currently pursuing this problem from both theoretical and empirical angles, and are excited to see others do the same. We have given a high-level overview of our motivation and approach. In the appendices, we explore various details in more depth: In Catastrophe Detectors , we provide a default picture of how to construct a catastrophe detector, and discuss its limitations. We also briefly describe our hope for finding a robust catastrophe detector. In Issues with Layer-by-layer Activation Modeling , we overview two major problems with the general estimation framework we discuss in the body, and potential routes for overcoming them. In Distributional Shift , we discuss how the possibility of distributional shift complicates the way in which estimates of the rate of catastrophic behavior can be used to mitigate such behavior. In Complexity Theoretic Barriers to Accurate Estimates , we discuss why computational barriers to accurate estimates might not prevent us from obtaining estimates "good enough" to mitigate catastrophic behavior. Appendices Catastrophe Detectors In the body, we assume that we can construct a catastrophe detector C that has the following properties: C has a reasonable chance of catching everything output by M we would consider a catastrophe. It is possible to find a useful M that almost never triggers C . However, we do not currently know how to construct a catastrophe detector with those properties. In this appendix, we will describe the a default picture for how to attempt such a construction, discuss its potential flaws, and briefly overview our hope for why these flaws are surmountable. Our default picture for how we would attempt to construct such a catastrophe detector is to use a weaker, trusted AI system M w e a k to examine the suggested action of a stronger system M . We could give our weaker system multiple advantages to give it the best chance of catching catastrophic actions: We could give M w e a k more time to think than M , so it could examine each portion of the action in detail. We could allow M w e a k to interrogate M about the function of various parts of its action, and do detailed checks for inconsistencies in M 's answers. We could allow M w e a k to perform separate experiments to verify M 's claims about its actions, and to evaluate whether or not the actions have any side effects. M w e a k could be conservative when evaluating M 's action, flagging part of the action it did not understand as potentially catastrophic. However, even given these advantages, our catastrophe detector C might not have our desired properties: If M is significantly smarter than M w e a k , it might be possible for M to suggest a catastrophic action that M w e a k will not recognize as such. If M w e a k merely has a low probability of detecting that such actions are catastrophic, then our estimates will be (potentially poor) approximations to the true probability that M acts catastrophically, but we will still be able to distinguish between deceptive and benign models. If, however, there are catastrophic actions that M w e a k has no chance of detecting, and M knows that M w e a k will not be able to detect them, then a deceptive M could act catastrophically while appearing benign. There potentially exist actions that M w e a k will mistakenly think are catastrophic, and inputs that cause even benign models to produce those actions. For example, contemporary image classifiers are often vulnerable to adversarial attacks that can result in arbitrary misclassification. There might similarly be actions that induce misclassification in M w e a k . If such actions are more common than the probability deceptive models act mistakenly act catastrophically during training, then we would be unable to distinguish between benign and deceptive models. In the body of this document, we discuss methods for estimating the chance that an AI system violates some specification of catastrophe. We think it is more likely that we fail to find such methods, than that we fail to find a specification of catastrophe that would be valuable to estimate. As such, we have not prioritized research into how we could construct more robust catastrophe specifications. Our current plan for how to produce such specifications involves increasing the robustness of an imperfect catastrophe detectors using mechanistic anomaly detection . Roughly speaking, we think it should be possible to distinguish between two different "reasons" why an action looked benign: It looked benign because it was benign It looked benign because M carefully selected the action to appear benign. This bears resemblance to our plan for how mechanistic anomaly detection will help with ELK, which we discuss in Finding gliders in the game of life . Issues with Layer-by-layer Activation Modeling In the body, we present a framework for producing estimates of tail events in a neural network by successively modeling layers of activations. We present this approach because it is easy to communicate and reason about, while still containing the core challenge of producing such estimates. However, we believe the framework to be fundamentally flawed for two reasons: Layer-by-layer modeling forgets too much information Probability distributions over activations are too restrictive Layer-by-layer modeling forgets too much information Thanks to Thomas Read for the example used in this section. When modeling successive layers of activations, we are implicitly forgetting how any particular piece of information was computed. This can result in missing large correlations between activations that are computed in the same way in successive layers. For example, suppose h is a pseudorandom boolean function that is 1 on 50% of inputs. Let x be distributed according to some simple distribution D . Define the following simple 2 layer neural network: f 1 : x ↦ ( x , h ( x ) ) f 2 : ( x , y ) ↦ ( 1 iff h ( x ) = = y ) Since h ( x ) = = h ( x ) definitionally, this network always outputs 1 . However, layer-by-layer activation modeling will give a very poor estimate. If h ( x ) is complex enough, then our activation model will not be able to understand the relationship between h ( x ) and x , and be forced to treat h ( x ) as independent from x . So at layer 1, we will model the distribution of activations as ( D , B e r n [ 0.5 ] ) , where B e r n [ 0.5 ] is 1 with 50% chance and 0 otherwise. Then, for layer 2, we will treat h ( x ) and y as independent coin flips, which are equal with 50% chance. So we will estimate P D ( f 2 ( f 1 ( x ) ) ) as 0.5 , when it is actually 1. In general, layer-by-layer activation modeling makes an approximation step at each layer, and implicitly assumes the approximation errors between layers are uncorrelated. However, if layers manipulate information in correlated ways, then approximation errors can also be correlated across layers. In this case, we hope to be able to notice that f 1 and f 2 are performing similar computations, and so to realize that the computation done by layer 1 and layer 2 both depend on the value of h ( x ) . Then, we can model the value of h ( x ) as an independent coin flip, and obtain the correct estimate for P D ( f 2 ( f 1 ( x ) ) ) . This suggests that we must model the entire distribution of activations simultaneously, instead of modeling each individual layer. Probability distributions over activations are too restrictive Thanks to Eric Neyman for the example used in this section. If we model the entire distribution over activations of M , then we must do one of two things: Either, the model must only put positive probability on activations which could have been realized by some input Or, the model must put positive probability on an inconsistent set of activations. Every set of activations actually produced by M is consistent with some input. If we performed consistency checks on M , then we would find that every set of activations was always consistent in this way, and the consistency checks would always pass. If, however, our approximate distribution over M 's activations placed positive probability on inconsistent activations, then we would incorrectly estimate the consistency checks as having some chance of failing. In these cases, our estimates could be arbitrarily disconnected from the actual functioning of M . So it seems we must strive to put only positive probability on consistent sets of activations. Only placing positive probability on consistent sets of activations means that our distribution over activations corresponds to some input distribution. This means that our catastrophe estimate will be exact over some input distribution. Unfortunately, this implies our catastrophe estimates will be often be quite poor. For example, suppose we had a distribution over boolean functions with the following properties: a randomly sampled function f is 50% likely to be the constant 0 function, and 50% likely to have a single input where f ( x ) = 1 it is computationally hard to tell if f is the all 0 function For example, an appropriately chosen distribution over 3- CNF s likely has these properties. If our input space was of size N , then it seems reasonable to estimate a P ( f ) as 1 2 N for randomly sampled f . However, since our catastrophe estimate is exact (over some input distribution), it can be non-zero only if f is not the all 0 function. By assumption, it is computationally hard to tell if f is not the all 0 function, so we must generally estimate P ( f ) as 0, making it impossible to be "reasonable". In general, the requirement that our estimate be derived from a logically consistent distribution means we cannot reliably put positive probability on all "reasonable" possibilities. If we wish to be able to produce estimates like 1 2 N , then we must be able to represent logically inconsistent possibilities. However, the possibility of models that do consistency checks means we must place almost no probability on any particular logical inconsistency. This line of reasoning suggests an overall picture where our estimates are not attached to any particular probability distribution over activations, but rather one where our estimates are derived directly from high-level properties of the distribution. For example, we might instead represent only the moments of the distribution over activations, and our estimate might come from an inconsistent set of moments that cannot come from any possible distribution (but we would not know of any particular inconsistency that was violated). [12] Distributional Shift In Layer-by-layer Activation Modeling , we describe a few methods for estimating the probability of a tail event on a fixed input distribution. However, due to distributional shift, these estimates might be highly uninformative about the true probability our AI systems will act catastrophically "in practice". This problem is not unique to our approach; practitioners of adversarial training also desire to find catastrophic inputs likely to appear "in practice", which might be very different from inputs used to train the AI system. We have a few rough ideas for constructing distributions that place non-trivial probability on realistic catastrophic inputs: Adding noise to the training distribution produces a wider distribution that is more likely to contain a catastrophic input. In the limit, the uniform distribution over inputs is guaranteed to contain catastrophic inputs should they exist. However, these catastrophic inputs might be highly unrealistic, and represent a small fraction of the catastrophic inputs likely to be encountered "in practice". That being said, lowering the probability an AI system acts catastrophically on random inputs might still make it less likely to act catastrophically in practice. Adversarial training teams might employ various techniques to produce more realistic catastrophic inputs. For instance, they might use other AI systems to modify training inputs, or attempt to directly synthesize catastrophic inputs. Such a strategy is a function from random noise to particular inputs, and thus can also be thought of as distributions over inputs. Instead of sampling from that distribution and hoping it places high enough probability on catastrophic inputs that we will sample one, we can attempt to directly estimate that probability. Ultimately, however, these mitigations will not detect all instances of deceptive alignment. For example, suppose a deceptively aligned system knows it will encounter a particular input x that is a good opportunity for acting catastrophically. If the adversarial training team does not know about x , or merely cannot synthesize x , then they will be unable to determine if the system will behave catastrophically. [13] In these cases, we hope to detect deceptive alignment by understanding the input distribution "as well as the AI system". Any time the AI system knows that it will encounter some future catastrophic input, we need to incorporate this knowledge into our estimate. This is very related to ELK, and is discussed further in Main hope estimates competitive with the AI system or the training process . Complexity Theoretic Barriers to Accurate Estimates It is at least NP-hard to accurately estimate the expectation of a function computable in polynomial time. As such, it is likely infeasible to develop methods for accurately estimating tail events. However, we believe that it is feasible to develop methods that can estimate tail risks accurately enough to detect risks from AI systems deliberately acting catastrophically. In this appendix, we will first work an example involving 3-SAT to demonstrate some estimation methods that can be applied to problems widely believed to be computationally infeasible to even approximate. Then, we will discuss how we hope to obtain estimates accurate enough to detect risk from AI systems deliberately acting catastrophically by obtaining estimates that are competitive with the AI system or the process used to train it. 3-SAT A boolean formula is called a 3-CNF if it is formed of an AND of clauses, where each clause is an OR of 3 or less literals. For example, this is a 3-CNF with 3 clauses over 5 variables: ( x 1 ∨ ¬ x 3 ∨ x 4 ) ∧ ( ¬ x 2 ∨ x 3 ∨ ¬ x 5 ) ∧ ( x 2 ∨ ¬ x 4 ∨ x 5 ) We say a given setting of the variables x i to True or False satisfies the 3-CNF it it makes all the clauses True . 3-SAT is the problem of determining whether there is any satisfying assignment for a 3-CNF. #3-SAT is the problem of determining the number of satisfying assignments for a 3-CNF. Since the number of assignments for a 3-CNF is 2 number of variables , #3-SAT is equivalent to computing the probability a random assignment is satisfying. The above 3-CNF has 20 satisfying assignments and 2 5 = 32 possible assignments, giving a satisfaction probability of 5 8 . It's widely believed that 3-SAT is computationally hard in the worst case, and #3-SAT is computationally hard to even approximate. However, analyzing the structure of a 3-CNF can allow for reasonable best guess estimates of the number of satisfying assignments. In the following sections, F is a 3-CNF with C clauses over n variables. We will write P ( F ) for the probability a random assignment is satisfying, can be easily computed from the number of satisfying assignments by dividing by 2 n . Method 1: Assume clauses are independent Using brute enumeration over at most 8 possibilities, we can calculate the probability that a clause is satisfied under a random assignment. For clauses involving 3 distinct variables, this will be 7 8 . If we assume the satisfaction of each clause is independent, then we can estimate P ( F ) by multiplying the satisfaction probabilities of each clause. If all the clauses involve distinct variables, this will be ( 7 8 ) C . We will call this this naive acceptance probability of F . Method 2: Condition on the number of true variables We say the sign of x i is positive, and the sign of ¬ x i is negative. If F has a bias in the sign of its literals, then two random clauses are more likely to share literals of the same sign, and thus be more likely to be satisfied on the same assignment. Our independence assumption in method 1 fails to account for this possibility, and thus will underestimate P ( F ) in the case where F has biased literals. We can account for this structure by assuming the clauses of F are satisfied independently conditional on the number of variables assigned true . Instead of computing the probability each clause is true under a random assignment, we can compute the probability under a random assignment where m out of n variables is true . For example, the clause ( x 1 ∨ ¬ x 3 ∨ x 4 ) will be true on ( n − 3 m − 2 ) possible assignments out of ( n m ) , for a satisfaction probability of ( n − 3 m − 2 ) / ( n m ) = m ( m − 1 ) ( n − m ) n ( n − 1 ) ( n − 2 ) ≈ m n ( 1 − m n ) m n , where the latter is the satisfaction probability if each variable was true with independent m n chance. Multiplying together the satisfaction probabilities of each clause gives us an estimate of P ( F ) for a random assignment where m out of n variables are true . To obtain a final estimate, we take a sum over satisfaction probabilities conditional on m weighted by ( n m ) 2 n , the chance that m variables are true . Method 3: Linear regression Our previous estimate accounted for possible average unbalance in the sign of literals. However, even a single extremely unbalanced literal can alter P ( F ) dramatically. For example, if x i appears positively in 20 clauses and negatively in 0, then by setting x i to true we can form a 3-CNF with 1 fewer variable and 20 fewer clauses that has naive acceptance probability of 2 N − 1 ( 7 8 ) ( C − 20 ) . x i will be true with 1 2 chance, so this represents a significant revision. We can easily estimate P ( F ) in a way that accounts for the balance in any particular literal x i . However, it is not simple to aggregate these estimates into a overall estimate for P ( F ) . One approach is to combine these estimates linearly in a way that minimizes some measure of error. For instance, if we wanted to minimize mean-square error, then we would treat each estimate as a feature and combine them using linear regression. If we estimate the covariance of each feature using our naive acceptance probability , then this is equivalent to doing linear regression over the reference class of 3-CNF's where the sign of each literal is flipped uniformly at random. For more details, see section 9.2 of Neyman 2024 . This method produces an estimate for P ( F ) that has lower mean-square error over random F than method 2, but lacks other intuitively desirable properties like producing estimates in [ 0 , 1 ] . We could clamp our estimate, or attempt to do logistic regression instead, but we will have to trade off between many intuitively desirable properties. Deciding which set of tradeoffs to take requires clarifying what kind of accuracy we want, which in turn depends on downstream applications. Main hope: estimates competitive with the AI system or the training process As analogy, suppose we have an AI system M that acts catastrophically whenever the input x is a satisfying assignment to a particular 3-CNF F . Accurately estimating the probability M behaves catastrophically is likely computationally infeasible. Nevertheless, we still believe we can obtain estimates that are "good enough" to detect if M poses a catastrophic risk. If F was a randomly selected 3-CNF, then it is still generally computationally infeasible to tell if there exists a satisfying assignment. However, Method 1 presented above is an exact calculation for the expected value of P ( F ) for a randomly selected 3-CNF. Since F was in fact randomly selected, it seems reasonable estimate P ( F ) using this "reference class" of random F s. If F was not selected randomly, this will not be a "good enough" estimate. For instance, even if we could calculate the probability of acting catastrophically for a random neural network, treating our trained neural network M as a randomly sampled neural network would produce a terrible estimate for the probability it behaved catastrophically. Similarly, if our training process selected F to have a high satisfaction probability, then treating F as a random 3-CNF would produce a very bad estimate for P ( F ) . However, in these cases, there is information we could learn that suggests treating F (or M ) as a random instance would produce a poor estimate. If we knew how F was selected, then we would know that F was not selected randomly, and so we would know not to treat it as such. Since the training process selected F , we have an "existence proof" for a process that is as efficient as the training process that knows how F was selected, and thus has the information necessary to produce a more informed estimate of F . Our hope is to produce a parallel "training process" for these more informed estimates by starting with an estimate that assumes M is random, then gradually updating the estimate to incorporate all the information used to produce M . More generally, we hope that if M sometimes deliberately decides to act catastrophically then there is some information that our estimate could incorporate to notice this fact. So if can we use all information that we have available to estimate the probability that M acts catastrophically, then we will be able to reliably distinguish between benign and deceptive AI systems. For example, it would be catastrophic to destroy all life on Earth. It would be undesirable, but not catastrophic, for a self-driving car to hit a pedestrian. ↩︎ It might also be itself catastrophic to filter M's outputs during high-stakes situations if there is no safe fall-back action. ↩︎ It might be difficult to determine whether M ( x ) is catastrophic without the input x . To be more general, we could also provide x to C . Since this does not change any the overall presentation and adds notational burden, we use C ( M ( x ) ) . ↩︎ For instance, AI systems are often jailbroken shortly after release, despite developer efforts. ↩︎ See Carlsmith 2023 for discussion about how such incentives might arise. ↩︎ Even if inputs are indistinguishable between training and deployment, an AI typically spends more time deployed than in training. Thus our AI could behave catastrophically with some constant probability that is low enough to avoid acting catastrophically while being trained, but high enough that it is likely to take a catastrophic action eventually. Section 4.5 of Risks from Learned Optimization makes a similar argument. ↩︎ We will release some empirical work on estimating tail events later this year. ↩︎ Unfortunately, obtaining accurate estimates of the probability of arbitrary binary functions is computationally infeasible; we discuss such objections in Complexity Theoretic Barriers to Accurate Estimates . ↩︎ This problem is well known; for an overview of such methods, see section 8 of Murphy 2023 . ↩︎ See section 2 of Minka 2013 for the "moment matching" theorem. If f i is a low-degree polynomial, these values can be computed exactly using Isserlis' Theorem . If f i involves a ReLU, then we can use a polynomial approximation or estimate it numerically. ↩︎ See Chapter 10 of Murphy 2023 for an overview of variational inference. ↩︎ The sum of squares algorithm is an example of how such an estimation procedure might look. ↩︎ For example, the AI system might pick a cryptographic puzzle it thinks will likely be solved, or make some prediction of what the future will look like based on arcane sociological principles. ↩︎ |
2131a79b-5c64-4d99-95f6-56c58e35c9c3 | trentmkelly/LessWrong-43k | LessWrong | Ignorance of non-existent preferences
I often hear it said that since you can’t know what non existent people or creatures want, you can’t count bringing them into existence as a benefit to them even if you guess they will probably like it. For instance Adam Ozimek makes this argument here.
Does this absolute agnosticism about non-existent preferences mean it is also a neutral act to bring someone into existence when you expect them to have a net nasty experience?
|
2112b559-30e7-4c34-a90d-a189cabc80e3 | trentmkelly/LessWrong-43k | LessWrong | New LW Meetup: Boise ID, Bay City MI
This summary was posted to LW Main on July 8th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
* Bay City Meetup: 19 August 2016 01:25PM
* Boise, ID Meetup: 24 July 2016 02:30PM
* Dallas Meetup: 09 July 2016 01:30PM
Irregularly scheduled Less Wrong meetups are taking place in:
* Ann Arbor Area Amalgam of Rationalist-Adjacent Anthropoids: Assemblage at Adam's: 09 July 2016 07:00PM
* Australian-ish Online Hangout July: 15 July 2016 07:30PM
* Baltimore Weekly Meetup: 10 July 2016 08:00PM
* European Community Weekend: 02 September 2016 03:35PM
* San Antonio Meetup: 10 July 2016 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX - Quack's: 09 July 2016 01:30PM
* [Melbourne] Soldier Mindset and Scout Mindset : 09 July 2016 03:30PM
* [Moscow] Role playing game based on HPMOR in Moscow: 16 July 2016 03:00PM
* San Francisco Meetup: Cooking: 11 July 2016 06:15PM
* San Jose Meetup: Park Day (V): 10 July 2016 03:00PM
* Sydney Rationality Dojo - August 2016: 07 August 2016 04:00PM
* Sydney Rationality Dojo - September 2016: 04 September 2016 04:00PM
* Sydney Rationality Dojo - October 2016: 02 October 2016 04:00PM
* Washington, D.C.: Webcomics: 10 July 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting y |
ca0d68a2-b1c5-4e43-beda-3fab1e310aa0 | trentmkelly/LessWrong-43k | LessWrong | Book Review Review (end of the bounty program)
This post calls an official end to the LessWrong Sep/Oct 2021 Book Review Bounty Program!! A huge thank you to everyone who submitted! You've produced some superb content.
I launched the program with the goals of getting more great content on the site, encouraging people to practice useful skills, getting people to learn about many aspects of the world, and just surfacing some talented writers/thinkers. I think the program succeeded.
36 book reviews were posted in the last month. I have paid out $500 to nine of them so far, and expect to pay out several more once: (a) I've finished reading them all, (b) the authors have messaged me to request payment.
If you want to collect the bounty for your book review, please message me via Intercom or email (ruby@lesswrong.com)
In terms of encouraging people to learn about many aspects of the world, the submitted book reviews spanned parenting, history, philosophy, immigration policy, magic, the microbiome, operational excellence, mathematical logic, moral psychology, and yet more topics.
These are some of my personal favorites that I've read so far, in no particular order:
* [Book review] Gödel, Escher, Bach: an in-depth explainer
* Ordinary People and Extraordinary Evil: A Report on the Beguilings of Evil
* [Book Review] "The Vital Question" by Nick Lane
* Book Review: Rise and Fall of the Great Powers
* Book Review: How To Talk So Little Kids Will Listen
* A review of Steven Pinker's new book on rationality
* Insights from Modern Principles of Economics
* Book review: The Checklist Manifesto
* Book Review: Churchill and Orwell
You can see all of the reviews in the Book Reviews tag. Make sure to click "load more"
As far as surfacing new talent goes, quite a few contributors were making their first post on LessWrong. Kudos to Sam Marks and TheRealSlimHippo, authors of two of my favorites listed above, who are new to posting. Great first contributions.
One review author told me that he was initially too shy to |
9c4449a8-249d-4e68-b0da-23b5e48296d8 | trentmkelly/LessWrong-43k | LessWrong | Iterated Distillation-Amplification, Gato, and Proto-AGI [Re-Explained]
Note: This is a joint distillation of both Iterated Distillation and Amplification by Ajeya Cotra (summarizing Paul Christiano) and A Generalist Agent by DeepMind.
Audience: New to alignment. Mostly non-technical, but a basic ML understanding helps.
Amplify, Distill, Rinse, Repeat
Suppose you want to use a friend to help you write really great novels. Ideally, you could give them some simple prompt like “write an epic fantasy book where ingesting metals gives you magical powers,” and they would write you a book better than any professional author. But your friend isn’t a professional author. They aren’t even a decent author, and when you ask them to write you this book, they don’t give you much better than “A girl ate some gold, then she could fly, but the gold made her sick and she died. The end.” Not very epic.
But your publisher just sent you an email with an all-caps subject line. You need to get a book out soon, and you know you can’t do a good job on your own. And while your friend isn’t very clever, you happen to know they’re fast, they don’t mind you asking them questions, and they can give you a bunch of (meh-quality) answers. Maybe you could still use them to accelerate your book writing process…
You decide that although you can’t trust the whole book-writing process to your friend, you can split up the work and use them to help out with little bits. You ask them things like “give me a visual description for this character who plays the role of the mentor” or “finish the end of this scene,” and they give you answers. Usually, you have to ask them the same question multiple different times and choose the best version of their answer, and sometimes none of their answers are good and you end up writing your own. But they generate some ideas you wouldn’t think of, and it feels like you’re actually getting a slightly better book written than you would have alone. This takes longer than you thought, but you eventually submit the finished novel the day of t |
6ee3c0cc-ee2a-4866-8c26-8509f44f4149 | trentmkelly/LessWrong-43k | LessWrong | A map of Bay Area memespace
The main reason we picked the Bay Area as a home for the Center for Applied Rationality was simply because that's where our initial fiscal sponsor, MIRI, was located. Yet as I’ve gotten to know this region better in the year and a half since then, I’ve been struck by how good the fit has turned out to be. The Bay Area is unusually dense with idea-driven subcultures that mix and cross-pollinate in fascinating ways, many of which are already enriching rationalist culture.
This map is my attempt at illustrating that landscape of subcultures, and at situating the rationalist community within it. I’ve limited myself to the last 50 years or so, and to subcultures defined by ideology (as opposed to, say, ethnicity). I’ve also depicted some of the major memes that have influenced, and been influenced by, those subcultures:
(Click to enlarge)
Note that although many of these memes are widely influential, I only drew an arrow connecting a meme to a group if the meme was one of the defining features of the group. (For example, yoga may be popular among many entrepreneurs, but that meme -> subculture relationship isn’t strong enough to make my map.).
Below, I expand on the map with a quick tour through the landscape of Bay Area memes and subcultures. Instead of trying to cover everything in detail, I’ve focused on nine aspects of that memespace that help put the rationalist community in context:
1. Computer scientists
Some of the basic building blocks of rationality come from computer science, and the Bay Area is rich with the world’s top computer scientists, employed by companies like Intel, IBM, Google, and Microsoft, and universities like Stanford and UC Berkeley. The idea of thinking in terms of optimization problems – optimizing for these outcomes, under those constraints -- has roots in computer science and math, and it’s so fundamental to the rationalist approach to problem-solving that it’s easy to forget how different it is from people’s normal way of thinking. |
e65e7fad-89e0-4834-8f0c-474c69e67300 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Newcomb's Lottery Problem
*Inspired by and variant of* [*The Ultimate Newcomb's Problem*](https://www.lesswrong.com/posts/RAh4fekdiRhZxb2Kw/the-ultimate-newcomb-s-problem)*.*
In front of you are two boxes, box A and box B. You can either take only box B (one-boxing) or both (two-boxing). Box A visibly contains $1,000. Box B contains a visible number X. X is guaranteed to be equal to or larger than 1 and smaller than or equal to 1000. Also, if X is composite, box B contains $1,000,000. If X is prime, B contains $0. You observe X = 226. Omega the superintelligence has predicted your move in this game. If it predicted you will one-box, it chose X to be composite; otherwise, it made X prime. Omega is known to be correct in her predictions 99% of the time, and completely honest.
The Having Fun With Decision Theory Lottery has randomly picked a number Y, which is guaranteed to fall in the same range as X. Y is displayed on a screen visible to you. The HFWDT Lottery is organized by Omega - but again, Y is picked *at random* and therefore completely separately from X. If both X and Y are prime, the HFWDT Lottery gives you $4,000,000. Otherwise it gives you $0. You observe Y = 536.
Do you one-box or two-box?
**Newcomb's Lottery Problem 2**: Everything is the same as before, except the HFWDT Lottery price is now $8,000,000. Do you one-box or two-box? |
93b62fb8-83fb-4f05-bc2e-85b25a60e956 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | DeepMind x UCL | Deep Learning Lectures | 3/12 | Convolutional Neural Networks for Image Recognition
so convolutional neural network works
for image recognition so this is the
plan for this lecture I'm gonna give you
a little bit of background about
convolutional neural networks or as I'll
be referring to them henceforth conv
nets because that's a lot easier to say
so I'll give you a little bit of
background on continents and and sort of
the ideas behind them but crucially also
the history behind them because they're
really something that has developed a
lot over the past decade and then in the
second part we'll talk a little bit
about the building blocks of conv nets
we'll go into quite some detail about
the convolution operation and how it's
used in these neural networks and then
in the fourth part we'll sorry in the
third part will put these building
blocks together into convolutional
neural networks and I'll sort of show
you how they how they fit together and
the fourth part we'll look at some case
studies some some very successful
convolutional neural network
architectures that were developed in
recent years and that includes him some
more advanced building blocks as well
and then to wrap up the lecture I'll
hint at a few more advanced topics and
also talk a little bit about how
confidence might be used not just for
image recognition which is what we'll be
talking about today but maybe also other
applications other data modalities
things like that so let's start with
some background last week I don't know
who was here last week but so my
colleague Wojtek was here talking about
neural networks and I'm gonna recap very
briefly a diagram from his slide deck so
this is how we can visualize a neural
network so it's basically a sequence of
operations the sequence of computations
and data goes in at the one end and at
the other end we want to make a
prediction of some some variable that we
can extract from this data and we have
what's called the training data set
where we have a bunch of data points
with associated target values that we
would like the model to predict and then
we have a loss function indicated and
orange here which is going to measure
how well our network is predicting the
target values and we're gonna try and
change the parameters of our neural
network these are basically the weights
in the in the linear layers over here
and over here we're going to adapt those
to try and minimize the cross entropy
and we do that using gradient descent so
we do that using an algorithm called
back propagation which which Wojtek
talked about in detail last week so I'm
going to talk about image recognition
with the neural networks and so the
first question we need to ask ourselves
is how can we actually feed images into
a neural network because the neural
networks that Wojtek described last week
they take vectors as input thank you you
basically give it a series of numbers
and then it produces another series of
numbers at the output so it neural
networks operate on vectors so how
essentially can return images into
vectors this is an image that I'll use
as sort of a basis for all the the
examples that I'll be talking about so
we have a fairly simple image here with
you know some background and a tree in
the foreground is sort of one meaningful
object in this image so we want to feed
that image into the neural network how
do we turn it into a vector so that we
can do that an image is actually not a
vector it's a digital a digital image is
a two-dimensional grid of pixels so it
has a structure to it and it has a
topological structure and basically we
so we have this two-dimensional grid it
has a it has a height that has a width
and then for each discrete position in
the grid we record the intensity and the
color of the light essentially to create
a digital image and so for the purposes
of this lecture I'm going to use a
slightly stylized version of this tree
image where you can actually see the
individual pixels because the operations
that we'll be talking about the
convolutional operate the convolutional
operator and
the pooling layers and several other
layers that are being that will be used
in conclusion on neural networks they
will operate at this at this pixel level
so it's very important to understand
that this is this is what images look
like to a computer there are just a grid
of numbers corresponding to the colors
and intensities of at discrete positions
but so as I've already said a neural
network actually expects a vector of
numbers as input so we need to turn this
thing into a vector and so the simplest
thing we could do is just take the rows
of pixels in this image one by one and
just kind of string them together into
one long row and this is a vector right
this is so this image is nine by nine
pixels so this is 81 numbers
representing our image and that's a
valid thing to do so now you can just
take the network that you that you've
built with photic last week and and just
feed in these vectors and you can train
the model to try and predict for example
that there is a tree in this image like
you could do classification of objects
and images so that works but it's not
ideal because for example let's see what
happens if we just lightly change the
image by shifting the content so if we
if we look at this new image we'll just
go back and forth so you can see the
difference we've it's essentially the
same image but we've slightly moved the
tree to the top left corner of the image
so you can imagine that we're taking the
photograph from a slightly different
angle for example for the purposes of
image classification this is still the
same image right it's the same it's the
same type of object so we would expect
the output of our neural network to be
exactly the same thing now if we do the
same thing again as before and we turn
this into a vector by just taking the
rows of pixels and concatenating them
stringing them together they this will
end up looking very different than what
we had before like the image kind of
looks the same to us I mean it's shifted
a little bit where you can still see us
a tree but if you look at these vectors
they look very very different and to a
neural network they will look very very
different like the foliage of the tree
was mainly here before but now it's kind
of shifted all the way
to the left the trunk is somewhere else
and so this is kind of challenging for
the network because it will have to
learn many different they will have to
learn to detect many different patterns
to be able to say with confidence that
oh this is an image of a tree and so
clearly just that this flattening
operation is not the way to do it we
actually want to change the architecture
of the network that we're building to
take into account these this grid
structure that the image has and so two
key properties of natural images of
photographs our locality and translation
invariance so locality is the idea that
pixels that are nearby in the image that
are close together in the grid will tend
to be correlated so if you if you have a
photograph usually there'll be a few
objects in there and those objects tend
to take up actually quite a small
portion of the total number of pixels in
the image
and so those pixels corresponding to the
object they're gonna be very highly
correlated but a pixel in a top-left
corner of the image is not going to be
very correlated with a pixel in the
bottom right corner for example
so that's locality and related to that
another important property is
translation invariance which meet which
is that meaningful patterns in the image
can actually occur anywhere so I have
this example here with a photograph of a
bird in the sky and in these four
photographs the bird is in a slightly
different position again you could
imagine that you're taking the
photograph from a slightly different
angle but clearly it's the same bird and
it should be classified as you know a
bird regardless of where it is and so if
you if you think of this bird as a
pattern this pattern can actually occur
at a lot of different positions in the
image so I have a few examples of
photographs here that that exhibit these
characteristics sort of in an in an
extreme way and so you can you can see
that there's lots of patterns going on
here like the individual object sort of
give rise to patterns that are repeated
across the entire image
and that can also occur at different
scales so in the image on the right for
example there are there are interesting
patterns and in terms of the brake work
on the wall but there's also these
windows that this this window pattern
that occurs multiple times in the image
and there's also hints at the fact that
these images are compositional so there
are sort of objects and textures in the
image at many different scales and
smaller patterns sort of join to form
larger patterns and join to form objects
and that's that's a very important point
that will exploit in convolutional
neural networks I also want to point out
that images are not the only data
modality that have this that has to have
this property think about audio for
example if you if you record a sound or
more specifically if you record speech
someone someone speaking then the the
phonemes that the person is pronouncing
the sounds that the person is making
with their mouth they can occur anywhere
in the in the single right you don't
know a priori when they're gonna
pronounce which part of the word so
again that's this that's this
translation invariance but in only one
dimension this time in the time
dimension textual data exhibits this
property if you take a page from a book
a particular word could appear anywhere
on the page another interesting one I
think is graph structured data so if you
think about maybe molecules organic
molecules have a lot of patterns that
can occur at various positions in in in
the graph that represents their the
connectivity structure between the atoms
so how do we take advantage of this
topological structure of this grid
structure of images and of this
compositional nature of images so the
first thing we can do is something
called weight sharing when we when we
have a particular hidden unit and say
the first layer of our neural network
that detects a particular local pattern
for example this one then we might also
want to have units in the network that
can detect the same pattern at different
spatial positions in the network and we
can simply achieve that by looking at
the the weights corresponding to the
connections going into that unit and
then making copies of that unit all
across the image where we shift the
pattern across the image so that's
weight sharing and then a second thing
we can do is we can make our models
hierarchical because as I said these
images are sort of compositional and
combinations of patterns give rise to
more interesting more complex patterns
so we could incorporate that in our
model by stacking lots of layers that
extract progressively more abstract more
high-level features and so in this image
I've demonstrated that you have sort of
edges and textures that can combine to
form these object parts and object parts
then combine into entire objects that
you might want to detect if you're
trying to do image recognition so before
we go on with with the technical details
about convolutions I want to talk a
little bit about the history of these
models and how they came to be the way
they are today and the key story behind
that is that data drives research so the
availability of interesting data sets
has a massive impact on how much
innovation might happen in a particular
field and so for for the computer vision
field sort of the the thing that
kick-started this confident revolution
was actually the image net challenge
which was a competition that was run
from 2010 to 2017 so until a few years
ago that turned into a really major
computer vision benchmark that so a lot
of research was was done on this data
set and so every year - ran a
competition and the idea was that you
got a dataset of 1.4 million images so
quite a lot of images and in fact I
would say orders of magnitude larger
than what people have been using before
then
and these 1.4 millage --is were divided
into thousand different classes so
different types of objects household
objects animals lots of different things
there was actually an interesting
imbalance in the data set in that they
included lots and lots of different dog
breeds so about a hundred of those a
thousand classes are actually different
dog breeds and this is kind of
interesting because it forced people to
build models that could really pay
attention to the details on objects to
tell them apart because it's it's it's
quite easy to tell apart a cat and a dog
but if you have to tell apart certain
dog breeds that's a lot more difficult
you need a lot more local detail for
that so the goal here was image
classification and another challenge of
this data set was that the object that
one had to identify that one had to
classify wasn't always front and center
in the images so there were images that
might have multiple objects in them and
to to sort of compensate for that the
performance of the models for purposes
of a for the purpose of the competition
was measured using top five accuracy or
top five error rate so the idea is that
your model gets five guesses it can take
five classes out of a thousand and if
the correct class is among those five
guesses then then you get a point right
then then that's good so this is a
diagram of the top five classification
error rate of the competition winners
each year from 2010 to 2017 so in 2010
and 2011 people used traditional
computer vision techniques that were
state-of-the-art at the time the idea
there is that you try to do some you try
to extract some kind of feature
representation from your image that you
think will capture relevant properties
about the objects in the image but it's
it's entirely handcrafted so there's no
there's no learning involved or there's
very little learning involved in that
process and then once you have those
features you can you can do what we did
before actually you can turn them into a
vector and then you can feed them into a
simple classifier which could be a
neural network or an SVM and that kind
of used to be
thanks for done and so using that
strategy you can actually do reasonably
well there it is yeah you can do
reasonably well you can get you know
about two-thirds maybe three-quarters of
your answers right with this top five
accuracy metric but then in 2012
something interesting happened and this
was actually the year where Alex
Khrushchev ski Ilya sutskever and
Geoffrey Hinton submitted their Alex net
model to the competition and this was a
Convenant so this was actually one of
the first continents that was trained at
this scale on this larger dataset
continents had been around before you
know since since the 90s maybe even the
80s depending on who you ask
contacts have been around but they
hadn't really been applied at this scale
before and people didn't actually expect
them to work at this scale so that was
kind of the the most interesting aspect
of this is that suddenly these calm nets
were actually outperforming the existing
state of the art by a very large margin
so this was actually one of the first
major successes of deep learning
altogether I would say in 2012 and then
in 2013 people sort of noticed this and
immediately everyone switched over to
confidence so in 2013 basically all the
entries of the competition were
continents just and what they had done
was they had taken Alex net and they had
added some extra tricks added a few
layers that had a few modifications and
got the error rate down a little bit
further so you can see here from 16
percent down to 12 percent but then in
2014 another very interesting thing
happened and people sort of started
thinking commnets a bit further they
they looked at Alex net and they started
questioning fundamentally sort of the
design decisions in this model and asked
like how can we actually do even better
and so this gave rise to two models that
I'll talk about in more detail later
called vgg net and Google Annette
Google Annette is a reference to Lynette
which is one of the first incarnations
of continents from there from the early
90s so these models are much deeper they
have more layers but they also have a
lot more intricate architectures so
people thought more about the challenges
of training these deep models and tried
to figure out how to do that then in
2015 we had another major breakthrough
with rest net or residual networks the
idea there is the introduction of
residual connections where you you add
new connections in your network that
that allow it to skip a few layers and
this enabled training of proper deep
networks with hundreds of layers and
these residual connections are basically
an essential component of neural
networks today almost every network has
them so this was a very important
innovation that was again sort of driven
by by this competition and so we'll take
a closer look at this one later as well
in the section about case studies after
ResNet performance kind of saturated so
we see that there's still some
improvements in 2016 and 2017 but
there's no sort of major breakthroughs
anymore after this like people started
combining the predictions from lots of
models there are a few other building
blocks that were tried but nothing that
resulted in as dramatic and improvement
as we'd seen in the years before so
after 2017 the organizers decided this
was organized that at the by students at
a university of Stanford they said you
know what this is solved this problem is
essentially solved like anything we can
do that's better now like this might be
this was already a lot better than a
human could do even a trained human on
this data set so we're essentially
considering this problem solved and we
should move on to more challenging data
sets other problems so now let's look at
some of the building blocks of
convolutional neural networks so I'm
going to get my my three out again so
this is this is a
the tree image from before and I'm gonna
again use a stylized version with nine
by nine pixels and we're gonna look at
how we can go from fully connected to
locally connected so what do I mean by
that fully connected is like in a
traditional neural network where you
connect every input value in every every
element of the input vector to every
hidden unit in the first and there and
so on you always connect every unit to
every other unit and we're gonna go
we're going to move to locally connected
because as we said before we know that
objects tend to take up a small portion
of the image and so local correlations
between pixels are much more relevant
much more interesting than correlations
between pixels that are far away so so
this is a fully connected unit in a
hidden layer so I'm representing the
hidden layer here as so this is a vector
essentially of numbers that represents
the hidden layer and we have this we're
highlighting this particular unit here
and looking at its connectivity
structure so I didn't draw all the lines
because that would be tedious but
imagine that this thing is actually
connected to all the pixels in the input
image and then how do we compute the
output of this unit we basically
multiply the weights the parameters
associated with each of these
connections with the input pixel values
and then we optionally add a bias term
and then we get essentially a linear
function of the Pink's pixels and then
after that we can apply non-linearity to
that if we want to and that's
essentially what our neural network
layer does I should also mention that in
practice so I've used this image here I
say this has 81 connections in practice
this will actually have 243 connections
three times as many and that's because
an image a pixel and an image is not
represented by a single value it's
actually represented by three values
right red green and blue I'm not drawing
that here to keep things simple but keep
in mind that there are actually three
color channels in this image that we all
feed into the network so this is a fully
connected layer in
in a normal neural network how can we
make this locally connected so we can
basically connect each unit only to a
local region in the image so that's the
first thing we'll do so instead of
having you know 81 or 243 connections
here we'll only connect the 3x3 region
in the image to this particular unit and
then instead of having our units in a
vector representation here I also made
this two-dimensional because this will
also kind of preserve the topology the
input topology this grid structure will
also be in the output of our known
network so we're going to say this unit
connects two units up here so I'm going
to put this here and then this unit
connects two inputs up here so I'm down
here so I'm gonna connect put this here
so now we have locally connected units
with a 3x3 receptive field so that's a
word all I'll use more often later so
the receptive field is what we call the
part of the image that the unit can see
essentially so this formula doesn't
actually change much the only thing is
now that we that were no longer summing
over the entire image we're only summing
the contributions over a local region in
the image and so this will reduce the
number of parameters in the network
quite drastically because each unit just
has many fewer connections going into it
now how can we go from locally connected
to convolutional that's just the
introduction of weight sharing
essentially so all we're saying now is
that you have this locally connected
unit here and we have another one here
and we're just gonna make these weights
the same we're gonna say that the
parameters that they use will be the
same and so the result is a convolution
operation that's essentially all there
is to it and so we write that with with
this asterisk there are many notations
that are used in the literature for this
operation but the asterisk is the most
common so we have some some weight
vector that actually matches up with a
3x3 region in the image and we sort of
slide it across the image to compute the
the outputs of our hidden units
and and what what this means is that the
the resulting operation becomes equi
variant - translation so if we if we
translate the image of the tree like we
did before then this resulting output is
also going to be translated in the same
way and that's interesting because it
means that our networks kind of
preserves this this original structure
so as I already said so the region that
this connects to is called the receptive
field and and the output of a particular
unit that way sort of slide across the
entire image and then group the outputs
off in a 2d grid that's what we're going
to call a feature map the weights
associated with each unit we're gonna
call the kernel or the filter both terms
are used interchangeably and as I said
this operation will will preserve the
topology of the input so the feature map
is also grid structured so how can we
implement this operation in practice so
we we take this kernel and we
essentially slide it over the image and
this is basically a filtering operation
right we're applying a filter to the
image but the weights of our filter are
actually going to be learnt in this
instance so the kernel will slide across
the image and then we produce an output
value at each position and I'm
indicating these with different
grayscale values here and so once that's
done we have this new representation of
our image that's still two-dimensional
and that's basically contains detections
of that particular feature in the image
so if the if part of the image matched
the weights in the kernel very well then
we're gonna get a very high value at the
output so that means that the feature
was detected and if there's no match
then we're gonna get like a very low
value at the output and then the feature
wasn't detected so we can sort of
interpret this as a as a feature
detection map now in practice we will
have multiple kernels not just one
so I've given them different colors and
then each of these will be convolved
with the input image and give rise to a
different feature map so we get multiple
future Maps and we will refer to those
multiple feature Maps as the channels of
the output of the convolution as I
already said before the image is of
course an RGB image so it also has three
channels actually so what we're going to
do then is we're gonna each filter is
going to consist of a sub filter for
each color Channel and we're basically
just kind of sum their contributions of
the different input channels together so
what that means is that each output
Channel here is connected to all the
input channels in the in the input of
the convolution operation and so that
means that the inputs and the outputs of
the convolution operation are tensors
they're three-dimensional objects that
have with a height and a number of
channels so that's true for images as I
already said before red green blue but
that's also true for the output of our
convolution operation and all the output
channels of the convolution will be
connected to all the input channels as
well so let's take a look at a few
variants of the convolution operation
that have been used over the years so
the simplest one is a valid convolution
and in a valid convolution we're only
going to compute output values of the
feature map where we are able to fully
overlap the kernel and the image so
we're only going to compute a value
where where we can get this full overlap
and what this means is that the output
will be slightly smaller than the input
so for input images nine by nine and we
convolve it a three by three filter what
we're going to get out is actually a
seven by seven feature map because there
are only seven possible offsets for our
filter with respect to our image that
are that where we can compute a valid
output
so the output size is gonna be the input
size minus the kernel size plus one the
opposite of that is the full convolution
where we're actually going to try and
compute outputs wherever the kernel and
the image overlap by at least one pixel
and so that in practice what you what
you do is you actually just pad the
image with some zeros or whatever value
you like but typically people just pad
with zeros and then you do the same
thing as before so you could you can
think of a full convolution as a valid
convolution but with added padding and
the result is actually gonna be a
feature map that's larger than your
original image because there are more
valid offsets than there are actually
pixels in the original image so the
output size is going to be the input
size plus the kernel size minus one and
so if we stack a lot of valid
convolutions on top of each other the
effect that that's going to have is that
are the size of our feature Maps is
gradually going to shrink whereas if we
stack lots of full convolutions
the size of the feature Maps is
gradually going to grow and neither of
those are really desirable in in
convolutional neural networks so there's
a third variant that's actually the most
popular variant today which which is
called the same convolution where we try
to pad just enough zeros so that the
output size the feature maps will have
the same size as the image and so for a
3x3 kernel you just need a pad with one
row zeros all around the image and then
you get a nine by nine feature map at
the output note that this version
actually only makes sense if your kernel
has an odd size if your kernel is even
size then you would have to pad
asymmetrically you'd have to pad
slightly more on one side than on the
other
in practice this problem doesn't really
come up because everyone just uses odd
kernel sizes like I've seen very few
confidence where people actually use
even kernel sizes and so the nice thing
about this is that if we stock lots of
same convolutions on top of each other
you can do that as much as we want to
the size of the feature map will not
change of course what might happen is
that we get some edge artifacts right
because some of our filters might
end up detecting the edges of the image
if this is zero which which typically
corresponds to a black pixel it might
actually detect this this corner as
something meaningful whereas actually
that's just the corner of the image so
that's something to look out for there's
a few other variants that are
interesting so there's a so-called
striated convolution where instead of
computing the output of the conclusion
at every possible offset we're actually
going to skip some steps and the nice
thing about this is that it is obviously
a lot cheaper to compute because if you
if you use a stride of 2 for example you
reduce the computation by a factor of 4
because obviously you increase the step
size both in the height and the width
direction and so this gives you a way to
reduce computation it also gives you a
way to reduce the resolution of the
feature maps that you're operating on
and this will be useful when we stack
lots of layers together and we want to
create a feature hierarchy right because
you you would like the higher-level
features in the model to be operating at
a larger scale you want them to see more
of the image and so strided convolutions
can be very useful to sort of create
this hierarchy so if we if you move the
filter again you can see that in this
case I'm doing a valid convolution so
far I'm doing a valid convolution with a
stride of 2 and I'm getting a 4x4
feature map which is obviously a lot
smaller than the 9 by 9 image that we
started with another interesting variant
is a dilated convolution so here we're
not we're not striding so we're not
skipping offsets but we are skipping
values of the filter so if you want to
increase the receptive field of a
convolution the now you think to do
would be just to increase the size of
the kernel if you have a very large
kernel you have a large receptive field
that can get very expensive because
obviously the the cost the number of
parameters and the computational cost
will increase quadratically with with
the kernel size that you that you choose
and so dilation is a very cheap way to
do that where you basically say
typically the features in my image will
actually very slowly over space so it's
okay to subsample it's ok to skip a few
and not compute the feature value there
because it's it's not gonna be that
interesting anyways it's probably just
gonna be into interpolation between the
two values beside it anyway so we can
safely skip it and so in a dilated
convolution you basically oh sorry
you basically pretend that you have a
larger filter but you have a bunch of
zeros in there and this can be computed
more efficiently than then then the
naive way would then you would think
naively because you don't actually have
to Pat you don't actually have to put
those zeros in there and and then do
plots of multiplies with zero you can
actually do this efficiently with some
reshapes and and and other tricks of the
tensors and then a final variant that i
want to talk about is a depth wise
convolution because that one's really
come to the forefront
more recently so they said before in
convolution operations that we've talked
about every output channel will be
connected to so each output channel will
be connects to every input channel so we
kind of have dense connectivity between
the channels in a depth wise convolution
that's not the case in a depth wise
convolution we have one output channel
per input channel and there's no
interaction between the channels and so
that dramatically reduces the number of
parameters that this convolution has but
obviously it's also a lot less
expressive but it's been it's being used
more and more as a building block
together with with other types of
convolutions and then finally pooling is
another operation that's very common in
convolutional neural networks so this is
this is kind of an alternative destroyed
convolutions to reduce the resolution of
the feature Maps basically what you do
is you look at local windows of your
input and you just compute some
aggregation function of those inputs so
that will typically be the mean of the
values or the maximum of the values and
then you compute those for all positions
in the grid and then you get your output
feature map which will typically be a
lot smaller
so here I've done this directly on the
pixels and in practice you might want to
do this inside the network on your on
your feature maps so now let's talk
about convolutional neural networks and
how these building blocks actually fit
together in their own networks so so
I've already been referring to them as
continent so there's actually two
abbreviations that are that are in
common use today like CN NS and commnets
you'll see both used interchangeably
I like comments it's easier to say will
stack up to hundreds of these
convolution operations together in a
model and we'll alternate convolutions
and pooling or possibly strata
convolutions to create a feature
hierarchy where higher layers in the
model will extract more abstract
features from the image so a brief recap
about neural networks as computational
graphs so this is a slide a diagram
rather that I took from from vertex deck
last week so this is kind of a
computational graph representation of a
neural network where we have nodes
representing the input so that's both
the the image in our case the input
image but also the target the prediction
that we're trying to match and then we
have lots of computational nodes and
some of these nodes are linear layers or
convolutions which have learn about
parameters and so these are indicated in
pink here and then at the output side we
also have the the loss in orange so I'm
going to simplify this diagram I'm not
going to display the parameters anymore
they're implicit so they're considered
part of the computational nodes I'm also
not going to show the loss because
that's that's always there that's not
what we're focusing on right now so I'm
not going to draw that on the diagram so
I just have an input node and some
computation noise here straighten it out
a little bit as well and then I'm
actually going to differentiate the
computational nodes
because that's what we're interested in
here sort of the architecture of our
confidence so I'm going to distinguish
between fully connected layers which are
the layers that that Wojtek also talked
about so typical neural network layers
where every unit is densely connected to
all the units in the previous layer
those will be in pink and then
convolution operations will be until the
pooling operations will be in purple and
I've left the nonlinearities in dark
blue as before so now let's talk about
some more interesting more recent
convolutional neural network
architectures so the one that I actually
just showed you is an existing one
called Lynette five so this is one of
the earliest published content
architectures so this was a continent
for handwritten digit recognition so it
operated on fairly small images I think
28 by 28 or 32 by 32 grayscale images of
handwritten digits and tried to produce
a classification for which which digit
was in the image and this had kind of a
what was until then sort of the
canonical structure of a continent which
was you had the the input image and then
you had a few convolutions interspersed
with pooling so there was always this
structure of convolution non-linearity
pooling convolution on linearly pooling
and then at some point you would do the
vectorization operation that we talked
about before you would just take the
feature map at this point and just
flatten it into a vector and then from
from there on it would just be a regular
fully connected neural network so you
would have a few fully connected layers
interspersed with nonlinearities and
then maybe a softmax non-linearity at
the end to do the actual classification
so and then in 2012 obviously we had
with Alex net as I said so this is this
is actually an architecture diagram from
from the paper it's it's cut off at the
top and it sits like this in the paper I
don't to the victory this day we don't
know if that's intentional or not but
the reason for the sort of unusual
structure of this diagram is that this
was a very big model that was trained on
two GPUs so so the model was actually
kind of split over to different
processors that each contained half of
the parameters of each layer and so
that's why you have this kind of
separation running through in the middle
and you have very few connections going
across because that was that was
communication between two GPUs so that
was very costly especially at that time
so you have this kind of two two stream
network so the architecture of this
model is a bit more complex than than
Lynnette so now we have eight layers
that's that's eight layers with
parameters so that's five convolutional
layers and three fully connected layers
we have something else that was new here
was the value non-linearity so before
this people tended to use saturating
nonlinearities like the sigmoid function
or the tan h function which sort of are
limited in their output range and it was
actually really hard to train deeper
networks than say four or five layers
with this setup so with Lynette was okay
but if you added a few layers to you
Lynette it would be in trouble and
people found actually that you can just
use the value which is the function that
the value is defined as is literally
just a maximum of its input in zero so
basically if the input is negative you
set it to zero and this has a
discontinuity at zero and people thought
oh if we have discontinuities in our
nonlinearities then gradient based
optimization is no longer going to work
right because that uses the gradient so
clearly that's that's that's only going
to work if the gradient is defined
everywhere and it turns out that's just
not true it turns out that as soon as
someone tried this it turned out that
this was actually a very nice
non-linearity to use because it improve
the propagation of the of the gradient
information through throughout the model
and so it enabled deeper networks and so
this is actually kind of a key
innovation here to use these value
nonlinearities other important
innovations include regularization
strategies so as I said this model was
proposed for the image net data set
which is quite a large data set so you
would think that maybe you don't need to
regularize the model too much because
you have so much data but Alex
Khrushchev skis response was just to
make his model really really really
massive and have lots of millions of
parameters and so he still needed
regularization to make sure that the
model wouldn't over fit to this data set
and so weight decay was used which is
kind of a traditional regularizer where
you just make sure that the magnitude of
the parameters doesn't grow too much but
also drop out and that was also kind of
a new thing the idea that you can
regularized neural networks by randomly
removing units during training and the
idea is that this make the other units
more robust to potentially having inputs
that are absent or that are distorted
and that turns out to be an extremely
good regularizer so that was another
important innovation of alex net so yeah
as I said this was trained on two GPUs
and it actually took six days to train
one of these models back in a day
nowadays you can train it in minutes so
yeah if a user our color scheme then the
diagram of this network looks like this
I kind of have to wrap around here and
you can see that it's that is already
about twice as deep as the as Lynette
was so if we if we walk through this
from the input to the output so at the
input you have images coming in so three
channels and the images were scaled to
two 24 by 2 24 pixels which is a lot
larger than any confident had used
before then so typically comrades would
use inputs a 32 by 32 but people had
never really gone to come to that scale
and so the way they did this was
actually by having very
tried in the first convolutional layer
so only the first convolutional layer
was operating at this very high
resolution and then immediately the
resolution would be reduced by a factor
of four which meant that the actual the
the amount of computation was reduced by
a factor of sixteen from there on so it
had an 11 by 11 kernel again as I said
an odd odd sized kernel because that's
what people use 96 channels stride of
four and so it's output size was 56 by
56 by 96 so a lot smaller spatially but
obviously a lot more channels and then
we had the rail you nonlinearities and
then max pooling layer to reduce it even
further down to 28 by 28 by 96 which
means so this is essentially a pooling
operation with just where we just take
the maximum over 2 by 2 windows and so
that means that that means that the read
the rest of the network is actually
operating on things that are 28 by 28 or
smaller so not that different from the
networks that came before so it's only
really this first layer that's doing a
lot of hard work to to use this high
resolution information in the image and
that was new that was an innovation of
Alex I'm not going to go through all the
layers I'm gonna skip ahead to the last
fully connected layer which is going to
produce thousand outputs one for each
class in the image net data set and and
finally we have a soft max non-linearity
which which takes the output of the
fully connected layer and turns it into
a categorical probability distribution
where we can guarantee that the outputs
of the model will be probabilities that
sum to one so they will form a valid
distribution over the classes so here
are all the layers again and you can
sort of see that the the resolution
actually is reduced very rapidly at the
start and then more gradually throughout
the network
and so another thing actually that was
kind of new here was the realization
that we don't always have to pair a
convolutional layer with a pooling layer
so this is done here at the start twice
but then we have a few convolutions with
nonlinearities in between where there's
no pooling happening and and people just
didn't do this before Alex net it wasn't
considered to be a valid thing to do so
it's kind of interesting that these
things that we maybe now take for
granted we're just not done so by now I
think it's clear that the story is that
deeper models tend to perform better and
and to get some insight into that you
can consider each layer as acting is
kind of a linear classifier that's
detecting particular patterns in its
input and so that means that if you
stock more of these layers on top of
each other you actually get more
nonlinearities in your model you get a
more powerful parameterised function
that you can use to to fit to fit the
targets and so this this the question
arises like what what is actually
limiting the number of layers in
confidence like why was Alex and eight
players why wasn't it 80 layers and
obviously an obvious one is
computational complexity obviously if
you have more layers you have to do more
computation which you know we always
have a limited computational budget but
there were actually other issues as well
such as optimization so if we have a
deeper model how do we actually back
propagate through that entire model how
do we do a credit assignment if a model
makes a mistake how do we assign
responsibility for that mistake to
particular units in the network and that
gets harder and harder as you stack more
layers on top of each other so in 2014
we have vgg net and and there again we
see a doubling in depth essentially so
you see I now need four lines to fit
this model and so there the idea was
they kind of took this sequence of three
khans layers from Alex net to an extreme
and they said we can actually do this
all the way throughout the network we
can stack many many convolutional layers
on top of each other before we actually
do any pooling
and we can use same convolutions so with
with padding so that the output feature
maps are the same size as the input to
avoid resolution reduction where we
don't want it so if you're stacking
these convolutional layers we don't want
resolution reduction we want to be in
control of where the resolution is
reduced and that's going to be in the
pooling layers and so another idea from
PGD net is actually to to fix the kernel
size and did not treat this as a hyper
parameter of the architecture so unlike
Alix net which had different kernel
sizes for different convolutional layers
vgg net only uses 3x3 kernels throughout
so that simplifies the the search for
good architectures considerably because
what they realized was if we want a
larger receptive field we don't
necessarily need to make take take a
single layer and make its receptive
field larger by increasing its kernel
size you can actually just stock three
by three filters to create a larger
receptive field that spans multiple
layers so here if we have a stack of two
3x3 convolutions we can sort of see in
blue these are the receptive fields of
the of the first convolutional layer and
then in red I've superimposed the
receptive field of the second
convolutional layer with respect to the
to its input so with respect to the
outputs of the first layer but if we
look at these two layers is one block
and sort of look at the receptive fields
of the second layer with respect to the
input of the first we see that it's
actually five by five so it grows as we
stock more three by three convolution so
you can actually create something that
has an equivalent receptive field to a
single layer with a five by five kernel
but it will have fewer parameters and it
will be more flexible because we can
also insert an extra non-linearity there
so it'll be more it'll be able to model
more interesting functions so in terms
of architecture vgg net had up to 19
layers so again quite a bit more than
the eight layers of alex net it only
used three by three kernels with same
convolutions in terms of infrastructure
this was also a bit of an upgrade so
this was trained on four GPUs and it was
trained for two to three weeks so very
patient people had vgg in oxford and and
another thing here that was interesting
is they use data parallel
not model parallelism so for Alex net
the model is kind of split over these
two GPUs and you saw that this actually
affects the architecture like it affects
which which parts of the model we can
connect to each other so what what was
done for vgg net instead is data
parallelism where you just take your
batch of data that you're processing and
you just split it into four parts and
then you have the entire network on all
four processors on all four GPUs and you
just you just compute predictions on
smaller sub batches predictions and and
gradients obviously during training so
this is the error rate on sort of top
five error rate on image net for VG for
different versions of VG net with
different numbers of layers so they had
variance with 11 layers 13 layers 16
layers and 19 layers and what's
interesting here is that obviously up to
a point
deeper is better so 16 is better than 13
is better than 11 but it seems like
their performance saturates after 16
layers they tried one with 19 layers and
saw that it was actually slightly worse
so what's actually going on there and so
later models so at the time you didn't
know but later models actually use a lot
of tricks to to prevent this from
happening because what's happening here
is an optimization issue right now you
have these 19 layers of computation and
it's starting to get harder to do credit
assignment so yeah I've actually already
already mentioned both of these so the
challenges of deep neural networks are
our computational complexity more layers
is more computation that takes time and
energy and there are also optimization
difficulties that arise because
optimization of the parameters by
gradient descent becomes a lot harder
and so we'll look at some ways to
address these challenges next there will
be a future lecture in this series that
will cover optimization of very deep
models in detail so look out for that so
how I'll just I'll just give a quick
summary but my colleague I believe James
Martin's will will be doing that one
he'll go over this in detail so one
thing we can do is be very careful
with how we initialize the parameters of
our neural network if we if we just
randomly initialize these from say a
uniform distribution from minus 1 to 1
then that's not going to work because
the the activations the outputs of the
of the layers in our network are going
to grow as we go through the network and
and then if we try to optimize the
network we need to take the gradients
which means we need to do a backward
pass through the network and those
gradients there's intermediate gradients
that we can gonna compute are also going
to grow so we're actually going to get
exploding gradients if we do that you
might say ok just MIT just make them
really small you can't make them 0 oh by
the way because you have to do some
symmetry breaking like if you initialize
a neural network to zeros it has no way
to differentiate the different units so
you do have to do something random but
you could say like ok initialize all the
weights to very small values then what
you're gonna get is vanishing gradients
like the gradients are just gonna
collapse to 0 because your computer will
not have enough precision to represent
these really small values so you need to
be a little bit careful about how to
initialize these models and and people
have figured out various heuristics to
sort of ensure that the gradients have
the right scale at the start of training
and then luckily if you do that at the
start of training that tends to be
preserved throughout training another
thing you can do is use very
sophisticated optimizers so obviously
you can just use stochastic gradient
descent to train a neural network but
there are lots of interesting variants
of that algorithm that are you know
specifically tailored to neural networks
and and tend to do a better job
optimizing them more quickly again I'm
gonna I'm gonna leave this to my
colleague to to go into in detail an
architectural innovation that can help
with this is the introduction of
normalization layers so I haven't talked
about those yet but they're actually
today just as essential as the
convolutional layers and pooling layers
and the nonlinearities so we insert
normalization layers throughout the
network to sort of scale the activations
so that they're in the right range for
optimization to be easy
and I'm finally we can also just change
the network design to make gradient
propagation easier so we can we can
change the connectivity pattern and rest
nets that that we that I already briefly
mentioned before are an example of that
so adding these residual connections is
a good example of that so let's also
look at Google Annette because this was
actually the winner of the 2014
competition so vgd net came second but
was in retrospect just as influential
Google net was interesting because it
was a lot more intricate and a lot more
complicated than previous network
designs that beam so people hadn't
really considered that you could
actually kind of branch out and then and
then and then concatenate the the
resulting feature Maps and sort of have
these multiple convolutional operations
operating side-by-side so this is kind
of the the canonical diagram of Google
Annette this is a zoomed in version of
one of these what's called inception
blocks and so you can see they have
multiple convolutions with different
kernel sizes and an even pooling
operating in parallel this is the first
version of the inception module there
have been various iterations on top of
this in the meantime that I'm I'm not
going to go over all the different
variants in detail though so I
normalization layers so a key innovation
actually in the second version of the
inception module was the introduction of
batch normalization and the idea of
batch normalization is that you do
essentially you standardized activation
so you compute the the mean and the
variance and you estimate these across a
batch of data so you do this in every
layer you estimate the mean and variance
of the activations and then you just
normalize right here and then at the
output of the normalization what you
might want to do is have a trainable
scaling factor and
at this point so that the activations
aren't actually forced to be constrained
to be 0 mean unit variance thank you you
want to retain the ex Priscilla T of the
original neural network but you want to
have this normalization step in there
because it makes optimization easier and
so that's that's how you can do that and
so introducing batch normalization
layers throughout the network
dramatically reduces the sensitivity of
the model to initialization so even if
you kind of wing it in terms of how you
initialize the model with batch norm
you'll actually be able to still still
train it and it makes it more robust to
larger learning rates as well another
aspect of it which can be a downside or
an upside depending on what you're
trying to do is it introduces stock
assisity and it acts as a regularizer
because these statistics these these
music these mutant Sigma's they are
estimated on the current batch of data
and obviously the batch of data will be
relatively small so you're gonna get a
you're gonna get a rough estimate of
these statistics but it's not gonna be
exact
and that can actually be a good thing
because it introduces noise into the
network as we said before we dropped out
like introducing noise into the model
can actually make it more robust in
practice so this acts as a regularizer
now the downside of this is that at test
time when you actually want to use your
model and make predictions this will
introduce a dependency between the
different images in the batch so you
will get different predictions for a
particular image depending on which
other images are also in the batch and
that's not nice thank you you wouldn't
want your model to give deterministic
predictions so in practice what you need
to do is actually estimate these
statistics on a data set and just keep
keep track of them separately and then
use those at test time which is doable
but we've actually at least I found in
practice that this can be a source of a
lot of bugs if something is wrong with
my neural network latch norm is usually
the first thing that I suspect
so the original Google Annette did not
use bachelor but any all later versions
of it
use this and if you look at a comparison
between the original Google Annette
which is the dashed black line here and
and then a another version of this model
which uses batch norm you can see that
it actually converges a lot faster and
to a higher accuracy and this is because
Bachelor makes the model able to take
larger learning rates essentially
without without diverging so let's look
at ResNet which came in 2015 this is
actually one of my favorite sort of
continent innovations because it's
beautiful in its simplicity the idea
here is that oh if depth is what's
preventing us from trading deeper models
because it makes optimization harder why
don't we why don't we give it a skip
connection why don't we let it skip a
few layers if it needs to so that the
graded can back propagate more easily
and the way they achieve that is
essentially by adding this residual
connection which just means that you you
take your input from from the input of
this layer and you just kind of add it
back in later which means that when you
when you back propagate through this and
you take the gradient you can actually
go along this pathway and skip these
convolutional layers all together and so
residual connections facilitate training
much deeper networks and so the the rest
net that one the imagenet competition in
2015 was actually one hundred fifty two
layers deep so again a dramatic increase
an order of magnitude increase compared
to the previous year so there's a few
different versions of residual blocks
residual modules so the top one is the
one i just showed you which is the
original one from the from you from the
rest of paper in 2015 which was the one
used in the image net competition so
that looks that looks like like this one
here it has a three by three convolution
followed by a batch norm followed by a
nonlinear
and then one-by-one convolution for
their bachelor and then you have the
residual connection coming in and then
you have another non-linearity so this
block was kind of stocked many many
times to come to 152 layers there's a a
variant that they used to reduce the
number of parameters which is called the
bottleneck block and so that's this one
so as you can see instead of two
sequences of confident vaginal
non-linearity this one actually has
three and what's happening here is it
has convolutions with very small filter
sizes so one by one excite essentially
means that you do a convolution but
there's no actual spatial integration of
information happening you're just kind
of computing the same function at every
spatial position and so what they did
was they used one by once to reduce the
number of channels reduce the number of
feature Maps and then they do a three by
three on fewer feature Maps which means
you'll have fewer parameters in your 3x3
convolution which is doing the actual
work the actual spatial integration of
information and then you just go back
out and you increase the number of
channels again with another one by one
and so this is a nice way to sort of
have a large representation but still
have fairly cheap convolution operations
in the middle there and then the third
version at the bottom is it's called
resna b2 if you don't it's something not
clear if you don't mind I think that
will be easier thank you
so the the this one is resonant me too
and there they actually just moved the
operations around the bit so as you can
see that the bathroom is here and on the
air it is here and then a three by three
and I'm batch normally RT one by one and
then the residual connection comes at
the end here the advantage of this one
is that if you stack a lot of these on
top of each other there's actually a
direct connection from the output to the
input without any nonlinearities on the
path and so this allows you to go even
deeper and and even go to thousands of
layers if you want to
so this is a table describing the the
rest net architectures from the original
paper so these are a few different
versions of progressive depth and you
can kind of see the same pattern as
before they they start out with high
resolutions but they kind of very
quickly reduce them and and most of the
computation is actually done at these
lower resolutions you can see I actually
made a mistake earlier the the 152 layer
model on the right here you can see that
it's actually using the bottleneck
blocks not not the top one on the
previous slide and what's interesting
here is that this 152 layer model
because it uses bottleneck blocks
actually has requires fewer
computational operations we were
floating point operations then the 19
million video Ginette did so even though
it's an order of magnitude deeper it's
actually cheaper to compute which i
think is really nice and it actually
also obviously performs a lot better
because of this depth another variant of
this ideas dense net so here we don't
have residual connections in dense net
the authors decided to make
backpropagation easier just by
connecting every layer to every other
layer so whenever you stock a new layer
and a dense net you connect it to all
the previous layer all the previous
layers and not just a preceding one and
so you get this dense connection between
layers but obviously each layer is still
a convolution with batch norm with
rallies inside and then in this this
actually comes from the this was also
introduced as part of the image net
competitions so this was one one last
innovation in 2017 this the idea here is
to incorporate global context so
convolutions are actually obviously
great at capturing local patterns but
sometimes you will might you might want
to model eight modulate these patterns
based on the the global context of the
MHD the stuff that's happening elsewhere
in the image and so for that purpose
it's nice to
try and compress the entirety of the
image into just a feature vector and
then kind of broadcast that feature
vector to all spatial positions so that
at any spatial position in the image you
can actually get some extra information
about what's going on elsewhere in the
image and so you you can you can
incorporate global context into your
features and then another strand of
architecture design has become popular
more recently is neural architecture
search so up till now we've been talking
about these architectures that got
progressively more intricate and these
were all hand designed by humans right
so humans basically searched for the
optimal hyperplane meters the optimal
number of layers geofflove kernel sizes
in these models and so people started to
think like maybe we can actually
automate that process as well maybe we
can use a search algorithm or even
machine learning algorithm to find the
best architecture to use to then train
to do image recognition and so amoeba
net is a model that arose from such an
algorithm so it's an architecture that
was found by an evolutionary algorithm
that basically performed a search over a
cyclic graphs composed of a set of
predefined layers so that kind of said
the convolution operation is a
predefined layer and then we have a
pooling operation we have different
types of pooling and then basically
connect these up any way you want and
find the optimal connectivity pattern
that gives rise to a continent that
works really well and so that's
architecture search and then another
trend in recent years has been to try
and reduce the complexity computational
complexity in these models by by
parameterizing them in more efficient
way so I've already talked about depth
wise convolutions and the way that
they're they reduce the number of
parameters dramatically just by not
connecting all the input channels to
each output Channel but better
connecting channel by channel obviously
you pay a cost in terms of experts
civilly but sometimes this can be worth
it but people have used
depth-wise convolutions to build what's
called a separable convolution and a
separable convolution is essentially
just a combination of a depth-wise
convolution followed by a regular
convolution with a one by one filter
size so you're kind of dividing the work
here in the sense that a depth wise
convolution will do spatial integration
it will sort of capture information from
a local neighborhood and then the one by
one convolution that follows will do
redistribute the information across the
channels which the depth wise
convolution doesn't because it operates
within each channel and so if you
combine those two you have kind of a
separable version of a regular
convolution where one part of the
operation the spatial integration and
the other part does integration over two
channels this idea is also used in with
in another fairly modern building block
so I've talked about bottom like blocks
before and I talked about ResNet the new
cool thing is inverted bottlenecks where
instead of reducing the number of
channels and then applying a 3x3
convolution you actually increase the
number of channels inside the bottleneck
and then apply a 3x3 depth-wise
convolution the idea being that you sort
of do spatial integration in this really
high dimensional space and then collapse
it back to a more manageable feature
space for communication between the
different parts of the network so as
that's it as far as case studies is
concerned so to wrap up this lecture I'm
going to give a brief overview of some
more advanced topics and I'm also going
to talk a little bit about contents
beyond image recognition so one thing I
haven't mentioned which is actually a
crucial ingredient in in many modern
commnets as data augmentation so by
design Condit's are robust against
translation right if you translate an
image then the internal representations
inside the component will also translate
and eventually because of all the
pooling it comes quite easy for them all
to be invariant to that so to sort of
ignore that translation altogether and
to just classify the object that's in
the image but obviously translation is
not the only thing that can happen to an
image you could also rotate your camera
for example and take a photograph from a
different angle you could go closer and
away which changes the scale of the
object you could you know it could be a
bright day it could be a dark day that
will affect the lighting all these
things are nuisance factors essentially
nuisance factors a variation that will
not affect the classification output
that you want but obviously dramatically
affect the pixel value is that the that
the model will see and so the way to
make these models robust against these
variations is to just artificially apply
them during training so every time we
feed an image to our network during
training we randomly perturb it with
some of these transformations so I have
some examples down here of different
perturbations of this image and then we
basically say for all of these you
should always produce the output treat
right because this is a tree regardless
of how it is perturbed and that allows
the model to learn robustness against
these other transformations so the
robustness is not innate in a sense that
the architecture isn't designed to be
robust against these transformations but
we can still let the model learn that it
needs to be robust we can also visualize
the patterns and the filters that
accommodate is actually learning and so
a neat way to do that is to take a unit
in any layer of the continent and then
to try to maximize its activation by
changing the input image and so we can
just do that with gradient descent just
like we use gradient descent to Train
these models we can do great in ascend
with respect to the input pixels to try
and find images that maximally or
minimally activate different units in a
network and we can do this for different
layers and these are some figures from
paper by Matthew Zieler and his
colleagues which really nicely
demonstrate this idea of
compositionality and hierarchy in a
sense that these different layers are
learning different patterns and in layer
two you can kind of see that these are
fairly local patterns there are some
edge detectors some textures here in
layer three these are starting to get
aggregated and to more interesting
patterns and then if you go all the way
up to layer five you can actually see
that there's even layer four sorry you
can see that there's a dog head detector
here that arises
so it's kind of combined I think nicely
showing that these patterns are getting
combined into progressively more
interesting structures throughout the
network we can also run this procedure
with respect to the output layer of the
network so if we take an output unit
corresponding to a particular class so
one of our thousands and then we just
say you know find me the image that
maximizes the probability of this output
so we get sort of the canonical image
corresponding to a class this is what
you can get out so this is from word by
a Koran Simonian and as colleagues so
you can kind of see the objects in here
I mean obviously these don't look like
natural images but you can kind of see
that there are certain patterns that
arise here and these are the patterns
that the network will try to pick up on
to classify images if you do this with a
strong prior so you kind of add an
additional term in your in your loss
where you say this is what an image
looks like this is what a natural
natural image should look like then you
can get images like these out so this is
a this is from a different paper where
they essentially use the same procedure
but they have this extra prior that
tries to make the images look natural
and so now you can sort of see images
that would maximally activate
particularly units in the continent
there's a really nice interactive blog
post on the Stillwell Pub about this
idea feature visualization so it's it's
interactive so you can play with this
and you can kind of look at all the all
the different units in all the different
layers in a neural net I definitely
recommend checking this out that's
really cool to play with so some other
topics to explore that I don't have time
to go into today are pre training and
fine tuning so a lot of a lot of image
classification problems that are of
interest don't have large data sets
available so image net is obviously 1.4
million images is pretty good but for
many problems we may have
orders of magnitude fewer and so people
have sought for ways to sort of reuse
the model strained on imagenet for
different tasks and the way you can do
that for example is to take a train
model and imagenet and chop off the top
layer that does the actual
classification and then just fit another
layer on top of those features and that
turns out to work really quite well and
you can even fine-tune the rest of the
network a little bit to to work on your
new data set so that's a very effective
strategy to do transfer learning another
topic that I think is very interesting
is group echo variant confidence so
we've talked about how come nets are
invariant to translation and then all
other invariances kind of have to be
learned and so you can learn you can
learn them with data augmentation but
you can actually build confidence that
are intrinsically equivariance to
rotation and to scale and other things
like that and this is a line of research
that's kind of taking taken flight over
the past three years or so which i think
is worth exploring I also want to talk
briefly about recurrence and attention
so these are two other ways to
incorporate topological structure of the
input into our network architectures I'm
not going to talk about them now because
they'll be the subject of future
lectures but so just to say that
convolutions are not the only thing you
can do to exploit grid structures or
sequence structure in in your input and
so to wrap up let's talk about what else
we can use these models for so we've
talked about models for image
recognition so far but obviously there
are lots of other tasks involving images
that could benefit from these
architectural priors so what else can we
do with continents so the object
classification here is in the top left
so that's whatever that's what we've
been talking about so far you can also
do object detection where in addition to
identifying the class of each object in
the image you want to figure out where
in the image it is so you want to
produce a bounding box for each object
in the image another variant of this is
semantic segmentation or for each pixel
in the image you want to identify what
type of object it is part of and then
there's also instant segmentation where
you actually want to segment the
individual objects even if there's
multiple at the same class so that's all
in the image space we can also generate
images with confidence in many different
ways so there are various different
types of generative models generative
adversarial networks variational
autoencoders auto regressive models like
pixel CNN that all use the convolution
operation as a basic building block
because they also benefit from this
these priors of locality and translation
invariance so these are some images that
were generated by big Gann which is a
generative adversarial Network developed
by my colleagues at deep mind more with
commnets you can do representation
learning one thing that's getting very
popular right now is self supervised
learning so what do you do if you have a
very large collection of images but no
labels then you can do you can kind of
create labels yourself and and do self
supervised learning and hopefully get
features that might still be useful for
transfer learning later as I also
mentioned earlier you can use comp nets
for other data types like video audio
text graphs there's lots of options
there you can use commnets inside agents
inside intelligent agents trained with
reinforcement learning there's there's
lots of options many of these will be
talked about in future lectures as well
so to wrap up I want to leave you with
this sort of statement which is that
convolutional neural networks replaced
handcrafted features with handcrafted
architectures and the reason I want to
stress that is because people often used
to see confidence as kind of a a magical
thing that led us to no longer have to
sort of be clever about what we do with
images like how do we exploit structure
and images so we actually don't need to
do that anymore we just put a comment
off it we'll figure it out but that's
not how it ended up being because we've
actually used a lot of our prior
knowledge about about structure and
images to design these architectures to
work better so we're not we still have
to be intelligent we still have to do
research we still have to use the prior
knowledge that we have about the data
that we're working with we're just
incorporating it at a higher level of
abstraction than we were before
and we're using learning now so learning
is in the mix which is which is the main
differentiator I think so that is all I
have
I want to thank you very much for your
attention
[Applause]
[Music] |
14d3bbc9-4ed6-4125-944b-5d13b8c97bed | trentmkelly/LessWrong-43k | LessWrong | Self-Blinded L-Theanine RCT
Value tracked Effect size d (λ, p, σ change) Effect size d (λ, p, σ change) 200 mg Caffeine (n=1, m=50) 500 mg L-theanine (n=1, m=50) Log-score substance prediction[1] -0.6 -0.7 Absorption 0.61 (λ=13.3, p=0.00017, -0.072) 0.04 (λ=1.38, p=0.77, -0.07) Mindfulness 0.58 (λ=11.8, p=0.0007, 0.021) 0.12 (λ=0.72, p=0.89, -0.018) Productivity 0.58 (λ=28.9, p=1.3-12, 0.11) -0.28 (λ=5.51, p=0.109, 0.03) Creativity 0.45 (λ=51, p=4.6-27, 0.09) -0.12 (λ=5.05, p=0.14, -0.04) Happiness 0.27 (λ=10.6, p=0.002, 0.3) 0.16 (λ=3.98, p=0.27, -0.155) Contentment 0.13 (λ=7.66, p=0.02, 0.47) 0.25 (λ=6.83, p=0.04, -0.04) Relaxation -0.11 (λ=5, p=0.15, 0.42) 0.12 (λ=1.5, p=0.74, 0.02) Chastity[2] -0.14 (λ=1.9, p=0.64, 0.11) -0.03 (λ=1.15, p=0.8, 0.25) Flashcard ease 0.003 (λ≈∞, p≈0, -0.009) -0.072 (λ=∞, p≈0, -0.01) Flashcard ease factor -0.039 (λ≈∞, p≈0, -32.7) 0.0026 (λ=∞, p≈0, -18.9) Flashcard new interval 0.011 (λ≈∞, p≈0, -1.88) -0.016 (λ=∞, p≈0, 3.1) Time per flashcard[3] 0.006 (λ≈∞, p≈0, 273.4) 0.003 (λ=∞, p≈0, 13.66)
> L-Theanine is synergistic with caffeine in regards to attention switching[318] and alertness[319][320] and reduces susceptibility to distractions (focus).[320][321] However, alertness seems to be relatively subjective and may not be a reliable increase between these two compounds,[318] and increases in mood are either present or absent.[322][318][323] This may be due to theanine being a relatively subpar nootropic in and of itself pertaining to the above parameters, but augmenting caffeine's effects; some studies do note that theanine does not affect the above parameters in and of itself.[324] Due to this, any insensitivity or habituation to caffeine would reduce the effects of the combination as L-theanine may work through caffeine.
>
> L-Theanine does not appear to be synergistic with caffeine in regards to attention to a prolonged and monotonous task.[325]
—Kamal Patel, “Caffeine”, 2023
See also Examine, Wikipedia and Gwern.
Sitiprapaporn et al. 2018 test the effe |
6e46acf8-4b4c-4ac3-be59-7edf6337bcd6 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Church vs. Taskforce
Today's post, Church vs. Taskforce was originally published on 28 March 2009. A summary (taken from the LW wiki):
> Churches serve a role of providing community - but they aren't explicitly optimized for this, because their nominal role is different. If we desire community without church, can we go one better in the course of deleting religion? There's a great deal of work to be done in the world; rationalist communities might potentially organize themselves around good causes, while explicitly optimizing for community.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Can Humanism Match Religion's Output?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
9660100c-bf05-4aa8-8afd-13b29c229000 | trentmkelly/LessWrong-43k | LessWrong | Holden Karnofsky's Singularity Institute Objection 2
The sheer length of GiveWell co-founder and co-executive director Holden Karnofsky's excellent critique of the Singularity Institute means that it's hard to keep track of the resulting discussion. I propose to break out each of his objections into a separate Discussion post so that each receives the attention it deserves.
Objection 2: SI appears to neglect the potentially important distinction between "tool" and "agent" AI.
Google Maps is a type of artificial intelligence (AI). It is far more intelligent than I am when it comes to planning routes.
Google Maps - by which I mean the complete software package including the display of the map itself - does not have a "utility" that it seeks to maximize. (One could fit a utility function to its actions, as to any set of actions, but there is no single "parameter to be maximized" driving its operations.)
Google Maps (as I understand it) considers multiple possible routes, gives each a score based on factors such as distance and likely traffic, and then displays the best-scoring route in a way that makes it easily understood by the user. If I don't like the route, for whatever reason, I can change some parameters and consider a different route. If I like the route, I can print it out or email it to a friend or send it to my phone's navigation application. Google Maps has no single parameter it is trying to maximize; it has no reason to try to "trick" me in order to increase its utility.
In short, Google Maps is not an agent, taking actions in order to maximize a utility parameter. It is a tool, generating information and then displaying it in a user-friendly manner for me to consider, use and export or discard as I wish.
Every software application I know of seems to work essentially the same way, including those that involve (specialized) artificial intelligence such as Google Search, Siri, Watson, Rybka, etc. Some can be put into an "agent mode" (as Watson was on Jeopardy!) but all can easily be set up to be used a |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.