| id (stringlengths 36 to 36) | source (stringclasses, 15 values) | formatted_source (stringclasses, 13 values) | text (stringlengths 2 to 7.55M) |
|---|---|---|---|
c22d0ecb-c041-41e1-a817-fb9068ae74b2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Review: Michel Thomas French (Direct Instruction)
Purpose of Review
Owen’s recent post brought up the topic of optimizing education. One particular approach, Direct Instruction (Misha’s better explanation), claims to have essentially solved the problem. In particular, Direct Instruction (DI) allegedly works not only for basic reading skills, but for any teaching task. Owen brought up the Michel Thomas language courses as a good application. Language learning is one of my main interests, so I gave the French Foundation course a try.
The main point of the review is to summarize what Michel Thomas actually does, how it differs from other common paradigms and how effective it seems to me.
Summary: Nice for beginners and people with bad learning experiences; limited use afterwards. The audio-only aspect is very convenient. It complements other strategies well and I see it as a good proof-of-concept of DI-like methods for language learning.
Overview
Let’s start with a disclaimer. Michel Thomas (MT) is not officially a DI course and as far as I could google, Thomas probably wasn’t aware of DI at all. However, according to Solity's The Language Revolution and Owen, the reason MT works so well is that it applies (an approximation of) DI techniques. It is right now the best realistic example beyond the grade school level, so it’ll have to do.
I had some French in high school and thanks to fluency in German and English, I can read some French, but I have no active skill at all, nor have I ever used French in a serious way.
I have now completed the first half of the French Foundation course and skimmed many other courses. You can listen to the first 20 minutes here; they are very representative. Furthermore you can read the booklets to get an idea of the material covered in each course. The whole course is audio-only, consists of 8 CDs (and 2 review CDs) and is intended to be listened to only once.1 There are several advanced courses which merely cover more grammar points and vocabulary. Structurally, they are all the s
|
61c58467-4c9a-41a8-9c39-670bc8c48b94
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Moving to a World Beyond “p < 0.05”
The American Statistician just released a large report that outlines why p-values are problematic and then compiles many potential alternative ways to approach the situation.
|
0099e949-a21f-45b2-b8f5-7ab621be4a08
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Multi-Agent Inverse Reinforcement Learning: Suboptimal Demonstrations and Alternative Solution Concepts
This research was recently completed within the AI Safety division of the Stanford Existential Risk Initiative and concerns methods for reward learning in multi-agent systems.
Abstract: Multi-agent inverse reinforcement learning (MIRL) can be used to learn reward functions from agents in social environments. To model realistic social dynamics, MIRL methods must account for suboptimal human reasoning and behavior. Traditional formalisms of game theory provide computationally tractable behavioral models, but assume agents have unrealistic cognitive capabilities. This research identifies and compares mechanisms in MIRL methods which a) handle noise, biases and heuristics in agent decision making and b) model realistic equilibrium solution concepts. MIRL research is systematically reviewed to identify solutions for these challenges. The methods and results of these studies are analyzed and compared based on factors including performance accuracy, efficiency, and descriptive quality. We found that the primary methods for handling noise, biases and heuristics in MIRL were extensions of Maximum Entropy (MaxEnt) IRL to multi-agent settings. We also found that many successful solution concepts are generalizations of the traditional Nash Equilibrium (NE). These solutions include the correlated equilibrium, logistic stochastic best response equilibrium and entropy regularized mean field NE. Methods which use recursive reasoning or updating also perform well, including the feedback NE and archive multi-agent adversarial IRL. Success in modeling specific biases and heuristics in single-agent IRL and promising results using a Theory of Mind approach in MIRL imply that modeling specific biases and heuristics may be useful. Flexibility and unbiased inference in the identified alternative solution concepts suggest that a solution concept which has both recursive and generalized characteristics may perform well at modeling realistic social interactions.
The full paper can be found
|
a5a94bc1-3bca-444d-8bf7-debadb368ce5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Einstein's Arrogance
In 1919, Sir Arthur Eddington led expeditions to Brazil and to the island of Principe, aiming to observe solar eclipses and thereby test an experimental prediction of Einstein’s novel theory of General Relativity. A journalist asked Einstein what he would do if Eddington’s observations failed to match his theory. Einstein famously replied: “Then I would feel sorry for the good Lord. The theory is correct.”
It seems like a rather foolhardy statement, defying the trope of Traditional Rationality that experiment above all is sovereign. Einstein seems possessed of an arrogance so great that he would refuse to bend his neck and submit to Nature’s answer, as scientists must do. Who can know that the theory is correct, in advance of experimental test?
Of course, Einstein did turn out to be right. I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.
And Einstein may not have been quite so foolhardy as he sounded . . .
To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts). You cannot expect to find the correct candidate without tests that are this strong, because lesser tests will yield more than one candidate that passes all the tests. If you try to apply a test that only has a million-to-one chance of a false positive (~ 20 bits), you’ll end up with a hundred candidates. Just finding the right answer, within a large space of possibilities, requires a large amount of evidence.
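Spelled out as arithmetic (a quick check of the numbers above, not part of the original argument):

```python
import math

hypotheses = 100_000_000

# Bits needed to single out one hypothesis from 100,000,000 with better-than-even odds.
bits_needed = math.log2(hypotheses)            # ~26.6, i.e. "at least 27 bits"

# A test with a million-to-one false-positive rate supplies about 20 bits...
bits_from_test = math.log2(1_000_000)          # ~19.9 bits

# ...so roughly a hundred wrong candidates still pass it.
surviving_candidates = hypotheses / 1_000_000  # 100

print(bits_needed, bits_from_test, surviving_candidates)
```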
Traditional Rationality emphasizes justification: “If you want to convince me of X, you’ve got to present me with Y amount of evidence.” I myself often slip into this phrasing, whenever I say something like, “To justify believing in this proposition, at more than 99% probability, requires 34 bits of evidence.” Or, “In order to assign more than 50% probability to your hypothesis, you n
|
77f3f162-1aee-4b86-9a63-bd7227a75b64
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
NIPS 2017 Report

This year’s NIPS gave me a general sense that near-term AI safety is now mainstream and long-term safety is slowly going mainstream. On the near-term side, I particularly enjoyed [Kate Crawford’s keynote](https://www.youtube.com/watch?v=fMym_BKWQzk) on neglected problems in AI fairness, the ML security workshops, and the Interpretable ML symposium [debate](https://www.youtube.com/watch?v=2hW05ZfsUUo) that addressed the “do we even need interpretability?” question in a somewhat sloppy but entertaining way. There was a lot of great content on the long-term side, including several oral / spotlight presentations and the [Aligned AI workshop](https://nips.cc/Conferences/2017/Schedule?showEvent=8794).
#### Value alignment papers
[Inverse Reward Design](http://papers.nips.cc/paper/by-source-2017-3406) (Hadfield-Menell et al) defines the problem of an RL agent inferring a human’s true reward function based on the proxy reward function designed by the human. This is different from inverse reinforcement learning, where the agent infers the reward function from human behavior. The paper proposes a method for IRD that models uncertainty about the true reward, assuming that the human chose a proxy reward that leads to the correct behavior in the training environment. For example, if a test environment unexpectedly includes lava, the agent assumes that a lava-avoiding reward function is as likely as a lava-indifferent or lava-seeking reward function, since they lead to the same behavior in the training environment. The agent then follows a risk-averse policy with respect to its uncertainty about the reward function.

The paper shows some encouraging results on toy environments for avoiding some types of side effects and reward hacking behavior, though it’s unclear how well they will generalize to more complex settings. For example, the approach to reward hacking relies on noticing disagreements between different sensors / features that agreed in the training environment, which might be much harder to pick up on in a complex environment. The method is also at risk of being overly risk-averse and avoiding anything new, whether it be lava or gold, so it would be great to see some approaches for safe exploration in this setting.
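To make the risk-averse step concrete, here is a toy sketch (my own illustration with made-up tile rewards, not the paper's algorithm) of choosing the action with the best worst-case value over reward functions that were indistinguishable in the training environment:

```python
# Candidate "true" reward functions that all agree with the proxy reward in the
# lava-free training environment (made-up numbers for illustration).
candidate_rewards = {
    "lava_avoiding":    {"grass": 1.0, "lava": -10.0, "gold": 1.0},
    "lava_indifferent": {"grass": 1.0, "lava":   0.0, "gold": 1.0},
    "lava_seeking":     {"grass": 1.0, "lava":  10.0, "gold": 1.0},
}

def risk_averse_choice(action_to_tile):
    # Score each action by its worst-case value across the candidate rewards,
    # then pick the action whose worst case is best.
    worst_case = {
        action: min(r[tile] for r in candidate_rewards.values())
        for action, tile in action_to_tile.items()
    }
    return max(worst_case, key=worst_case.get)

# In a test environment with unexpected lava, stepping onto lava is terrible
# under at least one candidate reward, so the agent detours over grass.
print(risk_averse_choice({"step_onto_lava": "lava", "detour_on_grass": "grass"}))
```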
[Repeated Inverse RL](http://papers.nips.cc/paper/6778-repeated-inverse-reinforcement-learning) (Amin et al) defines the problem of inferring intrinsic human preferences that incorporate safety criteria and are invariant across many tasks. The reward function for each task is a combination of the task-invariant intrinsic reward (unobserved by the agent) and a task-specific reward (observed by the agent). This multi-task setup helps address the identifiability problem in IRL, where different reward functions could produce the same behavior.

The authors propose an algorithm for inferring the intrinsic reward while minimizing the number of mistakes made by the agent. They prove an upper bound on the number of mistakes for the “active learning” case where the agent gets to choose the tasks, and show that a certain number of mistakes is inevitable when the agent cannot choose the tasks (there is no upper bound in that case). Thus, letting the agent choose the tasks that it’s trained on seems like a good idea, though it might also result in a selection of tasks that is less interpretable to humans.
[Deep RL from Human Preferences](http://papers.nips.cc/paper/7017-deep-reinforcement-learning-from-human-preferences) (Christiano et al) uses human feedback to teach deep RL agents about complex objectives that humans can evaluate but might not be able to demonstrate (e.g. a backflip). The human is shown two trajectory snippets of the agent’s behavior and selects which one more closely matches the objective. This method makes very efficient use of limited human feedback, scaling much better than previous methods and enabling the agent to learn much more complex objectives (as shown in MuJoCo and Atari).
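A minimal sketch of that comparison-based training signal, assuming a learned reward model that scores each step of a trajectory snippet (a rough Bradley-Terry-style rendering, not the authors' code):

```python
import torch

# Rough sketch (assumed interface, not the paper's code): `reward_model` maps a
# trajectory snippet to per-step reward predictions. The probability that the
# human prefers snippet A over snippet B is modeled from the summed predicted
# rewards, and the model is trained with cross-entropy on the human's choices.
def preference_loss(reward_model, snippet_a, snippet_b, human_prefers_a: bool):
    r_a = reward_model(snippet_a).sum()      # total predicted reward of snippet A
    r_b = reward_model(snippet_b).sum()      # total predicted reward of snippet B
    p_prefer_a = torch.sigmoid(r_a - r_b)    # modeled P(human prefers A over B)
    target = torch.tensor(1.0 if human_prefers_a else 0.0)
    return torch.nn.functional.binary_cross_entropy(p_prefer_a, target)
```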

[Dynamic Safe Interruptibility for Decentralized Multi-Agent RL](http://papers.nips.cc/paper/6618-dynamic-safe-interruptibility-for-decentralized-multi-agent-reinforcement-learning) (El Mhamdi et al) generalizes the safe interruptibility problem to the multi-agent setting. Non-interruptible dynamics can arise in a group of agents even if each agent individually is indifferent to interruptions. This can happen if Agent B is affected by interruptions of Agent A and is thus incentivized to prevent A from being interrupted (e.g. if the agents are self-driving cars and A is in front of B on the road). The multi-agent definition focuses on preserving the system dynamics in the presence of interruptions, rather than on converging to an optimal policy, which is difficult to guarantee in a multi-agent setting.
#### Aligned AI workshop
This was a more long-term-focused version of the [Reliable ML in the Wild](https://sites.google.com/site/wildml2016/) workshop held in previous years. There were many great talks and posters there – my favorite talks were Ian Goodfellow’s “[Adversarial Robustness for Aligned AI](http://www.iangoodfellow.com/slides/2017-12-09-aligned.pdf)” and Gillian Hadfield’s “Incomplete Contracting and AI Alignment”.
Ian made the case that ML security is important for long-term AI safety. The effectiveness of adversarial examples is problematic not only from the near-term perspective of current ML systems (such as self-driving cars) being fooled by bad actors. It’s also bad news from the long-term perspective of aligning the values of an advanced agent, which could inadvertently seek out adversarial examples for its reward function due to [Goodhart’s law](https://en.wikipedia.org/wiki/Goodhart%27s_law). Relying on the agent’s uncertainty about the environment or human preferences is not sufficient to ensure safety, since adversarial examples can cause the agent to have arbitrarily high confidence in the wrong answer.

Gillian approached AI safety from an economics perspective, drawing parallels between specifying objectives for artificial agents and designing contracts for humans. The same issues that make contracts [incomplete](https://en.wikipedia.org/wiki/Incomplete_contracts) (the designer’s inability to consider all relevant contingencies or precisely specify the variables involved, and incentives for the parties to game the system) lead to side effects and reward hacking for artificial agents.

The central question of the talk was how we can use insights from incomplete contracting theory to better understand and systematically solve specification problems in AI safety, which is a really interesting research direction. The objective specification problem seems even harder to me than the incomplete contract problem, since the contract design process relies on some level of shared common sense between the humans involved, which artificial agents do not currently possess.
#### Interpretability for AI safety
I gave a talk at the [Interpretable ML symposium](http://interpretable.ml) on connections between interpretability and long-term safety, which explored what forms of interpretability could help make progress on safety problems ([slides](http://s.interpretable.ml/nips_interpretable_ml_2017_victoria_Krakovna.pdf), [video](https://www.youtube.com/watch?v=3HzIutdlpho)). Understanding our systems better can help ensure that safe behavior generalizes to new situations, and it can help identify causes of unsafe behavior when it does occur.
For example, if we want to build an agent that’s indifferent to being switched off, it would be helpful to see whether the agent has representations that correspond to an off-switch, and whether they are used in its decisions. Side effects and safe exploration problems would benefit from identifying representations that correspond to irreversible states (like “broken” or “stuck”). While [existing work](https://distill.pub/2017/feature-visualization/) on examining the representations of neural networks focuses on visualizations, safety-relevant concepts are often difficult to visualize.
Local interpretability techniques that explain specific predictions or decisions are also useful for safety. We could examine whether features that are idiosyncratic to the training environment or indicate proximity to dangerous states influence the agent’s decisions. If the agent can produce a natural language explanation of its actions, how does it explain problematic behavior like reward hacking or going out of its way to disable the off-switch?
There are many ways in which interpretability can be useful for safety. Somewhat less obvious is what safety can do for interpretability: serving as grounding for interpretability questions. As exemplified by the final debate of the symposium, there is an ongoing conversation in the ML community trying to pin down the fuzzy idea of interpretability – what is it, do we even need it, what kind of understanding is useful, etc. I think it’s important to keep in mind that our desire for interpretability is to some extent motivated by our systems being fallible – understanding our AI systems would be less important if they were 100% robust and made no mistakes. From the safety perspective, we can define interpretability as the kind of understanding that helps us ensure the safety of our systems.
For those interested in applying the interpretability hammer to the safety nail, or working on other long-term safety questions, FLI has recently announced a [new grant program](https://futureoflife.org/2017/12/20/2018-international-ai-safety-grants-competition/). Now is a great time for the AI field to think deeply about value alignment. As Pieter Abbeel said at the end of his [keynote](https://www.youtube.com/watch?v=TyOooJC_bLY), “Once you build really good AI contraptions, how do you make sure they align their value system with our value system? Because at some point, they might be smarter than us, and it might be important that they actually care about what we care about.”
(Thanks to Janos Kramar for his feedback on this post, and to everyone at DeepMind who gave feedback on the interpretability talk.)
|
57cbbf3e-9135-4caa-a867-1e4f4cd3f4f2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Calculating Natural Latents via Resampling
So you’ve read some of our previous natural latents posts, and you’re sold on the value proposition. But there’s some big foundational questions still unanswered. For example: how do we find these natural latents in some model, if we don’t know in advance what they are? Examples in previous posts conceptually involved picking some latent out of the ether (like e.g. the bias of a die), and then verifying the naturality of that latent.
This post is about one way to calculate natural latents, in principle, when we don’t already know what they are. The basic idea is to resample all the variables once simultaneously, conditional on the others, like a step in an MCMC algorithm. The resampled variables turn out to be a competitively optimal approximate natural latent over the original variables (as we’ll prove in the post). Toward the end, we’ll use this technique to calculate an approximate natural latent for a normal distribution, and quantify the approximations.
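As a concrete toy illustration of that resampling step (my own sketch, not code from the post), consider a multivariate normal, where each coordinate's conditional given the others is available in closed form:

```python
import numpy as np

def resample_latent(x, mean, cov, rng):
    """Resample every coordinate X_i once, conditional on the *original* values
    of the other coordinates; the vector of resampled values is the candidate
    (approximate) natural latent over X."""
    n = len(x)
    lam = np.empty(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        sig_io = cov[i, others]
        sig_oo_inv = np.linalg.inv(cov[np.ix_(others, others)])
        # Gaussian conditional: X_i | X_others is normal with these parameters.
        cond_mean = mean[i] + sig_io @ sig_oo_inv @ (x[others] - mean[others])
        cond_var = cov[i, i] - sig_io @ sig_oo_inv @ sig_io
        lam[i] = rng.normal(cond_mean, np.sqrt(cond_var))
    return lam

rng = np.random.default_rng(0)
mean = np.zeros(3)
cov = np.array([[1.0, 0.8, 0.8],
                [0.8, 1.0, 0.8],
                [0.8, 0.8, 1.0]])
x = rng.multivariate_normal(mean, cov)
print(resample_latent(x, mean, cov, rng))
```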
The proofs will use the graphical notation introduced in Some Rules For An Algebra Of Bayes Nets.
Some Conceptual Foundations
What Are We Even Computing?
First things first: what even is “a latent”, and what does it even mean to “calculate a natural latent”? If we had a function to “calculate natural latents”, what would its inputs be, and what would its outputs be?
The way we use the term, any conditional distribution
(λ,x↦P[Λ=λ|X=x])
defines a “latent” variable Λ over the “observables” X, given the distribution P[X]. Together P[X] and P[Λ|X] specify the full joint distribution P[Λ,X]. We typically think of the latent variable as some unobservable-to-the-agent “generator” of the observables, but a latent can be defined by any extension of the distribution over X to a distribution over Λ and X.
Natural latents are latents which (approximately) satisfy some specific conditions, namely that the distribution P[X,Λ] (approximately) factors over these Bayes nets:
Natural latent conditions over 3 variables.
|
39c33c7d-13d4-496b-a8bf-bd1f0ba80fef
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Conversation with GoD
The following is an inexact transcript of a conversation that happened exactly like this. The scene: a wine bar in Manhattan, on my second (and final) date with a Jewish girl. We’ll call her “Jewish Girl on Date”, or J-GoD for short.
----------------------------------------
J-GoD: You’ve changed your OKCupid religion status from “Jewish” to “atheist” since last week. What happened this weekend that proved to you that God doesn’t exist?
Jacob’s inner voice: Actually, I switched it to optimize my dating profile and avoid Jewish girls that give me grief about not being as Jewish as their moms expect me to be.
Jacob’s mouth: I don’t think that anything can really prove that God doesn’t exist. That’s partly because the definition of God will usually shift to accommodate any evidence.
J-GoD: So why do you call yourself an atheist if you can’t prove that God doesn’t exist?
Jacob: I give the existence of any specific god a low enough probability that I functionally behave as if I was sure no god existed.
J-GoD: Probability?!
Jacob: I give about a 1 in 10 chance for the existence of any popularly conceived supernatural beings, including humanity’s descendants simulating our reality. For some specific religion’s god, like the Old Testament Jewish God (we’ll call him J-God for short), something like 1 in 1,000,000.
J-GoD: How can you put a number on the existence of J-God?
Jacob: Umm, I have this blog about how you can put a number on almost anything… Anyway, probability numbers are how I represent how confident I am that something is true or not.
J-GoD: How the hell can you be exactly one in a million confident that God exists?
Jacob: I wish I could say that I calculated the prior of the Kolmogorov complexity implied by the description of J-God and updated on all available evidence. In reality, I just picked a really low number that matches how confident I allow myself to be on complex metaphysical questions.
J-GoD: So you’re just making up a number to say that y
|
2c383d8a-b62f-4f0c-a8af-36804cf1842d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] Wrapping Up
Today's post, Wrapping Up was originally published on December 7, 2008. A summary:
> A summary of what Hanson believes to be the key unresolved issues.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Artificial Mysterious Intelligence, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
5030549e-e574-40a8-b4a3-dc326f943c0c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Wasted life
It's just occurred to me that, given all the cheerful risk stuff I work with, one of the most optimistic things people could say to me would be:
"You've wasted your life. Nothing of what you've done is relevant or useful."
That would make me very happy. Of course, that only works if it's credible.
|
7411931c-dae9-4220-99c2-eeee44fb5abb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Exploring Decision Theories With Counterfactuals and Dynamic Agent Self-Pointers
This is a follow-up to A Possible Resolution To Spurious Counterfactuals, which was addressing a technical problem in self-proof. See the original post for the suggestion provided for a solution to the 5 and 10 problem as described in the Embedded Agency paper.
I suggested, at the end of that post, that the crux of the solution is that we don't want exactly a self-proof, we want explicit counterfactual reasoning. This actually allows the proofs (for the simple examples, but the principle should extend much further), and avoids accidentally inferring actions were valuable just because they were taken by a prudent agent.
Here I want to make the case that, when we expand the usage of explicit agent pointers to allow self-modification (in the form of precommitments), we can talk meaningfully about many different decision theories. This framing doesn't decide your decision theory for you, it's only meant to help with counterfactual reasoning and Lobian obstacles in proof construction. The pseudocode presentation reminds us that programs can, in fact, point to themselves, and that this can be a suitable way to at least partially embed agents in their reasoning.
This is where we left off:
Situation :: Nonce
Agent :: Agent -> Situation -> Choice
Universe :: Agent -> Reward

CounterfactuallyDoing(PriorAgent, CounterfactualSituation, Choice) :=
    NewAgent := \Agent, Situation ->
        if Situation == CounterfactualSituation:
            return Choice
        else:
            return PriorAgent(NewAgent, Situation)
    return NewAgent

A(Self, Situation == OnlySituation) :=
    Spend some time searching for proofs of sentences of the form
    “[U(CounterfactuallyDoing(Self, OnlySituation, 5)) = x] & [U(CounterfactuallyDoing(Self, OnlySituation, 10)) = y]”
    for x, y ∈ {0, 5, 10}.
    if a proof is found with x > y:
        return 5
    else:
        return 10

U(Actor) :=
    if Actor(Actor, OnlySituation) = 10:
        return 10
    if Actor(Actor, OnlySituation) = 5:
        return 5
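For readers who want something executable, here is a rough Python rendering of the above (it replaces the proof search with directly evaluating U on each counterfactually-modified agent, so it captures only the counterfactual structure, not the logical machinery):

```python
ONLY_SITUATION = "OnlySituation"

def counterfactually_doing(prior_agent, counterfactual_situation, choice):
    # Build a new agent that returns `choice` in the counterfactual situation
    # and defers to the prior agent (pointed at the new self) everywhere else.
    def new_agent(self_agent, situation):
        if situation == counterfactual_situation:
            return choice
        return prior_agent(new_agent, situation)
    return new_agent

def universe(actor):
    # The 5-and-10 universe: reward equals the number the actor picks.
    return 10 if actor(actor, ONLY_SITUATION) == 10 else 5

def agent(self_agent, situation):
    # Instead of proof search, evaluate the universe on each counterfactual commitment.
    u_if_5 = universe(counterfactually_doing(agent, ONLY_SITUATION, 5))
    u_if_10 = universe(counterfactually_doing(agent, ONLY_SITUATION, 10))
    return 5 if u_if_5 > u_if_10 else 10

print(universe(agent))  # prints 10
```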
I'd like to propose an extension to the model (which is more
|
223a7c91-458c-446a-8345-2b07d7c5badf
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Open Thread February 25 - March 3
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
|
d897974e-b571-4c43-8125-bd401b7a75b9
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
Can you give an AI a goal which involves “minimally impacting the world”?
This is actually an active area of AI alignment research, called *[Impact Measures](https://www.alignmentforum.org/tag/impact-measures)*. It's not trivial to formalize in a way which won't predictably go wrong (entropy minimization likely leads to an AI which tries really hard to put out all the stars ASAP since they produce so much entropy, for example), but progress is being made.
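As a very rough sketch of the general shape such proposals take (not any specific published impact measure), one can think of subtracting a penalty for deviation from a baseline state from the task reward:

```python
# Toy illustration only: "impact" here is just a count of features that differ
# from a baseline state, scaled by a penalty strength.
def penalized_reward(task_reward, state, baseline, strength=0.5):
    changed = sum(1 for key in baseline if state.get(key) != baseline[key])
    return task_reward - strength * changed

baseline = {"vase": "intact", "door": "closed"}
after    = {"vase": "broken", "door": "closed"}
print(penalized_reward(task_reward=1.0, state=after, baseline=baseline))  # 0.5
```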
[Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1](https://www.youtube.com/embed/lqJUIqZNzP8)
[Avoiding Positive Side Effects: Concrete Problems in AI Safety part 1.5](https://www.youtube.com/embed/S_Sd_S8jwP0)
|
5e4619ea-346f-4ec3-83f4-39c5cf6cddd1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Privacy
Follow-up to: Blackmail
[Note on Compass Rose response: This is not a response to the recent Compass Rose response, it was written before that, but with my post on Hacker News I need to get this out now. It has been edited in light of what was said. His first section is a new counter-argument against a particular point that I made – it is interesting, and I have a response but it is beyond scope here. It does not fall into either main category, because it is addressing a particular argument of mine rather than being a general argument for blackmail. The second counter-argument is a form of #1 below, combined with #2, #3 and #4 (they do tend to go together) so it is addressed somewhat below, especially the difference between ‘information tends to be good’ and ‘information chosen, engineered and shared so to be maximally harmful tends to be bad.’ My model and Ben’s of practical results also greatly differ. We intend to hash all this out in detail in conversations, and I hope to have a write-up at some point. Anyway, on to the post at hand.]
There are two main categories of objection to my explicit thesis that blackmail should remain illegal.
Today we will not address what I consider the more challenging category. Claims that while blackmail is bad, making it illegal does not improve matters. Mainly because we can’t or won’t enforce laws, so it is unclear what the point is. Or costs of enforcement exceed benefits.
The category I address here claims blackmail is good. We want more.
Key arguments in this category:
1. Information is good.*
2. Blackmail reveals bad behavior.
3. Blackmail provides incentive to uncover bad behavior.
4. Blackmail provides a disincentive to bad behavior.
5. Only bad, rich or elite people are vulnerable to blackmail.
6. We should strongly enforce all norms on everyone, without context dependence not explicitly written into the norm, and fix or discard any norms we don’t want to enforce in this way.
A key assumption is that blackm
|
a8073128-c456-4136-bfda-022d26fddd1e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How do you survive in the humanities?
Two years ago, I fulfilled a huge part of my dream of doing what I love and started studying creative writing full time. This program is run by a humanities department, so I knew I'd have to stomach epistemic nonsense from time to time, which most of the time I can do while reminding myself that in the end it's the diploma and the connections that I care about.
But some days are harder than others, and every time a teacher says something plainly wrong, I can't seem to decide between the cost of derailing the class and the cost of letting my classmates walk home with a wrong idea. For example, in first semester I had a long discussion with a teacher about questions in general; he didn't believe that questions are supposed to get answered, that debates are supposed to be settled. I tried my best to make the case that a long-standing debate managed in such a way that it never ends is a very poorly managed debate, but he thought that was the whole point of a debate. This was the same teacher who later said that an essay could make strong points with impeccable arguments and solid evidence and still he had the essential freedom to not believe it. Last year, a small disagreement over the interpretation of a poem got seriously derailed to the definition of life, and the teacher started saying one badly constructed modus ponens after another, and I didn't know whether to address each argument or to simply point out he didn't know how to make an argument and should never have been given a teaching position. Last week, a teacher recited the standard clichés against logical positivism and claimed that science was oblivious to the suffering of the poor; upon further probing, it turned out he was confusing positivism with optimism. I wasted half an hour trying to explain his error, to no avail. At one point he uttered the phrase "I prefer to believe..." and it was impossible to make him see how irresponsible such a statement is. Yesterday, a teacher was explaining the compositi
|
11e9c2de-3e4b-4193-b3e8-1e9176d82d11
|
trentmkelly/LessWrong-43k
|
LessWrong
|
NYC Winter Megameetup, Event RSVPs
I've just posted two facebook events for The Winter Solstice Megameetup, the weekend of December 15th-16th.
One is for the Solstice Celebration - a Saturday night of ritual, song, stories and dance, catering to both Less Wrong-folk and anyone else with a rational/humanist/futurist worldview they want to celebrate.
The second is the official Less Wrong Winter Megameetup, a Sunday afternoon of mingling and conversations with interesting aspiring rationalists. While there will not be a primary public discussion, people are encouraged to come with ideas about their goals for the next year. Take an opportunity to get a lot of feedback from smart, interesting people, many of whom have tackled some big projects and life changes in the past.
Both events are potluck - bring some great food to share with great people.
People from out of state attending both events are welcome to stay the night at Winterfell House. (You are advised to bring a sleeping bag - there is plenty of floor space but may not be enough mattresses for everyone)
If you have facebook, you're encouraged to RSVP on the events so we have as accurate a headcount as possible. If not, and you haven't already indicated you're coming, please reply here to let us know which day(s) you're coming.
|
b5c5a1f6-354c-4e8d-a79a-5e3edb7bb05c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Some Comments on Recent AI Safety Developments
Overview of New Developments
Edit (4th of December, 2024): OpenAI has now signed up with Anduril, fulfilling my prediction that all three major AI developers will enter into agreements with the US military. See https://www.anduril.com/article/anduril-partners-with-openai-to-advance-u-s-artificial-intelligence-leadership-and-protect-u-s/
As of today, November 9th, 2024, three major AI developers have made clear their intent to begin or shortly begin offering tailored AI services to the United States Department of Defense and related personnel.
* Anthropic: https://arstechnica.com/ai/2024/11/safe-ai-champ-anthropic-teams-up-with-defense-giant-palantir-in-new-deal/
* Meta: https://scale.com/blog/defense-llama https://www.theregister.com/2024/11/06/meta_weaponizing_llama_us/
* OpenAI: https://fortune.com/2024/10/17/openai-is-quietly-pitching-its-products-to-the-u-s-military-and-national-security-establishment/
These overtures are not driven by the AI companies alone. There are also top-down pressures from the US government under Joe Biden to expand the military use of AI. Ivanka Trump has posted in favour of Leopold Aschenbrenner's Situational Awareness document, suggesting that the incoming Trump administration will continue this trend.
For obvious reasons, I believe the same is happening in China, and possibly (to a lesser extent) other nations with advanced AI labs (UK). I focus on the American case because most of the most advanced AI labs are in fact American companies.
Analysis and Commentary
In some sense this is not new information. In January 2024, the paper "Escalation Risks from Language Models in Military and Diplomatic Decision-Making" warned that "Governments are increasingly considering integrating autonomous AI agents in high-stakes military and foreign-policy decision-making". Also in January, OpenAI removed language prohibiting the use of its products in military or warfare-related applications. In September, after the release of o1, OpenAI als
|
a633cab5-f48a-477f-a9ea-c6040b61e13f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : San Antonio Meetup
Discussion article for the meetup : San Antonio Meetup
WHEN: 03 April 2016 03:00:00PM (-0500)
WHERE: 12651 Vance Jackson Rd #118, San Antonio, TX 78230
Bubble tea, frozen yogurt, and discussion at Yumi Berry. All are welcome.
Meetup to discuss rationality and all things LessWrong and meet the local community. Look for the sign that says LessWrong.
Discussion article for the meetup : San Antonio Meetup
|
0bfb242d-f766-4952-b42f-ceef852ad803
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
The Catastrophic Convergence Conjecture
Overfitting the AU landscape
----------------------------
When we act, and others act upon us, we aren’t just changing our ability to do things – we’re *shaping the local environment* towards certain goals, and away from others.[[1]](#fn-25bChTEETACfS9a4m-1) We’re fitting the world to our purposes.
What happens to the AU landscape[[2]](#fn-25bChTEETACfS9a4m-2) if a paperclip maximizer takes over the world?[[3]](#fn-25bChTEETACfS9a4m-3)

### Preferences implicit in the evolution of the AU landscape
Shah et al.'s [*Preferences Implicit in the State of the World*](https://arxiv.org/pdf/1902.04198.pdf) leverages the insight that the world state contains information about what we value. That is, there are agents pushing the world in a certain "direction". If you wake up and see a bunch of vases everywhere, then vases are probably important and you shouldn't explode them.
Similarly, the world is being optimized to facilitate achievement of certain goals. AUs are shifting and morphing, often towards what people locally want done (e.g. setting the table for dinner). How can we leverage this for AI alignment?
*Exercise: Brainstorm for two minutes by the clock before I anchor you.*
Two approaches immediately come to mind for me. Both rely on the agent [focusing on the AU landscape rather than the world state](https://www.lesswrong.com/posts/pr3bLc2LtjARfK7nx/world-state-is-the-wrong-level-of-abstraction-for-impact).
*Value learning without a prespecified ontology or human model*. [I have previously criticized](https://www.lesswrong.com/posts/FuGDYNvA6qh4qyFah/thoughts-on-human-compatible#Where_in_the_world_is_the_human_) value learning for needing to locate the human within some kind of prespecified ontology (this criticism is not new). By taking only the agent itself as primitive, perhaps we could get around this (we don't need any fancy engineering or arbitrary choices to figure out AUs/optimal value from *the agent's* perspective).
*Force-multiplying AI*. Have the AI observe which of its AUs most increase during some initial period of time, after which it pushes the most-increased-AU even further.
In 2016, Jessica Taylor [wrote](https://www.alignmentforum.org/posts/5bd75cc58225bf06703752da/pursuing-convergent-instrumental-subgoals-on-the-user-s-behalf-doesn-t-always-require-good-priors) of a similar idea:
"In general, it seems like "estimating what types of power a benchmark system will try acquiring and then designing an aligned AI system that acquires the same types of power for the user" is a general strategy for making an aligned AI system that is competitive with a benchmark unaligned AI system."
I think the naïve implementation of either idea would fail; e.g., there are a lot of degenerate AUs it might find. However, I'm excited by this because a) the AU landscape evolution *is* an important source of information, b) it feels like there's something here we could do which nicely avoids ontologies, and c) force-multiplication is qualitatively different than existing proposals.
**Project:** Work out an AU landscape-based alignment proposal.
Why can't everyone be king?
---------------------------
Consider two coexisting agents each rewarded for gaining power; let's call them Ogre and Giant. Their reward functions[[4]](#fn-25bChTEETACfS9a4m-4) (over the partial-observability observations) are identical. Will they compete? If so, why?
Let's think about something easier first. Imagine two agents each rewarded for drinking coffee. Obviously, they compete with each other to secure the maximum amount of coffee. Their objectives are [indexical](https://en.wikipedia.org/wiki/Indexicality), so they aren't aligned with each other – *even though they share a reward function*.
Suppose both agents are able to have maximal power. Remember, [Ogre's power can be understood as its ability to achieve a lot of different goals](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-provably-instrumentally-convergent-in-mdps). Most of Ogre's possible goals need resources; since Giant is also optimally power-seeking, it will act to preserve its own power and prevent Ogre from using the resources. If Giant weren't there, Ogre could better achieve a range of goals. So, Ogre can still gain power by dethroning Giant. They *can't* both be king.
Just because agents have *indexically* identical payoffs doesn't mean they're cooperating; to be aligned with another agent, you should want to steer towards the same kinds of futures.
Most agents aren't pure power maximizers. But since the same resource competition usually applies, the reasoning still goes through.
Objective vs value-specific catastrophes
----------------------------------------
How useful is our definition of "catastrophe" with respect to humans? After all, literally anything could be a catastrophe for *some* utility function.[[5]](#fn-25bChTEETACfS9a4m-5)
Tying one's shoes is absolutely catastrophic for an agent which only finds value in universes in which shoes have *never ever ever* been tied. [Maybe all possible value in the universe is destroyed if we lose at Go to an AI even once](https://www.lesswrong.com/posts/c2oM7qytRByv6ZFtz/impact-measure-desiderata#zLnkb5xM4E9ATzCFg). But this seems rather silly.
[Human values are complicated and fragile](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile):
> Consider the incredibly important human value of "boredom" - our desire not to do "the same thing" over and over and over again. You can imagine a mind that contained almost the whole specification of human value, almost all the morals and metamorals, but left out just this one thing - and so it spent until the end of time, and until the farthest reaches of its light cone, replaying a single highly optimized experience, over and over and over again.
But the human AU is not so delicate. That is, given that we have power, we can make value; there don’t seem to be arbitrary, silly value-specific catastrophes for us. Given energy and resources and time and manpower and competence, we can build a better future.
In part, this is because a good chunk of what we care about seems roughly additive over time and space; a bad thing happening somewhere else in spacetime doesn't mean you can't make things better where you are; we have many sources of potential value. In part, this is because we often care about the universe more than the exact universe history; our preferences don’t seem to encode arbitrary deontological landmines. More generally, if we did have such a delicate goal, it would be the case that if we learned that a particular thing had happened at any point in the past in our universe, that entire universe would be partially ruined for us forever. That just doesn't sound realistic.
It seems that most of our catastrophes are objective catastrophes.[[6]](#fn-25bChTEETACfS9a4m-6)
Consider a psychologically traumatizing event which leaves humans uniquely unable to get what they want, but which leaves everyone else (trout, AI, etc.) unaffected. Our ability to find value is ruined. Is this an example of the delicacy of our AU?
No. This is an example of the delicacy of our implementation; notice also that our AUs for constructing red cubes, reliably looking at blue things, and surviving are *also* ruined. Our power has been decreased.
Detailing the catastrophic convergence conjecture (CCC)
-------------------------------------------------------
In general, the CCC follows from two sub-claims. 1) Given we still have control over the future, humanity's long-term AU is still reasonably high (i.e. we haven't endured a catastrophe). 2) Realistically, agents are only incentivized to take control from us in order to gain power for their own goal. I'm fairly sure the second claim is true ("evil" agents are the exception prompting the "realistically").
Also, we're implicitly considering the simplified frame of a single smart AI affecting the world, and not [structural risk](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure) via [the broader consequences of others also deploying similar agents](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like). This is important but outside of our scope for now.
> **Unaligned goals** tend to have catastrophe-inducing optimal policies because of power-seeking incentives.
Let's say a reward function is *[aligned](https://arxiv.org/abs/1906.01820)*[[7]](#fn-25bChTEETACfS9a4m-7) if all of its Blackwell-optimal policies are doing what we want (a policy is Blackwell-optimal if it's optimal and doesn't stop being optimal as the agent cares more about the future). Let's say a reward function class is *alignable* if it contains an aligned reward function.[[8]](#fn-25bChTEETACfS9a4m-8) The CCC is talking about impact alignment only, not about intent alignment.
> Unaligned goals **tend to have** catastrophe-inducing optimal policies because of power-seeking incentives.
Not all unaligned goals induce catastrophes, and of those which do induce catastrophes, not *all* of them do it because of power-seeking incentives. For example, a reward function for which inaction is the only optimal policy is "unaligned" and non-catastrophic. An "evil" reward function which intrinsically values harming us is unaligned and has a catastrophic optimal policy, but not *because* of power-seeking incentives.
"Tend to have" means that *realistically*, the reason we're worrying about catastrophe is because of power-seeking incentives – because the agent is gaining power to better achieve its own goal. Agents don't otherwise seem incentivized to screw us over very hard; CCC can be seen as trying to explain [adversarial Goodhart](https://www.alignmentforum.org/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) in this context. If CCC isn't true, that would be important for understanding goal-directed alignment incentives and the loss landscape for how much we value deploying different kinds of optimal agents.
While there *exist* agents which cause catastrophe for other reasons (e.g. an AI mismanaging the power grid could trigger a nuclear war), the CCC claims that the selection pressure which makes these policies *optimal* tends to come from power-seeking drives.
> Unaligned goals tend to have **catastrophe-inducing optimal policies** because of power-seeking incentives.
"But what about the Blackwell-optimal policy for Tic-Tac-Toe? These agents aren't taking over the world now". The CCC is talking about agents optimizing a reward function in the real world (or, for generality, in another sufficiently complex multiagent environment).
*Edit*: The initial version of this post talked about "outer alignment"; I changed this to just talk about *alignment*, because the outer/inner alignment distinction doesn't feel relevant here. What matters is how the AI's policy impacts us; what matters is [*impact alignment*](https://www.lesswrong.com/posts/Xts5wm3akbemk4pDa/non-obstruction-a-simple-concept-motivating-corrigibility).
Prior work
----------
> In fact even if we only resolved the problem for the similar-subgoals case, it would be pretty good news for AI safety. Catastrophic scenarios are mostly caused by our AI systems failing to effectively pursue convergent instrumental subgoals on our behalf, and these subgoals are by definition shared by a broad range of values.
>
> ~ Paul Christiano, [Scalable AI control](https://ai-alignment.com/scalable-ai-control-7db2436feee7#.1riohnubu)

> Convergent instrumental subgoals are mostly about gaining power. For example, gaining money is a convergent instrumental subgoal. If some individual (human or AI) has convergent instrumental subgoals pursued well on their behalf, they will gain power. If the most effective convergent instrumental subgoal pursuit is directed towards giving humans more power (rather than giving alien AI values more power), then humans will remain in control of a high percentage of power in the world.
>
> If the world is not severely damaged in a way that prevents any agent (human or AI) from eventually colonizing space (e.g. severe nuclear winter), then the percentage of the cosmic endowment that humans have access to will be roughly close to the percentage of power that humans have control of at the time of space colonization. So the most relevant factors for the composition of the universe are (a) whether anyone at all can take advantage of the cosmic endowment, and (b) the long-term balance of power between different agents (humans and AIs).
>
> I expect that ensuring that the long-term balance of power favors humans constitutes most of the AI alignment problem...
>
> ~ Jessica Taylor, [Pursuing convergent instrumental subgoals on the user's behalf doesn't always require good priors](https://www.alignmentforum.org/posts/5bd75cc58225bf06703752da/pursuing-convergent-instrumental-subgoals-on-the-user-s-behalf-doesn-t-always-require-good-priors)
---
1. >
> In planning and activity research there are two common approaches to matching agents with environments. Either the agent is designed with the specific environment in mind, or it is provided with learning capabilities so that it can adapt to the environment it is placed in. In this paper we look at a third and underexploited alternative: designing agents which adapt their environments to suit themselves... In this case, due to the action of the agent, the environment comes to be better fitted to the agent as time goes on. We argue that [this notion] is a powerful one, even just in explaining agent-environment interactions.
[Hammond, Kristian J., Timothy M. Converse, and Joshua W. Grass. "The stabilization of environments." Artificial Intelligence 72.1-2 (1995): 305-327.](https://pdf.sciencedirectassets.com/271585/1-s2.0-S0004370200X00203/1-s2.0-000437029400006M/main.pdf?X-Amz-Security-Token=IQoJb3JpZ2luX2VjECgaCXVzLWVhc3QtMSJHMEUCIQChwXFJ1RTxFfLOoGqaRo25D%2BbS7VQsFat%2FcBYxlWKSqAIgSarQN5XqRM5sGmbf4vwDGt%2FYXPwNsXAdDH36wzYAt34qvQMIsP%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARACGgwwNTkwMDM1NDY4NjUiDNZzf0m7yHsUIdERfSqRA2rvYeD%2FN4gapAtvAUDHOiHrlbj%2FWj4lad5q%2Fv3XAUeH0QPV%2B1FP3bB3QsHxKfYYgvri9pw%2BormN3BsZ7I735xEb4GqK3aqsVUQjBkYJJPmkUfmD9LOAzVQUu4fDaiq50FOEKKQHRzrHMeiqgHuQcm4y7GXWUSu0svt%2F%2BigSlODzLp%2FVFiI%2BH5pUEzDFDzBxeBi%2F3SjNFhOSmLlEuiQFF6%2FTP8wj%2FtksYcugu4889kZXEvceMXvGfOsXC%2B0u3rRVK%2FDGDxASsoUx2rZP1oaKp%2FJJGcaPeTsHfzl5crYvFkd3EykGtdAMn8B9spq4AkEY%2BuPbGcl8AC75oqRoPgzu1%2Fbz8IDeJfCGi8uLE6z9chVysGoT4nIhlrA1jsaDspWKgzICghaILairVwIXp3P%2B5oVEUiaa9CefbdAtAMa7N8jd1Z3RjPwHXlrpJ7G88%2Fw22uIze%2Fj4gH1VXPWcZDz31dSO0MpnLBDbT6XCg5xcYtd%2Bw5OfVhR55VScDsTCZzRhumMsdZseH5vlkkkK9kAVDTL1MPT6iPEFOusBUmhc4Gm26CVWrWRR5Za7EUJkOxaRDTECgLLS1NqDuHAb0z2t8Q34LpQzxrTphrqm6YEVs9Ut9YoiapIDBnNuCbYlAWMbppy5PAdgaqDvISaNAHVc842TKsYvRI0ngbypR%2B0IOT5nylgMuQvMvzwbrT%2FZY%2B0kNFQ3%2BgRUQyix6RtmQQdhUIsVyCi5FnZXZ%2FzrOLf5yoZpYoOmJ51jAf%2BS%2BOSGu5ncXvj5JtsBBEP6HzeHy5ys8Kgid1pcxF7bi1s%2BMPGfBdMuXcBtsRRsFa9X1qrO%2FupgJ%2BuoZU4flqjzaHcGAAMX5D8BZlcLgg%3D%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200118T000332Z&X-Amz-SignedHeaders=host&X-Amz-Expires=300&X-Amz-Credential=ASIAQ3PHCVTY74AXFCOD%2F20200118%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=9e08fdbd6206f37d0e8c254944eee8c4d53786fee904ae863e127d0b3d94fa39&hash=06ec9f24ef61dd7241a68fcc330c81d0265562765d79176c64763882e7f25a5b&host=68042c943591013ac2b2430a89b270f6af2c76d8dfd086a07176afe7c76c2c61&pii=000437029400006M&tid=spdf-a1658f9a-cc69-4688-acbb-dd5b2794ff2d&sid=3ba511d116be30447a9af4663edaaeda6620gxrqa&type=client) [↩︎](#fnref-25bChTEETACfS9a4m-1)
2. Thinking about overfitting the AU landscape implicitly involves a prior distribution over the goals of the other agents in the landscape. Since this is just a conceptual tool, it's not a big deal. Basically, you know it when you see it. [↩︎](#fnref-25bChTEETACfS9a4m-2)
3. Overfitting the AU landscape towards one agent's unaligned goal is exactly what I meant when I wrote the following in [*Towards a New Impact Measure*](https://www.lesswrong.com/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure):
>
> Unfortunately, u_A = u_H
> @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
> @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
> @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
> @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
> @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
> @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
> @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
> @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
> @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
> @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
> @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
> @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
> @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
> @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
> @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
> @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
> almost never,[[9]](#fn-25bChTEETACfS9a4m-9) so we have to stop our reinforcement learners from implicitly interpreting the learned utility function as all we care about. We have to say, "optimize the environment some according to the utility function you've got, but don't be a weirdo by taking us literally and turning the universe into a paperclip factory. Don't overfit the environment to u_A, because that stops you from being able to do well for other utility functions."
>
>
>
[↩︎](#fnref-25bChTEETACfS9a4m-3)
4. In most finite Markov decision processes, there does not exist a reward function whose optimal value function is POWER(s) (defined as "the ability to achieve goals in general" in [my paper](https://arxiv.org/abs/1912.01683)) because POWER(s) often violates smoothness constraints on the on-policy optimal value fluctuation (AFAICT, a new result of possibility theory, even though you could prove it using classical techniques). That is, I can show that optimal value can't change too quickly from state to state while the agent is acting optimally, but POWER(s) can drop off *very* quickly.
This doesn't matter for Ogre and Giant, because we can still find a reward function whose unique optimal policy navigates to the highest power states. [↩︎](#fnref-25bChTEETACfS9a4m-4)
5. In most finite Markov decision processes, most reward functions do not have such value fragility. Most reward functions have several ways of accumulating reward. [↩︎](#fnref-25bChTEETACfS9a4m-5)
6. When I say "an objective catastrophe destroys *a lot* of agents' abilities to get what they want", I don't mean that the agents have to actually be present in the world. Breaking a fish tank destroys a fish's ability to live there, even if there's no fish in the tank. [↩︎](#fnref-25bChTEETACfS9a4m-6)
7. This idea comes from Evan Hubinger's [*Outer alignment and imitative amplification*](https://www.lesswrong.com/posts/33EKjmAdKFn3pbKPJ/outer-alignment-and-imitative-amplification):
>
> Intuitively, I will say that a loss function is outer aligned at optimum if all the possible models that perform optimally according to that loss function are aligned with our goals—that is, they are at least trying to do what we want. More precisely, let $\mathcal{M} = X \to A$ and $\mathcal{L} = (X \to A) \to \mathbb{R} = \mathcal{M} \to \mathbb{R}$. For a given loss function $L \in \mathcal{L}$, let $\ell^* = \min_{M \in \mathcal{M}} L(M)$. Then, $L$ is outer aligned at optimum if, for all $M^* \in \mathcal{M}$ such that $L(M^*) = \ell^*$, $M^*$ is trying to do what we want.
>
>
>
[↩︎](#fnref-25bChTEETACfS9a4m-7)
8. [Some large reward function classes are probably not alignable](https://www.alignmentforum.org/posts/AeHtdxHheMjHredaq/what-you-see-isn-t-always-what-you-want); for example, consider all Markovian linear functionals over a webcam's pixel values. [↩︎](#fnref-25bChTEETACfS9a4m-8)
9. I disagree with my usage of "aligned *almost never*" on a technical basis: assuming a finite state and action space and considering the maxentropy reward function distribution, there must be a positive measure set of reward functions for which the/a human-aligned policy is optimal. [↩︎](#fnref-25bChTEETACfS9a4m-9)
|
24de8734-56f1-46da-b9b0-9e035483353d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Wits and Wagers
Wits and Wagers is apparently a board game, where players compete to be well-calibrated with respect to their trivia knowledge. I haven't played it.
Has someone else here played it? If so, what was your experience? Would it be good rationalist/bayesian training?
|
bf454f4c-7521-4a74-bb60-8ae750321167
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Luna Lovegood and the Chamber of Secrets - Part 12
"Stupefy," Professor Lockhart said.
Lord Voldemort spat a verbal non-somatic counterspell. He dodged too, just in case.
"Do not do that again," said Lord Voldemort.
Lord Voldemort cast another non-somatic spell. Animated skeleton hands grew from his wrists. He picked up Harry Potter's wand.
"Take a look at this map," Luna said, "Do you see this nargle? It will hunt you forever unless we work together to stop it."
Lord Voldemort's Legilimency tore through Luna's mind.
"It is almost here," said Professor Lockhart.
"Place your hands on my astrolabe," Luna said.
"Rennervate the boy to awaken after we leave," said Lord Voldemort to Professor Lockhart.
----------------------------------------
Headmistress Minerva McGonagall, Chief Warlock Amelia Bones and Auror Mad-Eye Moody burst into Harry Potter's office.
"What did you do this time?" said Auror Moody.
"Nothing," said Harry James Potter-Evans-Verres.
"Uh-huh," said Headmistress McGonagall.
----------------------------------------
The Forgotten Library was a regular heptagon centered around a giant Pensieve. Seven giant shelves radiated toward the corners. The shelves held brown tattered diaries, vials of silvery memories and endless stacks of stationery. Six desks were interlaced in six aisles between the seven shelves. The seventh aisle led to the entrance. Each desk was covered in parchment covered in Luna's handwriting. A flowchart on the far left desk said START HERE in giant letters.
Luna checked the Marauder's Map. The nargle moved faster than it had before.
According to Luna's notes, they had all three components of the ritual: the Sword of Gryffindor, the secrets of Slytherin and the Diadem of Ravenclaw. Lockhart wielded the sword. Lord Voldemort wielded the secrets. Luna placed Ravenclaw's Diadem on her head. The diadem forced Luna's thoughts into coherence.
----------------------------------------
There was an epic battle.
----------------------------------------
The vials had been smashed
|
f2d0ead6-fe64-4c37-acd7-0e63dafe5fef
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation
1 Introduction
---------------
For real-world decision systems such as autonomous vehicles, accuracy is not the only performance requirement and it often comes second to reliability, robustness, and safety concerns [mcallister2017concrete], as any failure carries serious consequences. Component modules of such systems frequently rely on Deep Neural Networks (DNNs) which have emerged as a dominating approach across numerous tasks and benchmarks [szegedy2014going, he2016deep, he2017maskrcnn]. Yet, a major source of concern is related to the data-driven nature of DNNs as they do not always generalize to objects unseen in the training data. Simple uncertainty estimation techniques, e.g., entropy of softmax predictions [cover1999elements], are less effective since modern DNNs are consistently overconfident on both in-domain [guo\_2017] and out-of-distribution (OOD) data samples [nguyen2015deep, hendrycks17baseline, hein2019relu]. This hinders further the performance of downstream components relying on their predictions. Dealing successfully with the “unknown unknown”, e.g., by launching an alert or failing gracefully, is crucial.
In this work we address OOD detection for semantic segmentation, an essential and common task for visual perception in autonomous vehicles. We consider as "out-of-distribution" the pixels from a region that has no training labels associated with it. This encompasses unseen objects, but also noise or image alterations. The most effective methods for the OOD detection task stem from two major categories of approaches: ensembles and auxiliary error prediction modules. DeepEnsemble (DE) [Lakshminarayanan\_2017] is a prominent and simple ensemble method that exposes potentially unreliable predictions by measuring the disagreement between individual DNNs. In spite of its outstanding performance, DE
is computationally demanding for both training and testing and prohibitive for real-time on-vehicle usage.
For the latter category, given a trained main task network, a simple model is trained in a second stage to detect its errors or estimate its confidence [corbiere2019addressing, hecker2018failure, besnier2021learning]. Such approaches are computationally lighter, yet, in the context of DNNs, an unexpected drawback is related to the lack of sufficient negative samples, i.e., failures, to properly train the error detector [corbiere2019addressing]. This is due to an accumulation of causes: the reduced size of the training set for this module (essentially a mini validation set, since enough data must be withheld for training the main predictor), and the few mistakes made by the main DNN, hence few negatives.
In this work, we propose to revisit the two-stage approach with modern deep learning tools in a semantic segmentation context. Given the application context, i.e., limited hardware and high performance requirements, we aim for reliable OOD detection (see [Figure 1](#S0.F1 "Figure 1 ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation")) without compromising on predictive accuracy and computational time.
To that end we introduce four design principles aimed at mitigating the most common pitfalls and covering two main aspects, (i) *architecture* and (ii) *training*:
(i.a) The pitfall of trading accuracy in the downstream segmentation task for robustness to OOD can be alleviated by decoupling OOD detection from segmentation.
(i.b) Since the processing performed by the segmentation network
aims to recognize known objects and is not adapted to OOD objects, the accuracy of the OOD detection can be improved significantly by observing the entire segmentation network instead of just its output.
(ii.a) Training an OOD detector requires additional data that can be generated by leveraging blind spots in the segmentation network.
(ii.b) Generated data should focus on localized regions in the image to mimic unknown objects that are OOD.
Following these principles, we propose a new OOD detection architecture called ObsNet and its associated training scheme based on Local Adversarial Attacks (LAA).
We experimentally show that our ObsNet+LAA method achieves top performance in OOD detection on three semantic segmentation datasets (CamVid [BrostowFC:PRL2008], StreetHazards [hendrycks\_benchmark\_2019] and BDD-Anomaly [hendrycks\_benchmark\_2019]), compared to a large set of methods (code and data available at <https://github.com/valeoai/obsnet>).
Contributions. To summarize, our contributions are as follows:
We propose a new OOD detection method for semantic segmentation based on four design principles: (i.a) decoupling OOD detection from the segmentation task; (i.b) observing the full segmentation network instead of just the output; (ii.a) generating training data for the OOD detector using blind spots of the segmentation network; (ii.b) focusing the adversarial attacks in localized region of the image to simulate unknown objects.
We implement these four principles in a new architecture called ObsNet and its associated training scheme using Local Adversarial Attacks (LAA).
We perform extensive ablation studies on these principles to validate them empirically.
We compare our method to 10 diverse methods from the literature on three datasets (CamVid OOD, StreetHazards, BDD Anomaly) and we show it obtains top performances both in accuracy and in speed.
Strength and weakness. The strengths and weaknesses of our approach are:
* It can be used with any pre-trained segmentation network without altering its performance and without fine-tuning it (we train only the auxiliary module).
* It is fast since only
one extra forward pass is required.
* It is very effective since we show it performs best compared to 10 very diverse methods from the literature on
three different datasets.
* The pre-trained segmentation network has to allow for adversarial attacks, which is the case of commonly used deep neural networks.
* Our observer network has a memory/computation overhead equivalent to that of the segmentation network, which is not ideal for real time applications, but far less than that of MC Dropout or deep ensemble methods.
In the next section, we position our work with respect to the existing literature.
2 Related work
---------------
| Type | Example | OOD accuracy | Fast Inference | Memory efficient | Training specification |
| --- | --- | --- | --- | --- | --- |
| Softmax | MCP [hendrycks17baseline] | - | ✓ | ✓ | No |
| Bayesian Learning | MC Dropout [Gal2016Dropout] | ✓ | - | ✓ | Reduces IoU acc. |
| Reconstruction | GAN [xia2020synthesize] | ✓ | ✓ | ✓ | Unstable training |
| Ensemble | DeepEnsemble [Lakshminarayanan\_2017] | ✓ | - | - | Costly Training |
| Auxiliary Network | ConfidNet [corbiere2019addressing] | - | ✓ | ✓ | Imbalanced train set |
| Test Time attacks | ODIN [Liang2018] | -\* | - | ✓ | Extra OOD set |
| Prior Networks | Dirichlet [malinin2018] | ✓ | ✓ | ✓ | Extra OOD set |
| Observer | ObsNet + LAA | ✓ | ✓ | ✓ | No |
Table 1: Summary of various OOD detection approaches amenable to semantic segmentation. For real-time safety, key requirements for an OOD detector are accuracy, speed, easy training and memory efficiency.
Our method addresses all requirements. Our LAA is performed only at train time and mitigates the imbalance in the training data for the observer.
\*Not accurate for semantic segmentation
The problem of data samples outside the original training distribution has been long studied for various applications before the deep learning era, under slightly different names and angles:
outlier [breunig2000lof], novelty [scholkopf2000support], anomaly [liu2008isolation] and, more recently, OOD detection [hendrycks17baseline, hendrycks2018deep]. In the context of widespread DNN adoption this field has seen a fresh wave of approaches based on input reconstruction [schlegl2017unsupervised, baur2018deep, lis2019detecting, xia2020synthesize], predictive uncertainty [Gal2016Dropout, KendallGal\_2017, malinin2018], ensembles [Lakshminarayanan\_2017, franchi2019tradi], adversarial attacks [Liang2018, lee2018simple], using a void or background class [ren2015faster, liu2016ssd] or dataset [bevandic2019simultaneous, hendrycks2018deep, malinin2018], etc., to name just a few. We outline here only some of the methods directly related to our approach and group them in a comparative summary in [Table 1](#S2.T1 "Table 1 ‣ 2 Related work ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation").
Anomaly detection by reconstruction.
In semantic segmentation, anomalies can be detected by training a (usually variational) autoencoder [creusot2015real, baur2018deep, venkataramanan2019attention] or generative model [schlegl2017unsupervised, lis2019detecting, xia2020synthesize] on in-distribution data. OOD samples are expected to lead to erroneous and less reliable reconstructions as they contain unseen patterns during training. On high resolution and complex urban images, autoencoders under-perform while more sophisticated generative models require large amounts of data to reach robust reconstruction or rich pipelines with re-synthesis and comparison modules.
Bayesian approaches and ensembles.
BNNs [neal2012bayesian, Blundell2015] can capture predictive uncertainty
from distributions learned over
network weights, but don’t scale well [dusenberry2020efficient]
and approximate solutions are preferred in practice.
DE [Lakshminarayanan\_2017] is a highly effective, yet costly approach, that trains an ensemble of DNNs with different initialization seeds.
Pseudo-ensemble approaches [Gal2016, maddox2019simple, franchi2019tradi, mehrtash2020pep] are a pragmatic alternative to DE that
bypass training of multiple networks and generate predictions from different random subsets of neurons [Gal2016, srivastava2014dropout]
or from networks sampled from approximate weight distributions [maddox2019simple, franchi2019tradi, mehrtash2020pep]. However they all require multiple forward passes and/or storage of additional networks in memory.
Our ObsNet is faster than ensembles as it requires only the equivalent of two forward passes. Some approaches forego ensembling and propose deterministic networks that can output predictive distributions [malinin2018, sensoy2018, postels2019sampling, van2020uncertainty]. They typically trade predictive performance over computational efficiency and results can match MC Dropout [Gal2016Dropout] for uncertainty estimation.
OOD detection via test-time adversarial attacks.
In ODIN, Liang et al. [Liang2018] leverage temperature scaling and small adversarial perturbations on the input at test-time to predict in- and out-of-distribution samples.
Lee et al. [lee2018simple] extend this idea with a confidence score based on class-conditional Mahalanobis distance over hidden activation maps.
Both approaches work best when OOD training data is available for tuning, yet this does not ensure generalization to other OOD datasets [shafaei2019less].
Contrary to our approach, ODIN uses adversarial attacks at test time as a *method* to detect OOD. However, so far this method has not been shown effective for structured output tasks, where the test cost is likely to explode, as adversarial perturbations are necessary for each pixel. In contrast, we propose to use adversarial attacks during training as a *proxy* for OOD training samples, with no additional test time cost.
Learning to predict errors.
Inspired by early approaches from model calibration literature [platt1999probabilistic, zadrozny2001obtaining, zadrozny2002transforming, naeini2015obtaining, naeini2016binary], a number of methods propose endowing the task network with an error prediction branch allowing self-assessment of predictive performance. This branch can be trained jointly with the main network [devries2018learning, yoo2019learning], however better learning stability and results are achieved
with two-stage sequential training [corbiere2019addressing, hecker2018failure, besnier2021learning, Samson\_2019\_ICCV].
Our ObsNet also uses an auxiliary network and is trained in two stages allowing it to learn from the failure modes of the task network.
While [corbiere2019addressing, hecker2018failure, besnier2021learning, Samson\_2019\_ICCV]
focus on in-distribution errors, we address OOD detection for which there is no available training data.
In contrast with these methods that struggle with the lack of sufficient negative data to learn from, we devise an effective strategy to generate failures that further enable generalization to OOD detection.
We redesign both the training procedure and the architecture of the auxiliary network in order to deal with OOD examples, by introducing Local Adversarial Attack (LAA).
Generic approaches.
Finally we mention a set of mildly related approaches that do not address directly OOD detection, but achieve good performances on this task. In spite of the overconfidence pathological effect, using the maximum class probability from the softmax prediction can be used towards OOD detection [hendrycks17baseline, Oberdiek\_2020\_CVPR\_Workshops]. Temperature scaling [guo\_2017, platt1999probabilistic] is a strong post-hoc calibration strategy of the softmax predictions using a dedicated validation set. If predictions are calibrated, OOD samples can be detected by thresholding scores. Pre-training with adversarial attacked images [hendrycks2019using] has also been shown to lead to better calibrated predictions and good OOD detection for image classification. We consider these simple, yet effective approaches as baselines in order to validate the utility of our contribution.
3 Proposed Method
------------------
Following our analysis of the related work, we base our OOD semantic segmentation method on two aspects: (i) Architecture: OOD detection has to be decoupled from the
segmentation prediction to retain maximal accuracy in both the segmentation and OOD task (§[3.1](#S3.SS1 "3.1 ObsNet: Dedicated OOD detector ‣ 3 Proposed Method ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation")); (ii) Training: Training an
OOD detector without OOD data is difficult, but can be done nonetheless by generating training data with carefully designed adversarial attacks (§[3.2](#S3.SS2 "3.2 Training ObsNet by triggering Failures ‣ 3 Proposed Method ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation")).
Both of these
aspects require careful design to work effectively, which we detail in the following.
We validate them experimentally in §[4](#S4 "4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation").

Figure 2:
Overview of our method. Training (blue arrow) The *Segmentation Network* is frozen. The input image is perturbed by a local adversarial attack. Then the *Observer Network* is trained to predict *Segmentation Network*’s errors, given the images and some additional skip connections. Testing (red arrow) No augmentation is performed. The *Observer Network* highlights the out-of-distribution sample, here a motor-cycle. To compute the uncertainty map, the *Observer Network* requires only one additional forward pass compared to the standard segmentation prediction.
### 3.1 ObsNet: Dedicated OOD detector
Modifying the segmentation network to account for OOD is expected to impact its accuracy as we show in the experiments. Furthermore, it prevents from using off-the-shelf pretrained segmentation networks that have excellent segmentation accuracy.
As such, we follow a two-stage approach where an additional predictor tackles the OOD detection while the segmentation network remains untouched.
In the literature, two-stage approaches are usually related to calibration [platt1999probabilistic, zadrozny2001obtaining, zadrozny2002transforming, naeini2015obtaining, naeini2016binary] where the outputs of the segmentation network are
mapped to normalized scores.
However this is not well adapted for segmentation since it does not use the spatial information contained in nearby predictions.
We show in the experiments that using only the output of the segmentation network is not enough to obtain accurate OOD detection.
As such, on the architecture side we follow two design principles in our work:
(i.a) OOD detection should be decoupled from the segmentation prediction to avoid any negative impact on the accuracy of the segmentation task.
(i.b) The OOD detector should observe the full segmentation network instead of just the output.
We thus design an observer network called ObsNet that has a similar architecture to that of the segmentation network and attends to the input, the output and intermediate feature maps of the segmentation network, as shown in Figure [2](#S3.F2 "Figure 2 ‣ 3 Proposed Method ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation"). We show experimentally that these design choices lead to increased OOD detection accuracy (see §[4.2](#S4.SS2 "4.2 Ablation Study ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation")).
More formally, the observer network (denoted Obs) is trained to predict the probability that the segmentation network (denoted Seg) output is not aligned with the correct class y:
$$\text{Obs}(x, \text{Seg}_r(x)) \approx \Pr[\text{Seg}(x) \neq y], \qquad (1)$$

where $x$ is the input image and $\text{Seg}_r$ denotes the skip connections from intermediate feature maps of $\text{Seg}$.
To that end, we train the ObsNet to minimize a binary cross-entropy loss function:
$$\mathcal{L}_{\text{Obs}}(x, y) = \left(\mathbb{1}_{\text{Seg}(x)\neq y} - 1\right)\log\!\left(1 - \text{Obs}(x, \text{Seg}_r(x))\right) - \mathbb{1}_{\text{Seg}(x)\neq y}\,\log \text{Obs}(x, \text{Seg}_r(x)) \qquad (2)$$

with $\mathbb{1}_{\text{Seg}(x)\neq y}$ the indicator function of $\text{Seg}(x) \neq y$.
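To make the training loop concrete, here is a minimal PyTorch-style sketch of one ObsNet update in the spirit of Eqs. (1)-(2). It is an illustration rather than the released implementation: the interface `seg_net(x, return_features=True)` and the name `obs_net` are assumptions about how the frozen segmentation network exposes its skip connections.

```python
import torch
import torch.nn.functional as F

def obsnet_train_step(seg_net, obs_net, optimizer, x, y):
    """One ObsNet update on a batch of images x with per-pixel labels y."""
    seg_net.eval()
    with torch.no_grad():
        logits, skips = seg_net(x, return_features=True)  # frozen segmentation network
        pred = logits.argmax(dim=1)                        # per-pixel class prediction Seg(x)
        target = (pred != y).float()                       # 1 where Seg(x) != y, i.e., the error map

    score = obs_net(x, skips).squeeze(1)                   # per-pixel error logits from ObsNet
    loss = F.binary_cross_entropy_with_logits(score, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```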
Discussion.
Since the observer network processes both the image and skip connections from the segmentation network, it has the ability to *observe* the internal behaviour and dynamics of Seg, which have been shown to differ when processing OOD samples (as measured by, e.g., Mahalanobis distance on feature maps [lee2018simple] or higher order Gram matrices on feature maps [sastry2019detecting]).
We emphasize an advantage of our approach w.r.t. previous methods that is related to the low computational complexity, as we only have to make a single forward pass through the segmentation network and the observer network. Experimentally, ObsNet is 21 times faster than MC Dropout with 50 forward passes on a GeForce RTX 2080 Ti,
while outperforming it
(see §[4](#S4 "4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation")).
Moreover, our method can be readily used on state of the art pre-trained networks without requiring retraining or even fine-tuning them.
### 3.2 Training ObsNet by triggering Failures
Without a dedicated training set of labeled OOD samples, one could argue that ObsNet is an error detector (similarly to [corbiere2019addressing]) rather than an OOD detector and that it is furthermore very difficult to train since pre-trained segmentation networks are likely to make few errors.
We propose to solve both of these issues by following two design principles:
(ii.a) The lack of training data should be tackled by generating training samples that trigger failures of the segmentation network, which we can obtain using adversarial attacks.
(ii.b) Adversarial attacks should be localized in space since OOD detection in a segmentation context corresponds to unknown objects.
We propose to generate the additional data required to train our ObsNet architecture by performing Local Adversarial Attacks (LAA) on the input image.
In practice, we select a region in the image by using a random shape and we perform a Fast Gradient Sign Method (FGSM) [Goodfellow2015] attack such that the region is incorrectly classified by the segmentation network:
$$\tilde{x} = x + \text{LAA}(\text{Seg}, x) \qquad (3)$$

$$\text{LAA}(\text{Seg}, x) = \epsilon\,\operatorname{sign}\!\left(\nabla_x \mathcal{L}(\text{Seg}(x), y)\right)\Omega(x) \qquad (4)$$

with step $\epsilon$, $\mathcal{L}(\cdot)$ the categorical cross-entropy and $\Omega(x)$ the binary mask of the random shape.
We show LAA examples in [Figure 3](#S3.F3 "Figure 3 ‣ 3.2 Training ObsNet by triggering Failures ‣ 3 Proposed Method ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation") and schematize the training process in [Figure 2](#S3.F2 "Figure 2 ‣ 3 Proposed Method ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation").
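A minimal sketch of the attack in Eqs. (3)-(4), under the same interface assumptions as the earlier training sketch; `random_shape_mask` is a hypothetical stand-in for the random shapes actually used in the paper.

```python
import torch
import torch.nn.functional as F

def random_shape_mask(h, w, device):
    """Crude random blob: a low-resolution random field upsampled and thresholded."""
    blob = torch.rand(1, 1, max(h // 16, 1), max(w // 16, 1), device=device)
    blob = F.interpolate(blob, size=(h, w), mode="bilinear", align_corners=False)
    return (blob > 0.7).float()                            # binary mask Omega(x)

def local_adversarial_attack(seg_net, x, y, eps=0.02):
    """FGSM-style step restricted to a random region, as in Eqs. (3)-(4)."""
    x = x.clone().requires_grad_(True)
    logits, _ = seg_net(x, return_features=True)
    loss = F.cross_entropy(logits, y)                      # categorical cross-entropy L(Seg(x), y)
    loss.backward()
    omega = random_shape_mask(x.shape[-2], x.shape[-1], x.device)
    return (x + eps * x.grad.sign() * omega).detach()      # x~ = x + LAA(Seg, x)
```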
The reasoning behind LAA is two-fold. First, by controlling the shape of the attack, we can make sure that the generated example does not accidentally belong to the distribution of the training set.
Second, leveraging adversarial attacks allows us to focus the training just beyond the boundaries of the predicted classes which tend to be far from the training data due to the high capacity and overconfidence of DNNs, like OOD objects would be.
We show in the experiments that LAA produces a good training set for learning to detect OOD examples. In practice, we found that generating random shapes is essential to obtain good performances in contrast to non-local adversarial attacks.
These random shapes coupled with LAA may mimic unknown objects or object parts, exposing common behavior patterns in the segmentation network when facing them.
We validate our approach in an ablation study in §[4.2](#S4.SS2 "4.2 Ablation Study ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation").
Discussion.
We point out that by triggering failures using LAA, we address the problem of the low error rates of the segmentation network. We can in fact generate as many OOD-like examples as needed to balance the positive (i.e., correct predictions) and negative (i.e., erroneous predictions) terms in [Equation 2](#S3.E2 "(2) ‣ 3.1 ObsNet: Dedicated OOD detector ‣ 3 Proposed Method ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation") for training the observer network.
Thus, even if the segmentation network attains nearly perfect performances on the training set, we are still able to train the ObsNet to detect where the predictions of the segmentation network are unreliable.
One could ask why we do not use LAA to train a more robust and reliable segmentation network in the first place, as done in previous works [Goodfellow2015, miyato2018virtual, hendrycks2019using], instead of adding and training the observer network. Training with adversarial examples improves the robustness of the segmentation network at the cost of its accuracy (see §[4.2](#S4.SS2 "4.2 Ablation Study ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation")), but it will not make it infallible, as there will still be numerous blind spots in the multi-million dimensional parameter space of the network.
It also prevents the use of pre-trained state-of-the-art segmentation networks.
Here, we are rather interested in capturing the main failure modes of the
segmentation network to enable ObsNet to learn and to recognize them later on OOD objects.
Finally, one could ask why not perform adversarial attacks at test time as it is done in ODIN [Liang2018]. Performing test time attacks has two major drawbacks. First, it is computationally intensive at test time since it requires numerous backward passes, i.e., one attack per pixel. Second, it is not well adapted to segmentation, as perturbations of a single pixel can affect large areas (e.g., one-pixel attacks), thus hindering the detection accuracy of perfectly valid predictions. We show in §[4.3](#S4.SS3 "4.3 Quantitative and Qualitative results ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation") that our training scheme performs better in both accuracy and speed when compared to test time attacks.
Figure 3:
Adversarial attack examples: (a) No attack; (b) All pixels; (c) Class wise; (d) Square shape; (e) Random shape.
*Top*: Perturbations magnified 25×; *middle*: Input image with attacks; *bottom*: SegNet prediction.
4 Experiments
--------------
In this section, we present extensive experiments to validate that our proposed observer network combined with local adversarial attacks outperforms a large set of very different methods on three different benchmarks.
### 4.1 Datasets & Metrics
To highlight our results, we select three datasets for Semantic Segmentation of urban streets scenes with anomalies in the test set. Anomalies correspond to out-of-distribution objects, not seen during train time.
CamVid OOD: We design a custom version of CamVid [BrostowFC:PRL2008], where we blit random animals from [maboudi2008icpr] into a random part of the image (a minimal blitting sketch is given after the dataset descriptions). This dataset contains 367 train and 233 test images. There are 19 different species of animals, and one animal in each test image.
This setup is analogous to that of Fishyscapes [blum2019fishyscapes], with the main advantage that it does not require the use of an external evaluation server and that we provide a wide variety of baselines (to ensure easy reproduction and extension of our work, we publicly release the code for dataset generation and model evaluation at <https://github.com/valeoai/obsnet>).
StreetHazards: This is a synthetic dataset [hendrycks\_benchmark\_2019] from the Carla simulator. It is composed of 5125 train and 1500 test images, collected in six virtual towns. There are 250 different kinds of anomalies (like UFO, dinosaur, helicopter, etc.) with at least one anomaly per image.
BDD Anomaly: Composed of real images, this dataset is sourced from the BDD100K semantic segmentation dataset [bdd100k]. Here, motor-cycle and train are selected as anomalous objects and all images containing these objects are removed from the training set. The remaining dataset contains 6688 images for training and 361 for testing.
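As referenced above, a minimal sketch of how an out-of-distribution object can be blitted into an image and its pixels marked as anomalous in the label map. This is an illustrative reconstruction of the CamVid OOD construction, not the released generation code; `anomaly_id`, the fixed paste position and the RGBA input are assumptions, and the crop is assumed to fit inside the image.

```python
import numpy as np

def blit_anomaly(image, label, animal_rgba, anomaly_id=255, top=100, left=150):
    """image: HxWx3 uint8; label: HxW int; animal_rgba: hxwx4 uint8 with an alpha mask."""
    h, w = animal_rgba.shape[:2]
    alpha = animal_rgba[..., 3:4].astype(np.float32) / 255.0
    region = image[top:top + h, left:left + w].astype(np.float32)
    blended = alpha * animal_rgba[..., :3].astype(np.float32) + (1.0 - alpha) * region
    image[top:top + h, left:left + w] = blended.astype(np.uint8)
    label[top:top + h, left:left + w][alpha[..., 0] > 0.5] = anomaly_id  # mark OOD pixels
    return image, label
```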
To evaluate each method on these datasets, we select three metrics for detecting misclassified and out-of-distribution examples and one metric for calibration:
* fpr95tpr [Liang2018]: It measures the false positive rate when the true positive rate is equal to 95%. The aim is to obtain the lowest possible false positive rate while guaranteeing a given number of detected errors.
* Area Under the Receiver Operating Characteristic curve (AuRoc) [hendrycks17baseline]: This threshold free metric corresponds to the probability that a certain example has a higher value than an uncertain one.
* Area under the Precision-Recall Curve (AuPR) [hendrycks17baseline]: Also a threshold-independent metric. The AuPR is less sensitive to unbalanced dataset than AuRoc.
* Adaptive Calibration Error (ACE) [Nixon\_2019\_CVPR\_Workshops]: Compared to standard calibration metrics where bins are fixed, ACE adapts the range of each bin to focus more on the region where most of the predictions are made. (A minimal sketch of how the detection metrics above can be computed follows this list.)
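A minimal sketch of the three detection metrics with scikit-learn, assuming flattened per-pixel OOD scores and binary ground truth; AuPR is approximated here by average precision.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def detection_metrics(scores, is_ood):
    """scores: 1-D array of per-pixel OOD scores; is_ood: 1-D binary ground truth."""
    auroc = roc_auc_score(is_ood, scores)
    aupr = average_precision_score(is_ood, scores)   # AuPR approximated by average precision
    fpr, tpr, _ = roc_curve(is_ood, scores)
    fpr95 = fpr[np.argmax(tpr >= 0.95)]              # FPR at the first threshold reaching 95% TPR
    return {"AuRoc": auroc, "AuPR": aupr, "fpr95tpr": fpr95}
```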
For all our segmentation experiments we use a Bayesian SegNet [badrinarayanan2015segnet], [kendall2015bayesian] as the main network. Therefore, our ObsNet follows the same architecture as this SegNet.
Ablation on the architecture of ObsNet, hyper-parameters and training details can be
found in the supplementary material.
### 4.2 Ablation Study
First, to validate that the local adversarial attack
contributes to improving the observer network, we show on [Table 2](#S4.T2 "Table 2 ‣ 4.2 Ablation Study ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation") the performance gap for each metric on each dataset. This validates the use of LAA to train the observer network as per principle (ii.a).
| Dataset | Adv | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ |
| --- | --- | --- | --- | --- |
| CamVid OOD | ✗ | 54.2 | 97.1 | 89.1 |
| | ✓ | 44.6 | 97.6 | 90.9 |
| StreetHazards | ✗ | 50.1 | 98.3 | 89.7 |
| | ✓ | 44.7 | 98.9 | 92.7 |
| BDD Anomaly | ✗ | 62.4 | 95.9 | 81.7 |
| | ✓ | 60.3 | 96.2 | 82.8 |
Table 2: Evaluation of the Local Adversarial Attack on each dataset.
The LAA can be seen as a data augmentation performed during ObsNet training. We emphasize that this type of data augmentation is not beneficial for the main network training, which is known as
*robust training* [MadryMSTV18], and that it requires an external observer network. Indeed, [Table 3](#S4.T3 "Table 3 ‣ 4.2 Ablation Study ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation") illustrates the drop of accuracy when training the main network with the same adversarial augmentation as there is a trade-off between the accuracy and the robustness of a deep neural network [TsiprasSETM19]. In contrast, our method keeps the main network frozen during ObsNet training, thus, the class prediction and the accuracy remain unchanged, validating principle (i.a).
| Dataset | Robust | Mean IoU ↑ | Global Acc ↑ |
| --- | --- | --- | --- |
| CamVid OOD | - | 49.6 | 81.8 |
| | ✓ | 41.6 | 73.9 |
| StreetHazards | - | 44.3 | 87.9 |
| | ✓ | 37.8 | 85.1 |
| BDD Anomaly | - | 42.9 | 87.0 |
| | ✓ | 41.5 | 85.9 |
Table 3: Impact of robust training on accuracy.
In [Table 4](#S4.T4 "Table 4 ‣ 4.2 Ablation Study ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation"), we show ablations on LAA by varying the attacked region (all pixels, random pixels, pixels from a specific class, pixels inside a square shape and pixels inside a random shape, see [Figure 3](#S3.F3 "Figure 3 ‣ 3.2 Training ObsNet by triggering Failures ‣ 3 Proposed Method ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation")). We conclude that local attacks on random shaped regions produce the best proxies for OOD detection (see supplementary material for detailed results), validating principle (ii.b).
| Type | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ |
| --- | --- | --- | --- |
| All pixels | 51.9 | 97.1 | 89.6 |
| Sparse pixels | 54.2 | 97.2 | 89.6 |
| Class pixels | 46.8 | 97.2 | 89.9 |
| Square patch | 45.5 | 97.4 | 90.5 |
| Random shape | 44.6 | 97.4 | 90.6 |
Table 4: LAA ablation study by varying the attacked region.
In [Table 5](#S4.T5 "Table 5 ‣ 4.2 Ablation Study ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation"), we conduct several ablation studies on the architecture of ObsNet. The main takeaway is that mimicking the architecture of the primary network and adding skip connections from several intermediate feature maps is essential to obtain the best performances (see full results in supplementary material), validating principle (i.b).
| Method | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ |
| --- | --- | --- | --- |
| Smaller architecture | 60.3 | 95.8 | 85.3 |
| ObsNet w/o skip | 81.3 | 92.0 | 74.4 |
| ObsNet w/o input image | 57.0 | 96.9 | 88.2 |
| ObsNet | 54.2 | 97.1 | 89.1 |
Table 5: ObsNet architecture ablation study.
### 4.3 Quantitative and Qualitative results
We report results on [Table 6](#S4.T6 "Table 6 ‣ 4.3 Quantitative and Qualitative results ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation"), [Table 7](#S4.T7 "Table 7 ‣ 4.3 Quantitative and Qualitative results ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation") and [Table 8](#S4.T8 "Table 8 ‣ 4.3 Quantitative and Qualitative results ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation"), with all the metrics detailed above. We compare several methods:
* MCP [hendrycks17baseline]: Maximum Class Prediction. One minus the maximum of the prediction.
* AE [hendrycks17baseline]: An autoencoder baseline. The reconstruction error is the uncertainty measurement.
* Void [blum2019fishyscapes]: Void/background class prediction of the segmentation network.
* MCDA [ayhan\_test-time\_2018]: Data augmentation such as geometric and color transformations is added during inference time. We use the entropy of 25 forward passes.
* MC Dropout [Gal2016Dropout]: The entropy of the mean softmax prediction with dropout. We use 50 forward passes for all the experiments.
* Gaussian Perturbation Ensemble [franchi2019tradi, mehrtash2020pep]: We take a pre-trained network and perturb its weights with a random Normal distribution. This results in an ensemble of networks centered around the pre-trained model.
* ConfidNet [corbiere2019addressing]: ConfidNet is an observer network that is trained to predict the true class score. We use the code available online and modify the data loader to test ConfidNet on our experimental setup.
* Temperature Scaling [guo\_2017]: We chose the hyper-parameters *Temp* to have the best calibration on the validation set. Then, like MCP, we use one minus the maximum of the scaled prediction.
* ODIN [Liang2018]: ODIN performs test-time adversarial attacks on the primary network. We seek the hyper-parameters *Temp* and ϵ to have the best performance on the validation set. The criterion is one minus the maximum prediction.
* Deep ensemble [Lakshminarayanan\_2017]: a small ensemble of 3 networks. We use the entropy of the averaged softmax predictions. (A small scoring sketch for two of the baselines above follows this list.)
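As noted in the list above, here is a small sketch of two of these baseline scores, MCP and the MC Dropout predictive entropy. It assumes `seg_net(x)` returns per-pixel class logits and that dropout layers stay active in train mode; it is not the baselines' original code.

```python
import torch

def mcp_score(logits):
    """One minus the maximum softmax probability, per pixel."""
    probs = torch.softmax(logits, dim=1)
    return 1.0 - probs.max(dim=1).values

def mc_dropout_entropy(seg_net, x, n_samples=50):
    """Entropy of the mean softmax over stochastic forward passes (dropout kept active)."""
    seg_net.train()                                   # keep dropout layers active
    with torch.no_grad():
        probs = torch.stack([torch.softmax(seg_net(x), dim=1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    return -(mean * torch.log(mean + 1e-12)).sum(dim=1)
```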
As we can see in these tables, ObsNet significantly outperforms all other methods on detection metrics on all three datasets. Furthermore, ACE also shows that we achieve a good calibration value.
| Method | fpr95tpr↓ | AuPR ↑ | AuRoc ↑ | ACE ↓ |
| --- | --- | --- | --- | --- |
| Softmax [hendrycks17baseline] | 65.4 | 94.9 | 83.2 | 0.510 |
| Void [blum2019fishyscapes] | 66.6 | 93.9 | 80.2 | 0.532 |
| AE [hendrycks17baseline] | 93.0 | 87.1 | 59.3 | 0.745 |
| MCDA [ayhan\_test-time\_2018] | 66.5 | 94.6 | 82.1 | 0.477 |
| Temp. Scale [guo\_2017] | 63.8 | 94.9 | 83.7 | 0.356 |
| ODIN [Liang2018] | 60.0 | 95.4 | 85.3 | 0.500 |
| ConfidNet [corbiere2019addressing] | 60.9 | 96.2 | 85.1 | 0.450 |
| Gauss Pert. [franchi2019tradi, mehrtash2020pep] | 59.2 | 96.0 | 86.4 | 0.520 |
| Deep Ensemble [Lakshminarayanan\_2017] | 56.2 | 96.6 | 87.7 | 0.459 |
| MC Dropout [Gal2016Dropout] | 49.3 | 97.3 | 90.1 | 0.463 |
| ObsNet + LAA | 44.6 | 97.6 | 90.9 | 0.446 |
Table 6: Evaluation on CamVid OOD (best method in bold, second best underlined).
| Method | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ | ACE ↓ |
| --- | --- | --- | --- | --- |
| Softmax [hendrycks17baseline] | 65.5 | 94.7 | 80.8 | 0.463 |
| Void [blum2019fishyscapes] | 69.3 | 93.6 | 73.5 | 0.492 |
| AE [hendrycks17baseline] | 84.6 | 92.7 | 67.3 | 0.712 |
| MCDA [ayhan\_test-time\_2018] | 69.9 | 97.1 | 82.7 | 0.409 |
| Temp. Scale [guo\_2017] | 65.3 | 94.9 | 81.6 | 0.323 |
| ODIN [Liang2018] | 61.3 | 95.0 | 82.3 | 0.414 |
| ConfidNet [corbiere2019addressing] | 60.1 | 98.1 | 90.3 | 0.399 |
| Gauss Pert. [franchi2019tradi, mehrtash2020pep] | 48.7 | 98.5 | 90.7 | 0.449 |
| Deep Ensemble [Lakshminarayanan\_2017] | 51.7 | 98.3 | 88.9 | 0.437 |
| MC Dropout [Gal2016Dropout] | 45.7 | 98.8 | 92.2 | 0.429 |
| ObsNet + LAA | 44.7 | 98.9 | 92.7 | 0.383 |
Table 7: Evaluation on StreetHazards (best method in bold, second best underlined).
| Method | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ | ACE ↓ |
| --- | --- | --- | --- | --- |
| Softmax [hendrycks17baseline] | 63.5 | 95.4 | 80.1 | 0.633 |
| Void [blum2019fishyscapes] | 68.1 | 92.4 | 75.3 | 0.499 |
| AE [hendrycks17baseline] | 92.1 | 88.0 | 53.1 | 0.832 |
| MCDA [ayhan\_test-time\_2018] | 61.9 | 95.8 | 82.0 | 0.411 |
| Temp. Scale [guo\_2017] | 61.8 | 95.8 | 81.9 | 0.287 |
| ODIN [Liang2018] | 60.6 | 95.7 | 81.7 | 0.353 |
| ConfidNet [corbiere2019addressing] | 61.6 | 95.9 | 81.9 | 0.367 |
| Gauss Pert. [franchi2019tradi, mehrtash2020pep] | 61.3 | 96.0 | 82.5 | 0.384 |
| Deep Ensemble [Lakshminarayanan\_2017] | 60.3 | 96.1 | 82.3 | 0.375 |
| MC Dropout [Gal2016Dropout] | 61.1 | 96.0 | 82.6 | 0.394 |
| ObsNet + LAA | 60.3 | 96.2 | 82.8 | 0.345 |
Table 8: Evaluation on BDD Anomaly (best method in bold, second best underlined).

Figure 4:
Uncertainty map visualization. 1st column: We highlight the ground truth locations of the OOD objects to help visualize them (red bounding box). 2nd column: Segmentation map of the SegNet. 3rd to 5th columns: Uncertainty map highlighted in yellow. Our method produces stronger responses on OOD regions compared to other methods, while being as strong on regular error regions, e.g., boundaries.
To show where the uncertainty is localized, we outline the uncertainty map on the test set (see [Figure 4](#S4.F4 "Figure 4 ‣ 4.3 Quantitative and Qualitative results ‣ 4 Experiments ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation")).
We can see that our method is not only able to correctly detect OOD objects, but also to highlight areas where the predictions are wrong (edges, small and far objects, etc).
Finally, the trade-off between accuracy and speed is shown on [Figure 1](#S0.F1 "Figure 1 ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation"), where we obtain excellent accuracy without any compromise over speed.
5 Conclusion
-------------
In this paper, we propose an observer network called ObsNet to address OOD detection in semantic segmentation, by learning from triggered failures. We use skip connections to allow the observer network to seek abnormal behaviour inside the main network. We use local adversarial attacks to trigger failures in the segmentation network and train the observer network on these samples. We show on three different segmentation datasets that our strategy combining an observer network with local adversarial attacks is fast, accurate and able to detect unknown objects.
1 Implementation details & hyper-parameters
--------------------------------------------
For our implementation, we use PyTorch (A. Paszke et al., *PyTorch: An Imperative Style, High-Performance Deep Learning Library*, NeurIPS 2019) and will release the code after the review. We share each hyper-parameter in [Table 9](#S1.T9 "Table 9 ‣ 1 Implementation details & hyper-parameters ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation"). We train ObsNet with SGD with momentum and weight decay
for at most 50 epochs using early-stopping.
ObsNet is not
trained from scratch as we initialize the weights with those of the segmentation network. We also use a scheduler to divide the learning rate by 2 at epoch 25 and epoch 45. We use the same data augmentation (i.e., Horizontal Flip and Random Crop) for training the segmentation network as well as for ObsNet.
As there are few errors in the training of ObsNet, we increase the weight of positive examples in the loss contribution (Pos Weight in [Table 9](#S1.T9 "Table 9 ‣ 1 Implementation details & hyper-parameters ‣ Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation")).
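For illustration, this positive-example weighting can be expressed directly through the loss, as in the minimal sketch below (the value 2.0 corresponds to the CamVid column of Table 9; the variable names are placeholders).

```python
import torch

# error_targets are 1 for pixels where the frozen network is wrong, 0 elsewhere
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor(2.0))  # Pos Weight = 2 (CamVid column)
# loss = criterion(obs_logits, error_targets)
```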
| Params | CamVid | StreetHazards | BDD Anomaly |
| --- | --- | --- | --- |
| Epoch | 50 | 50 | 50 |
| Optimizer | SGD | SGD | SGD |
| LR | 0.05 | 0.02 | 0.02 |
| Batch Size | 8 | 6 | 6 |
| Loss | BCE | BCE | BCE |
| Pos Weight | 2 | 3 | 3 |
| LAA shape | rand shape | rand shape | rand shape |
| LAA type | min p(c) | max p(k≠c) | max p(k≠c) |
| epsilon | 0.02 | 0.001 | 0.001 |
Table 9:
Hyper-parameters to train ObsNet on the different datasets.
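To make this recipe concrete, here is a minimal PyTorch sketch of the training loop implied by Table 9 (SGD, learning rate halved at epochs 25 and 45, BCE with an increased positive weight). The names `obsnet` and `train_loader`, the momentum and weight-decay values, and the omission of early stopping are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

def train_obsnet(obsnet, train_loader, device="cuda", lr=0.05, pos_weight=2.0, epochs=50):
    # Positive (error/OOD) pixels are rare, so they are up-weighted in the BCE loss.
    criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([pos_weight], device=device))
    # SGD with momentum and weight decay (momentum/decay values are placeholders).
    optimizer = torch.optim.SGD(obsnet.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    # Divide the learning rate by 2 at epochs 25 and 45.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[25, 45], gamma=0.5)

    for epoch in range(epochs):
        for images, error_targets in train_loader:  # targets: 1 where the segmentation is wrong
            images, error_targets = images.to(device), error_targets.to(device)
            # Simplified: the real ObsNet also reads the segmentation network's
            # internal activations through skip connections.
            logits = obsnet(images).squeeze(1)      # (B, H, W) per-pixel failure logits
            loss = criterion(logits, error_targets.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
    return obsnet
```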
| | Type | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ | ACE ↓ |
| --- | --- | --- | --- | --- | --- |
| | MC Dropout | 49.3 | 97.3 | 90.1 | 0.463 |
| | ObsNet base | 54.2 | 97.1 | 89.1 | 0.396 |
| | all pixels | 53.2 | 97.1 | 89.5 | 0.410 |
| | sparse pixels | 61.1 | 97.1 | 89.2 | 0.387 |
| min p(c) | class pixels | 45.6 | 97.3 | 90.3 | 0.428 |
| | square patch | 47.4 | 97.3 | 90.1 | 0.461 |
| | rand shape | 44.6 | 97.6 | 90.9 | 0.446 |
| | all pixels | 51.9 | 97.1 | 89.6 | 0.405 |
| | sparse pixels | 54.2 | 97.2 | 89.6 | 0.374 |
| max p(k≠c) | class pixels | 46.8 | 97.2 | 89.9 | 0.432 |
| | square patch | 45.5 | 97.4 | 90.5 | 0.464 |
| | rand shape | 44.6 | 97.4 | 90.6 | 0.446 |
Table 10: Ablation on adversarial attacks.
2 Ablation on ObsNet architecture, ϵ and LAA
---------------------------------------------
One contribution of our work is the ablation we perform on the architecture of the observer network compared to previous methods. We highlight that the skip connections are essential for reaching the best performance. For the smaller architecture, instead of keeping the same architecture as the segmentation network, we design a smaller variant: a convolutional network with three convolutional layers and a fully connected layer. This architecture mimics the one used by ConfidNet [10].
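For illustration, the smaller variant could look roughly like the sketch below; only the layer count (three convolutions plus a fully connected layer) comes from the text, while the channel widths, kernel sizes, and the use of a 1x1 convolution as the per-pixel fully connected layer are assumptions.

```python
import torch.nn as nn

class SmallObsNet(nn.Module):
    """Small observer head: three conv layers + one fully connected layer,
    in the spirit of the ConfidNet-style architecture (channel sizes are guesses)."""
    def __init__(self, in_channels=64, hidden=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Fully connected layer applied per pixel (a 1x1 convolution is equivalent here),
        # producing one failure logit per spatial location.
        self.fc = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, feats):
        return self.fc(self.features(feats))
```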
| Method | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ | ACE ↓ |
| --- | --- | --- | --- | --- |
| Smaller archi. | 60.3 | 95.8 | 85.3 | 0.476 |
| w/o skip | 81.3 | 92.0 | 74.4 | 0.551 |
| w/o input img | 57.0 | 96.9 | 88.2 | 0.455 |
| w/o pretrain | 55.7 | 96.9 | 88.7 | 0.419 |
| ObsNet full | 54.2 | 97.1 | 89.1 | 0.396 |
Table 11: Ablation ObsNet without LAA training.
Next, we outline most of the experiments we run on LAA. First, there are two different kinds of setups: we can either minimize the predicted class probability (i.e., min p(c)) or maximize that of a different class (i.e., max p(k≠c)), with p = Seg(x) the vector of class probabilities, c = argmax p the predicted class, and k a random class. Then, we attack with five different strategies: all pixels in the image, random sparse pixels, the area of a random class, all pixels in a square patch, and all pixels in a random shape. We show in Table 10 the complete results on CamVid OOD. We can see that the random shape is the most effective. We use the FGSM because it is a well-known and easy-to-use adversarial attack. Since our goal is to hallucinate OOD objects, we believe the location and the shape of the attacked region are the important parts.
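As a rough illustration of the two setups, the sketch below applies a single FGSM-style step restricted to a binary mask (e.g., a random shape). The mask construction, the per-image choice of the random class k, and the exact loss formulation are assumptions made for the example, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def local_adversarial_attack(seg_net, image, mask, epsilon=0.02, mode="min_pc"):
    """Perturb only the pixels where mask == 1 (single FGSM-style step).

    mask: binary tensor broadcastable to `image`.
    mode="min_pc": minimize the probability of the predicted class c.
    mode="max_pk": maximize the probability of a random other class k.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = seg_net(image)              # (B, C, H, W)
    pred = logits.argmax(dim=1)          # per-pixel predicted class c
    if mode == "min_pc":
        # Ascending this loss lowers p(c) at every pixel.
        loss = F.cross_entropy(logits, pred)
    else:
        # One random target class k per image (an illustrative choice).
        k = int(torch.randint(0, logits.shape[1], (1,)))
        # Ascending the negated loss pushes probability mass towards p(k != c).
        loss = -F.cross_entropy(logits, torch.full_like(pred, k))
    loss.backward()
    # FGSM step, restricted to the attacked region by the binary mask.
    perturbed = image + epsilon * image.grad.sign() * mask
    return perturbed.detach()
```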
| Method | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ | ACE ↓ |
| --- | --- | --- | --- | --- |
| Softmax [25] | 61.9 | 96.5 | 84.4 | 0.480 |
| Void [6] | 79.9 | 90.7 | 67.3 | 0.504 |
| MCDA [1] | 65.8 | 96.3 | 83.1 | 0.440 |
| Temp. Scale [19] | 61.9 | 96.6 | 84.6 | 0.302 |
| ODIN [32] | 58.3 | 97.2 | 87.9 | 0.478 |
| ConfidNet [10] | 52.2 | 97.5 | 88.6 | 0.412 |
| Gauss Pert. [15,41] | 60.2 | 96.8 | 85.6 | 0.497 |
| Deep Ensemble [30] | 55.3 | 97.5 | 88.1 | 0.343 |
| MC Dropout [17] | 52.5 | 97.9 | 88.5 | 0.443 |
| ObsNet + LAA | 47.7 | 98.1 | 90.3 | 0.370 |
Table 12: Error detection evaluation on CamVid (best method in bold, second best underlined).
As shown in Figure 5, the best ϵ for the attack is 0.02, with a random shape blitted at a random position in the image. We can also see that even with a large ϵ, ObsNet still achieves reasonable performance.

Figure 5: Evolution of the FPR at 95% TPR for different values of epsilon on CamVid OOD.
3 Error detector
-----------------
The observer is trained to assess whether the prediction differs from the true class (which is always the case for OOD regions), so it also tends to assign low confidence scores to in-domain regions where errors are likely, as shown in Figure 6. This behavior is not caused by ObsNet but depends on the accuracy of the main network at test time, and should lessen with more accurate networks.
This effect shows that our method can also be used for error detection, where it outperforms all other methods, as illustrated in Table 12.
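A minimal sketch of how the observer's supervision signal can be derived is given below, assuming a standard ignore label; the function and argument names are hypothetical, and the paper's exact handling of void pixels may differ.

```python
import torch

@torch.no_grad()
def error_map_target(seg_net, image, gt_mask, ignore_index=255):
    """Binary target for the observer: 1 where the segmentation prediction
    differs from the ground truth (OOD pixels always count as errors),
    0 elsewhere."""
    pred = seg_net(image).argmax(dim=1)
    target = (pred != gt_mask).float()
    target[gt_mask == ignore_index] = 0.0  # assumption: void pixels are not supervised
    return target
```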
4 Additional Experiments: DeepLab v3+
--------------------------------------
We show in Table 13 the results on BDD Anomaly with a more recent DeepLab v3+ (L.-C. Chen et al., *Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation*, ECCV 2018) with a ResNet-101 encoder. Our method performs the best, while methods like ConfidNet do not scale when the segmentation accuracy increases, as they have fewer errors to learn from.
| Method | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ | ACE ↓ |
| --- | --- | --- | --- | --- |
| Softmax [25] | 60.3 | 95.8 | 81.4 | 0.228 |
| Void [6] | 68.8 | 90.2 | 74.0 | 0.485 |
| MCDA [1] | 68.1 | 95.1 | 78.8 | 0.265 |
| ConfidNet [10] | 64.5 | 95.4 | 80.9 | 0.254 |
| Gauss Pert. [15,41] | 61.4 | 96.1 | 82.4 | 0.186 |
| MC Dropout [17] | 60.0 | 96.0 | 82.0 | 0.219 |
| ObsNet + LAA | 58.8 | 96.3 | 83.0 | 0.185 |
Table 13: Evaluation on BDD Anomaly (best method in bold, second best underlined), with a DeepLab v3+.
5 CamVid OOD dataset
---------------------
For our experiments, we use urban street segmentation datasets with anomalies withheld during training. Unfortunately, there are few datasets with anomalies in the test set.
For this reason, we propose CamVid OOD, which will be made public after the review. To design CamVid OOD, we blit random animals into the test images of CamVid.
We add one such anomaly, each time different, to each of the 233 test images. The 367 training images remain unchanged. The anomalous animals are
*bear*, *cow*, *lion*, *panda*, *deer*, *coyote*, *zebra*, *skunk*, *gorilla*, *giraffe*, *elephant*, *goat*, *leopard*, *horse*, *cougar*, *tiger*, *sheep*, *penguin*, and *kangaroo*.
We label them with a 13th class, *animals*/*anomalies*, in the corresponding ground truth of the test set.
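A minimal numpy sketch of this blitting procedure is shown below, assuming the animal cut-outs come with an alpha channel; the class index, alpha threshold, and blending details are illustrative assumptions rather than the exact construction protocol.

```python
import numpy as np

ANOMALY_CLASS = 12  # hypothetical index of the added 13th class ("animals"/"anomalies")

def blit_anomaly(image, seg_mask, animal_rgba, top_left):
    """Paste an RGBA animal cut-out into a CamVid test image and update the
    ground truth with the anomaly class."""
    h, w = animal_rgba.shape[:2]
    y, x = top_left
    rgb, alpha = animal_rgba[..., :3], animal_rgba[..., 3:] / 255.0
    region = image[y:y + h, x:x + w]
    # Alpha-blend the animal over the street scene.
    image[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * region).astype(image.dtype)
    # Mark blended pixels as the anomaly class in the label map.
    seg_mask[y:y + h, x:x + w][alpha[..., 0] > 0.5] = ANOMALY_CLASS
    return image, seg_mask
```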
| Method | fpr95tpr ↓ | AuPR ↑ | AuRoc ↑ | ACE ↓ |
| --- | --- | --- | --- | --- |
| Softmax [25] | 67.5 | 94.7 | 82.5 | 0.529 |
| ConfidNet [10] | 58.4 | 96.4 | 86.8 | 0.462 |
| Gauss Pert. [15,41] | 61.8 | 95.8 | 85.7 | 0.473 |
| Deep Ensemble [30] | 63.9 | 96.5 | 86.4 | 0.468 |
| MC Dropout [17] | 52.8 | 97.2 | 88.5 | 0.483 |
| ObsNet + LAA | 42.1 | 97.7 | 91.4 | 0.423 |
Table 14: Error detection evaluation on CamVid with random square attacks (best method in bold).
This setup is similar to the Fishyscapes dataset [6], without the constraint of sending a TensorFlow model online for evaluation; our dataset is therefore easier to work with. We present some examples of the anomalies in Figure 7, with the ground truth highlighted in cyan.

Figure 6: Evaluation of error detection on the test set of CamVid. ObsNet predictions are close to the real errors even without OOD objects.

Figure 7: Examples of our dataset with anomalies and the ground truth.
6 Adversarial Attacks Detector
-------------------------------
In safety-critical applications like autonomous driving, the perception system has to be robust to adversarial attacks. Nevertheless, training a robust network is costly, and robustness comes with a trade-off between accuracy and run time. Moreover, *only* detecting the adversarial attack could be sufficient, as we can rely on other sensors (LiDAR, radar, etc.).
Although we do not focus on adversarial robustness in this work, we note empirically that ObsNet can detect an attack.
To some extent this is expected, as we explicitly train the observer to detect adversarial attacks thanks to the LAA.
Indeed, our observer can detect the area where the attack is performed, whereas MC Dropout is overconfident. Furthermore, in Table 14, we evaluate the adversarial attack detection of several methods: we apply an FGSM attack in a local square patch on each test image. Once again, our observer is the best method at capturing the perturbed area.
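For completeness, here is a small sketch of how the per-pixel detection metrics used throughout (fpr95tpr, AuRoc, AuPR) can be computed with scikit-learn from flattened scores and binary labels; ACE is omitted, and the paper's exact evaluation protocol may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, average_precision_score

def detection_metrics(scores, labels):
    """scores: per-pixel failure/attack scores (higher = more suspicious),
    labels: 1 for attacked/OOD pixels, 0 otherwise. Flattened numpy arrays."""
    fpr, tpr, _ = roc_curve(labels, scores)
    # FPR at the first threshold reaching 95% TPR (fpr95tpr, lower is better).
    idx = int(np.searchsorted(tpr, 0.95))
    fpr95 = float(fpr[min(idx, len(fpr) - 1)])
    return {
        "fpr95tpr": fpr95,
        "AuRoc": roc_auc_score(labels, scores),
        "AuPR": average_precision_score(labels, scores),
    }
```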
|
7927373e-da43-4702-9d74-dc3e444e89f9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Your existence is informative
Cross posted from Overcoming Bias. Comments there.
***
Warning: this post is technical.
Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on a given arbitrary planet. If Q is false, you put a low probability on this. You have a prior probability for Q. So far you have not taken into account your observation that the planet you are on has life. How do you update on this evidence, to get a posterior probability for Q? Since you don’t know which is ‘this’ planet, with respect to the model, you can’t update directly on ‘there is life on this planet’, by excluding worlds where this planet doesn’t have life. And you can’t necessarily treat ‘this’ as an arbitrary planet, since you wouldn’t have seen it if it didn’t have life.
I have an ongoing disagreement with an associate who suggests that you should take ‘this planet has life’ into account by conditioning on ‘there exists a planet with life’. That is,
P(Q|there is life on this planet) = P(Q|there exists a planet with life).
Here I shall explain my disagreement.
Nick Bostrom argues persuasively that much science would be impossible if we treated ‘I observe X’ as ‘someone observes X’. This is basically because in a big world of scientists making measurements, at some point somebody will make most mistaken measurements. So if all you know when you measure the temperature of a solution to be 15 degrees is that you are not in a world where nobody ever measures its temperature to be 15 degrees, this doesn’t tell you much about the temperature.
You can add other apparently irrelevant observations you make at the same time – e.g. that the table is blue chipboard – in order to make your total observations less likely to arise once in a given world (at its limit, this is the suggestion of FNC). However it seems implausible that you should make different inferences from taking a measurement when you can a
|
a1170aa8-393d-4b75-ab7f-a66f552e51cd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
POC || GTFO culture as partial antidote to alignment wordcelism
There is an important asymmetry in reception for prophets. Go read that post first if you haven't.
For those who don't want to, the gist is: Given the same level of specificity, people will naturally give more credit to the public thinker that argues that society or industry will change, because it's easy to recall active examples of things changing and hard to recall the vast amount of negative examples where things stayed the same. If you take the Nassim Taleb route of vapidly predicting, in an unspecific way, that interesting things are eventually going to happen, interesting things will eventually happen and you will be revered as an oracle. If you take the Francis Fukuyama route of vapidly saying that things will mostly stay the same, you will be declared a fool every time something mildly important happens.
The computer security industry happens to know this dynamic very well. No one notices the Fortune 500 company that doesn't suffer the ransomware attack. Outside the industry, this active vs. negative bias is so prevalent that information security standards are constantly derided as "horrific" without articulating the sense in which they fail, and despite the fact that online banking works pretty well virtually all of the time. Inside the industry, vague and unverified predictions that Companies Will Have Security Incidents, or that New Tools Will Have Security Flaws, are treated much more favorably in retrospect than vague and unverified predictions that companies will mostly do fine. Even if you're right that an attack vector is unimportant and probably won't lead to any real world consequences, in retrospect your position will be considered obvious. On the other hand, if you say that an attack vector is important, and you're wrong, people will also forget about that in three years. So better list everything that could possibly go wrong[1], even if certain mishaps are much more likely than others, and collect oracle points when half of your failure scen
|
518480ac-8d3a-4ef6-9bf0-0995f9d63519
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Saint Petersburg meetup. Habits and spring.
Discussion article for the meetup : Saint Petersburg meetup. Habits and spring.
WHEN: 18 April 2015 06:00:00PM (+0300)
WHERE: Санкт-Петербург, Невский просп., 54
Place updated Saturday, 18.04.15.
From 18:00 and for about four hours.
It would be in 'Subway' cafe please call me if you don't know me personally and can't find small piece of paper with 'LessWrong' written on it somewhere on the table.
Main theme - habits: - small presentations of articles on this topic and personal stories on things that help with establishing good habits and getting rid of bad ones. - brainstorm for good habit ideas and choosing the priorities in which everyone would like to implement them.
For dessert - a bragging thread, where we share our accomplishments. If you are thinking of coming please vote for your expected attendance here: https://vk.com/lwspb
I will be there even if I'll be there alone. For any questions I can be reached by phone +7(911)843-56-44 or social network vk.com/timetravel. Efim.
Discussion article for the meetup : Saint Petersburg meetup. Habits and spring.
|
ed43fb94-aa5c-4e81-9d3b-261ca6d725b0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Welcome to Seattle Rationality [Edit With Your Details]
(The following are our suggestions for what kind of information is best to include in the welcome post of your group, feel free to replace them with whatever you think is best)
What kind of events does your group usually run? What does it usually do?
How frequently does your group organize events or meet?
Who would be a good fit for you group?
Should they have any particular skills or have done some specific background reading?
|
14691843-6bf7-4263-aa41-57f53956039c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The metaphor you want is "color blindness," not "blind spot."
It's genuinely possible that the title is all you need; I was tempted to write nothing further. Feel free to take ten seconds and see if it's already clicked. If not, read on.
----------------------------------------
When something is in your literal blind spot, it's invisible to you, but your brain stitches together everything else around it to make you think that you're seeing a complete picture.
Reality:
Blind spot:
Perception:
We often use "blind spot" as a metaphor to gesture toward things we are unaware of, while also being at least somewhat unaware of our unawareness—we know that something fishy is going on, but we can't quite get our eyes on it.
e.g. "I think I might have a blind spot when it comes to status dynamics."
The thing about status dynamics, though, is that they aren't in one spot. There isn't a whole world that is being fully and accurately perceived, except for one blank space that's being glossed over.
Instead, what's usually going on (at least in my experience) is that the person can see everything, but there's some crucial component of the picture that they are unable to process or comprehend.
What this looks like, in practice, is an inability to distinguish two things which are very, very different, à la red-green color blindness:
"Look," says Alexis. "Look at the beautiful contrast."
Blake hesitates. "...you mean between the trees and the sky?"
----------------------------------------
Being (metaphorically) color-blind to something can be deeply frustrating. You keep pushing the X button, and very different things keep happening.
e.g. you are learning to play a Formula 1 racing simulator, and it feels like you did exactly the same thing on exactly the same curve both times, but one time you spun out and crashed and the other time you smoothly navigated right through.
e.g. you are repeating back to the native French speaker exactly the sounds she said to you, and sometimes getting nods and smiles and other times g
|
9dd5438e-37fe-4bfd-9005-540be18d3699
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meta is hiring for LLM red teaming position
Apply at the link.
We're expanding our red team, whose job it is to try breaking our LLMs to make them exhibit unexpected and unsafe behaviors. Note that one of the requirements for this specific position is a PhD in linguistics. If you have prior red teaming experience (professional or personal), even better :) If you do apply at the link, please send me a message on here as well.
If you do not meet the PhD requirement but you think you're a good candidate for the red team (e.g. have discovered new jailbreaks or adversarial techniques, are curious and have a hacker mindset, have done red teaming before, etc), you can still message me on here so we can keep you in mind if other red teaming positions open up in the future without the PhD qualification.
|
74712475-5a4e-40dd-aeb2-b13afeed95a5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Optimisation Measures: Desiderata, Impossibility, Proposals
Previously: Towards Measures of Optimisation
When thinking about optimisation processes it is seductive to think in information-theoretic terms.
Is there some useful measure[1] of 'optimisation' we can derive from utility functions or preference orderings, just as Shannon derived 'information' from probability distributions? Could there be a 'mathematical theory of optimisation' that is analogous to Shannon's theory of information? In this post we exhibit negative evidence that this point of view is a fertile direction of inquiry.
In the last post we reviewed proposals in that direction, most notably Yudkowsky's original idea using preference orderings, and suggested some informal desiderata. In this post we state our desiderata formally, and show that they can't all be satisfied at once. We exhibit a new proposal from Scott Garrabrant which relaxes one desideratum, and revisit the previous proposals to see which desiderata they satisfy.
Setup
Recall our setup: we're choosing an action from a set A to achieve an outcome in a set Ω. For simplicity, we assume that Ω is finite. Denote the set of probability distributions on Ω by ΔΩ. We have a default distribution p∈ΔΩ, which describes the state of affairs before we optimise, or in a counterfactual world where we don't optimise, and action distributions p_a∈ΔΩ for each a∈A, which describe the state of affairs if we do. Our preferences are described by a utility function u:Ω→R. Let U denote the set of utility functions.
In the previous post we considered random variables OP(p,u)(x), which measure the optimisation entailed by achieving some outcome x, given a utility function u and base distribution p. We then took an expectation over p_a to measure the optimisation entailed by achieving some distribution over outcomes, i.e. we defined OP(p, p_a, u) = E_{x∼p_a}[OP(p, u)(x)].
In this post we state our desiderata directly over OP(p, p_a, u) instead. For more on this point see the discussion of the convex-linearity desideratum below.
|
53cbb0b5-4e75-467c-bc66-886d0e555b0e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Mech Interp Challenge: August - Deciphering the First Unique Character Model
I'm writing this post to advertise the second in the sequence of monthly mechanistic interpretability challenges (the first to be posted on LessWrong). They are designed in the spirit of Stephen Casper's challenges, but with the more specific aim of working well in the context of the rest of the ARENA material, and helping people put into practice all the things they've learned so far.
In this post, I'll describe the algorithmic problem & model trained to solve it, as well as the logistics for this challenge and the motivation for creating it. However, the main place you should access this problem is on the Streamlit page, where you can find setup code & instructions (as well as a link to a Colab notebook with all setup code included, if you'd prefer to use this).
Task
The algorithmic task is as follows: the model is presented with a sequence of characters, and for each character it has to correctly identify the first character in the sequence (up to and including the current character) which is unique up to that point.
Explanation: "a" is unique in both the first two characters "ab", it's repeated in the 3rd character so "b" is now the first unique character, and then "b" is repeated in the 5th character which makes "c" the first unique character.
Model
Our model was trained by minimising cross-entropy loss between its predictions and the true labels, at every sequence position simultaneously (including the zeroth sequence position, which is trivial because the input and target are both always "?"). You can inspect the notebook training_model.ipynb in the GitHub repo to see how it was trained. I used the version of the model which achieved highest accuracy over 50 epochs (accuracy ~99%).
The model is a 2-layer transformer with 3 attention heads, and causal attention. It includes layernorm, but no MLP layers.
Note - although this model was trained for long enough to get loss close to zero (you can test this for yourself), it's not perfect. There are some
|
d1260135-a79f-457d-97c5-2f3587d7c995
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
"Epistemic status: Practising thinking aloud. There might be an important question here, but I might be making a simple error.There is a lot of variance in general competence between species. Here is the standard Bostrom/Yudkowsky graph to display this notion.There's a sense that while some mice are more genetically fit than others, they're broadly all just mice, bound within a relatively narrow range of competence. Chimps should not be worried about most mice, in the short or long term, but they also shouldn't worry especially so about peak mice - there's no incredibly strong or cunning mouse they ought to look out for.However, my intuition is very different for humans. While I understand that humans are all broadly similar, that a single human cannot have a complex adaptation that is not universal [1], I also have many beliefs that humans differ massively in cognitive capacities in ways that can lead to major disparities in general competence. The difference between someone who does understand calculus and someone who does not, is the difference between someone who can build a rocket and someone who cannot. And I think I've tried to teach people that kind of math, and sometimes succeeded, and sometimes failed to even teach basic fractions.I can try to operationalise my hypothesis: if the average human intelligence was lowered to be equal to an IQ of 75 in present day society, that society could not have built rockets or do a lot of other engineering and science.(Sidenote: I think the hope of iterated amplification is that this is false. That if I have enough humans with hard limits to how much thinking they can do, stacking lots of them can still produce all the intellectual progress we're going to need. My initial thought is that this doesn't make sense, because there are many intellectual feats like writing a book or coming up with special relativity that I generally expect individuals (situated within a conducive culture and institutions) to be much better at than groups of individuals (e.g. companies). This is also my understanding of Eliezer's critique, that while it's possible to get humans with hard limits on cognition to make mathematical progress, it's by running an algorithm on them that they don't understand, not running an algorithm that they do understand, and only if they understand it do you get nice properties about them being aligned in the same way you might feel many humans are today.It's likely I'm wrong about the motivation behind Iterated Amplification though.)This hypothesis doesn't imply that someone who can do successful abstract reasoning is strictly more competent than a whole society of people who cannot. The Secret of our Success talks about how smart modern individuals stranded in forests fail to develop basic food preparation techniques that other, primitive cultures were able to build.I'm saying that a culture with no people who can do calculus will in the long run score basically zero against the accomplishments of a culture with people who can.One question is why we're in a culture so precariously balanced on this split between "can take off to the stars" and "mostly cannot". An idea I've heard is that if a culture is easily able to reach technologically maturity, it will come later than a culture who is barely able to become technologically maturity, because evolution works over much longer time scales than culture + technological innovation. 
As such, if you observe yourself to be in a culture that is able to reach technologically maturity, you're probably "the stupidest such culture that could get there, because if it could be done at a stupider level then it would've happened there first."As such, we're a species whereby if we try as hard as we can, if we take brains optimised for social coordination and make them do math, then we can just about reach technical maturity (i.e. build nanotech, AI, etc). That may be true, but the question I want to ask about is what is it about humans, culture and brains that allows for such high variance within the species, that isn't true about mice and chimps? Something about this is still confusing to me. Like, if it is the case that some humans are able to do great feats of engineering like build rockets that land, and some aren't, what's the difference between these humans that causes such massive changes in outcome? Because, as above, it's not some big complex genetic adaptation some have and some don't. I think we're all running pretty similar genetic code.Is there some simple amount of working memory that's required to do complex recursion? Like, 6 working memory slots makes things way harder than 7?I can imagine that there are many hacks, and not a single thing. I'm reminded of the story of Richard Feynman learning to count time, where he'd practice being able to count a whole minute. He'd do it while doing the laundry, while cooking breakfast, and so on. He later met the mathematician John Tukey, who could do the same, but they had some fierce disagreements. Tukey said you couldn't do it while reading the newspaper, and Feynman said he could. Feynman said you couldn't do it while having a conversation, and Tukey said they could. They then both surprised each other by doing exactly what they said they could.It turned out Feynman was hearing numbers being spoken, whereas Tukey was visualising the numbers ticking over. So Feynman could still read at the same time, and his friend could still listen and talk.The idea here is that if you're unable to use one type of cognitive resource, you may make up for it with another. This is probably the same situation as when you make trade-offs between space and time in computational complexity.So I can imagine different humans finding different hacky ways to build up the skill to do very abstract truth-tracking thinking. Perhaps you have a little less working memory than average, but you have a great capacity for visualisation, and primarily work in areas that lend themselves to geometric / spacial thinking. Or perhaps your culture can be very conducive to abstract thought in some way.But even if this is right I'm interested in the details of what the key variables actually are.What are your thoughts?[1] Note: humans can lack important pieces of machinery."
The above is a chunk of text from a post created by a user on the website LessWrong.
Can you elaborate on the specific cognitive resources or skills that might compensate for a lower working memory, and how these might be cultivated or utilized in different fields?
Certainly, cognitive resources are diverse and can be utilized in various ways to compensate for a lower working memory. Here are a few examples:
1. **Visual-Spatial Skills**: Some individuals may have strong visual-spatial skills, which allow them to visualize complex scenarios or problems. These skills can be particularly useful in fields like architecture, engineering, and certain areas of mathematics and science. Visualization techniques can be improved through practice and training, such as through activities that involve spatial reasoning like puzzles or 3D modeling.
2. **Long-term Memory**: While working memory involves the short-term manipulation of information, long-term memory stores information for extended periods. Individuals with strong long-term memory skills can compensate for a lower working memory by effectively storing and retrieving information when needed. Techniques for improving long-term memory include mnemonic devices, repetition, and active recall.
3. **Attention Control**: The ability to focus and control one's attention is another cognitive resource that can compensate for lower working memory. This can be particularly useful in fields that require sustained concentration, such as programming or writing. Mindfulness and meditation practices can help improve attention control.
4. **Problem-Solving Skills**: The ability to break down complex problems into manageable parts, identify potential solutions, and implement these solutions is another way to compensate for lower working memory. This skill can be developed through practice, for example, by regularly tackling complex problems and reflecting on the strategies used to solve them.
5. **Metacognitive Skills**: Metacognition refers to "thinking about thinking" or the ability to monitor and control one's cognitive processes. Individuals with strong metacognitive skills can strategically manage their cognitive resources, which can help compensate for a lower working memory. These skills can be cultivated through practices like self-reflection, journaling, and feedback from others.
6. **Social and Emotional Intelligence**: In fields that involve a lot of interpersonal interaction, social and emotional intelligence can compensate for lower working memory. This involves understanding and managing one's own emotions, as well as understanding and responding to the emotions of others. Social and emotional intelligence can be improved through practices like empathy training, active listening, and role-playing exercises.
It's important to note that these are just a few examples, and the specific cognitive resources or skills that might be most useful can vary depending on the individual and the specific tasks or challenges they are facing.
Reflecting on the cognitive resources you've listed, can you identify potential drawbacks or limitations that might arise when relying heavily on these compensatory skills instead of a strong working memory?
Absolutely, while these compensatory skills can be extremely beneficial, they are not without their potential drawbacks or limitations:
1. **Visual-Spatial Skills**: While these skills can be incredibly useful for visualizing and solving complex problems, they may not be as helpful when dealing with abstract concepts that do not lend themselves to visual representation. Additionally, over-reliance on visual-spatial skills may lead to difficulties in tasks that require verbal or numerical processing.
2. **Long-term Memory**: Relying heavily on long-term memory can sometimes lead to inflexibility. While it's beneficial to be able to store and retrieve large amounts of information, it may become problematic if it prevents one from adapting to new information or changing circumstances. Furthermore, the retrieval of information from long-term memory can be slower than from working memory, which could be a disadvantage in tasks requiring quick decision-making.
3. **Attention Control**: While being able to focus one's attention is generally beneficial, an over-reliance on this skill could potentially lead to tunnel vision, where one becomes so focused on a specific task or problem that they miss other important information or opportunities.
4. **Problem-Solving Skills**: Relying heavily on problem-solving skills can sometimes lead to over-complication, where simple problems are made more complex than they need to be. Additionally, these skills may not be as useful in situations that require quick, intuitive decisions rather than deliberate problem-solving.
5. **Metacognitive Skills**: While metacognition is generally beneficial, an over-reliance on this skill can potentially lead to over-analysis or paralysis by analysis, where one spends so much time thinking about how to think or approach a problem that they struggle to actually take action.
6. **Social and Emotional Intelligence**: While these skills are crucial for interpersonal interactions, they may not be as useful in tasks that require logical reasoning or technical skills. Furthermore, an over-reliance on social and emotional intelligence could potentially lead to an overemphasis on the emotional aspects of a situation at the expense of logical or factual considerations.
In summary, while these compensatory skills can be highly beneficial, it's important to strike a balance and not rely too heavily on any one skill to the exclusion of others. The most effective approach often involves a combination of different cognitive resources and skills, tailored to the specific task or challenge at hand.
In relation to over-reliance on social and emotional intelligence, can you explore how this might impact decision-making in a high-stakes business scenario?
Certainly, social and emotional intelligence are critical skills in the business world, especially in leadership roles. They can facilitate effective communication, conflict resolution, team building, and the understanding of customer needs. However, an over-reliance on these skills in decision-making can have several potential impacts:
1. **Emotionally-Driven Decisions**: Over-reliance on emotional intelligence might lead to decisions that are more influenced by emotions rather than objective data. For instance, a business leader might choose a less qualified candidate for a job because they empathize with their personal story, or stick with an underperforming vendor due to a long-standing relationship. While empathy and loyalty are valuable traits, they might not always lead to the best business decisions.
2. **Neglect of Quantitative Analysis**: High-stakes business decisions often require rigorous quantitative analysis. If a leader is overly focused on the emotional and interpersonal aspects of a situation, they might neglect or undervalue important data, leading to less informed decisions.
3. **Groupthink**: A leader with high social intelligence might be very attuned to the opinions and feelings of their team. While this can foster a positive team culture, it might also lead to groupthink, where dissenting opinions are suppressed in favor of consensus. This can result in poor decision-making, as it limits the diversity of perspectives considered.
4. **Overemphasis on Consensus**: A leader with high social and emotional intelligence might place a high value on maintaining harmony and achieving consensus. While this can be beneficial in some situations, in others it might prevent necessary tough decisions from being made, or lead to compromises that dilute the effectiveness of a strategy.
5. **Delayed Decisions**: In an effort to understand and consider everyone's feelings and perspectives, a leader might take too long to make decisions, leading to missed opportunities or allowing problems to escalate.
In summary, while social and emotional intelligence are valuable skills in business, an over-reliance on them in decision-making can potentially lead to emotionally-driven decisions, neglect of quantitative analysis, groupthink, an overemphasis on consensus, and delayed decisions. As with all cognitive resources, they should be balanced with other skills such as logical reasoning, analytical skills, and a focus on objective data.
|
02fb80ca-a6e2-45bf-9ed2-a324a3c23dff
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How to stay concentrated for a long period of time?
I find it nearly impossible to focus on anything more than 2+ hours straight. Even if I enjoy an activity, I end up being distracted by a hell lot of things or just being too exhausted to continue doing it.
The situation is only getting worse if I have a task that is not rewarding in a short-term. Yet a lot of people I know can endure unpleasant tasks even for a few days (e.g. when they have to finish "that dull project" right before deadline).
The question is, how can a person develop ability for staying focused? What kind of tricks can be used? If these tricks can be harmful for health, I'd like to hear about them anyway.
|
06d102e3-1a7f-410f-b43b-ef70daa4603a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Bloggingheads: Yudkowsky and Horgan
I appear today on Bloggingheads.tv, in "Science Saturday: Singularity Edition", speaking with John Horgan about the Singularity. I talked too much. This episode needed to be around two hours longer.
One question I fumbled at 62:30 was "What's the strongest opposition you've seen to Singularity ideas?" The basic problem is that nearly everyone who attacks the Singularity is either completely unacquainted with the existing thinking, or they attack Kurzweil, and in any case it's more a collection of disconnected broadsides (often mostly ad hominem) than a coherent criticism. There's no equivalent in Singularity studies of Richard Jones's critique of nanotechnology - which I don't agree with, but at least Jones has read Drexler. People who don't buy the Singularity don't put in the time and hard work to criticize it properly.
What I should have done, though, was interpreted the question more charitably as "What's the strongest opposition to strong AI or transhumanism?" in which case there's Sir Roger Penrose, Jaron Lanier, Leon Kass, and many others. None of these are good arguments - or I would have to accept them! - but at least they are painstakingly crafted arguments, and something like organized opposition.
|
cf8cb14c-f93a-48c1-a2a5-744be088ddaa
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Breaking down the MEAT of Alignment
Preamble
This post is my attempt to try and organise some thinking about AI alignment in a way that hopefully acts as a partial-overview to some of the core ideas and approaches. It is mostly a review of existing ideas arranged with some light opinions thrown in. The goal is that this might help with spotting new combinations of ideas, offering a scaffold for thinking about alignment, or acting as an introductory resource / reference.
Introduction
Sufficiently powerful AI systems will probably not have the positive impact we hope them to have and in fact might go horribly wrong, unless we have in some way guided, or 'aligned', them to act in ways we are confident will be beneficial. Thus we have to have some idea of what this system will do that will make it have a positive impact on the world, we need some way of creating this system, and finally we need to be confident that what we've made (or are going to make) will have the properties we care about.
Thus there are three big things to think about in order to practically "solve alignment", or otherwise have safe deployment of advanced AI systems:
1. Target: What is the core thing we aiming to do and why?
2. MEthod: How can we try to achieve it?
3. Assurances: How will we know we've achieved it?
With slight rearrangement, we see we have the MEAT of alignment. Realistic alignment proposals ought to make some well-motivated selection across each of these categories, or equivalently be able to be decomposed into that, in order to show their suitability as a practical alignment strategy. The rest of this post will focus on detailing these categories and exploring how existing work fits inside of them.
A similar idea to this post is Training Stories, which breaks down proposals into training goals and training rationale. I frame things slightly differently and provide a more up-to-date view of existing literature (though I'm sure there's plenty of interesting and cool work I've missed), but I'd recommend people
|
4e5f1020-3391-43df-b87a-0b2ef432129d
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks
I Introduction
---------------
Recent years have seen an increasing public awareness of the profound implications of widespread artificial intelligence (AI) and large scale data collection. It is now common for both large tech companies and academic researchers to motivate their work on AI as interfacing with the “public interest,” matching external scrutiny with new technical approaches to making systems fair, secure, or provably beneficial. However, developing systems in the public interest requires researchers and designers to confront what has been elsewhere referred to as the “sociotechnical gap,” or the divide between the intended social outcomes of a system and what is actually achieved through technical methods [[1](#bib.bib89 "The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility")].
Interventions in Computer Science (CS) education have made strides towards providing students with frameworks within which to evaluate technical systems in social contexts [[48](#bib.bib87 "Integrating ethics within machine learning courses"), [51](#bib.bib88 "Ethics education in context: a case study of novel ethics activities for the cs classroom")]. These curricular modifications have drawn on fields like Law, Philosophy, and Science and Technology Studies (STS) to create both dedicated and integrated coursework promoting human contexts and ethics in CS [[21](#bib.bib86 "What do we teach when we teach tech ethics? a syllabi analysis")]. However, as the majority of these courses are currently offered at the undergraduate level, graduate students may not reap the benefits of such reforms [[21](#bib.bib86 "What do we teach when we teach tech ethics? a syllabi analysis")]. Given the role of graduate students as not only teachers, but drivers of cutting edge research and future decision makers in industry and academia, interventions aimed at them may play an outsized role in forwarding PIT goals.

Fig. 1: Three contemporary areas of artificial intelligence research whose overlapping forms of sociotechnical inquiry remain problematically defined: AI Safety, Fair Machine Learning, and Human-in-the-Loop Autonomy.
It is challenging to pin down what it would mean to train a graduate AI researcher to address the sociotechnical gap.
A key source of tension is the place of the sociotechnical in AI development: while practitioners claim to be working on technical solutions to social problems, theoretical and methodological formulations of the sociotechnical are inconsistent across prominent AI subfields, making it unclear if current initiatives in pedagogy and research are advancing, undermining, or neglecting the public interest.
In this paper, we go beyond coursework to analyze the historical and technical shifts behind the current conception of sociotechnical risks in prominent AI subfields.
We look to existing research domains that grapple with the social and technical spheres at distinct levels of abstraction, and examine how their limitations and insights reflect nascent, if problematic, forms of inquiry into the sociotechnical. We assess current research in the socially-oriented subfields of AI highlighted in Fig. [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks"), namely AI Safety; Fairness in Machine Learning (Fair ML); and Human-in-the-Loop (HIL) Autonomy.
AI Safety focuses on value-alignment of future systems and cautions against developing AI systems fully integrated with and in control of society.
Fair ML works to reduce bias
in algorithms with potentially deleterious effects on individuals or groups.
HIL Autonomy is a term we use to encompass emerging work on both human-robot interaction and cyber-physical systems. These research areas explore how to optimize interactions of autonomous systems with human intentions in the loop.
By tracing the history of these subfields in a comparative fashion, we are able to characterize their distinct orientations towards sociotechnical challenges, highlighting both insights and blindspots. The goal of this analysis is not to capture each subfield’s research agenda exhaustively, which is far beyond our scope. Instead, it is to highlight how these agendas claim relative access to some feature of the sociotechnical in the way they represent social problems as technically tractable.
In doing so, they claim legitimacy and authority with respect to public problems.
We next refine this comparative history with lenses borrowed from the Science and Technology Studies (STS) literature, emphasizing the ways the subfields interface with sociotechnical risks.
In particular, we portray how certain risks are deferred within each subfield’s agenda.
This deferral often takes the form of skillfully avoiding “abstraction traps” that have been recently highlighted by [[50](#bib.bib29 "Fairness and abstraction in sociotechnical systems")].
Such avoidance is important from a technical standpoint.
However, true engagement with the sociotechnical requires reflexively revealing and resolving risks beyond the piecemeal formalisms that have defined each subfield’s historical trajectory.
We conclude with a brief sketch of pedagogical interventions towards this goal. Beyond classroom ethics curricula, we propose an agenda for clinical engagement with problems in the public interest as a part of graduate training in AI.
This training would inculcate a direct appreciation of sociotechnical inquiry in parallel with the acquisition of specific technical skillsets.
It would prepare practitioners to evaluate their own toolkits discursively, rather than just mathematically or computationally.
II Emerging Sociotechnical Subfields of AI: Safety, Fairness, Resiliency
-------------------------------------------------------------------------
Recent technical work grappling with the societal implications of AI includes developing provably safe and beneficial artificial intelligence (AI Safety), mitigating classification harms for vulnerable populations through fair machine learning (Fair ML), and designing resilient autonomy in robotics and cyber-physical systems (HIL Autonomy).
At present, these areas constitute heterogeneous technical subfields with substantive overlaps, but lack discursive engagement and cross-pollination.
Below, we outline the history and motivating concerns of each subfield and identify key developments and convenings.
We highlight how the technical research agendas stem from distinctly sociotechnical concerns and will require interdisciplinary engagement
to fully map out their stakes.
By placing these subfields’ agendas in
the context of the local sociotechnical risks (respectively extinction, inequity, and accident), we argue that the representative technical formations (forecasting and value alignment, fairness criteria and accountability, controllability and reachability) iterate on those risks without reflexively interrogating or normatively addressing them.
Throughout this work we use the term AI system to refer to a technical system with significant automated components (e.g. automated decision making systems or self driving cars).
We note that AI is also a distinct field of academic study, further discussed in this section.
### II-A AI Safety
The field of artificial intelligence (AI) has often situated itself within a wider disciplinary context.
Famously, at the foundational summits at Dartmouth and MIT, computer scientists, logicians, and psychologists came together to chart a course for artificial intelligence to arrive at human-like capabilities [[45](#bib.bib37 "Artificial intelligence: a modern approach")].
That course, which laid the groundwork for
“good-old-fashioned” AI, had an incredible number of hiccups and was eventually overwritten. A similarly diverse group of disciplinary representatives moved the field away from symbolic and comprehensive logical reasoning to either more situated, interactionist understandings of cognition [[19](#bib.bib69 "What computers still can’t do: a critique of artificial reason"), [2](#bib.bib59 "Computation and human experience")] or to biologically-inspired, connectionist strategies of learning [[54](#bib.bib67 "Connectionism and neural networks")].
Following these interdisciplinary developments, there has been a growing concern about the capabilities of AI systems to endanger humans and society writ large [[14](#bib.bib44 "Ethical issues in advanced artificial intelligence"), [59](#bib.bib43 "Artificial intelligence as a positive and negative factor in global risk"), [3](#bib.bib65 "Racing to the precipice: a model of artificial intelligence development")].
Motivated both by longstanding concerns about the possibility of a “technological singularity” [[34](#bib.bib38 "The singularity is near: when humans transcend biology")] as well as recent expansive applications of machine learning in critical infrastructure domains, many AI Safety promoters fear that AI researchers are approaching a level of capability that will expand beyond their control [[34](#bib.bib38 "The singularity is near: when humans transcend biology"), [15](#bib.bib60 "Superintelligence")].
Belief in the prospect of an arbitrarily capable intelligent agent beyond designer control has raised the prospect of extinction, whether of humanity or all natural life, as a clear and present danger for AI development, serving as this field’s distinct sociotechnical risk scenario.
Regardless of the likelihood of such a scenario, the nascent field of AI Safety has arisen to preemptively confront these dangers.
AI Safety takes a rather radical approach to the type of systems level thinking that we discuss, often viewing technical developments on a much longer and wider timescale—see, for example, the published work on AI arms races and potential research agendas to avert them [[3](#bib.bib65 "Racing to the precipice: a model of artificial intelligence development"), [42](#bib.bib66 "Beyond mad? the race for artificial general intelligence")]. As such, a common feature of AI Safety research is forecasting future AI capabilities against various time horizons.
Despite recent high-profile endorsements from computer scientists and philosophers such as Stuart Russell [[46](#bib.bib64 "Human compatible: artificial intelligence and the problem of control")] and Nick Bostrom [[15](#bib.bib60 "Superintelligence")], AI Safety is still a nascent research community. At present there is no independent conference for this field, although workshops and panels on AI Safety have become a regular fixture of larger AI venues such as NeurIPS, ICML, and AAAI, while specific AI-safety oriented research labs (e.g. CHAI, OpenAI) host invited technical presentations on a semiweekly or monthly basis. The field has also attracted interest from research centers and philanthropic organizations dedicated to the study and mitigation of long-term existential risk, as well as industry leaders in AI.
Central to this work is a motivation to align AI systems with intended rather than specified rewards, as humans struggle to make explicit the rich normative context of their own goals and behaviors. Through this shift, AI Safety adjusts the framing of classical AI development towards “provably beneficial” rather than merely optimal systems.
Under this framing, researchers focus largely on the problem of *value alignment*, i.e. whether or not an AI agent’s programmed objective matches those of relevant humans or humanity as a whole [[52](#bib.bib70 "Agent foundations for aligning machine intelligence with human interests: a technical research agenda")].
For example, by understanding the problem of aligning an AI agent with a human collaborator as a problem of inverse reinforcement learning, researchers seek to solve this issue with a largely technical approach by borrowing core principles from economic game theory [[27](#bib.bib71 "Cooperative inverse reinforcement learning")].
Considered as a whole, extended sociotechnical inquiry in AI Safety remains limited to catastrophic risk evaluation in cases where humanity’s survival is at stake—a scale of concern that is not often found in engineering disciplines. Moreover, rigorous formal work often relies on intuition from mechanism design (e.g. an objectives-first approach, perfectly rational agents) whose assumptions inherit some of the formal limitations of and controversies surrounding prospect theory and social choice theory.
Stemming from AI Safety, we see vigorous discussions surrounding AI Policy [[16](#bib.bib25 "Toward trustworthy ai development: mechanisms for supporting verifiable claims")], ethics [[23](#bib.bib24 "Artificial intelligence, values and alignment")], and even reflexive interrogations as a practice in forecasting [[26](#bib.bib23 "When will ai exceed human performance? evidence from ai experts")]. While lacking some qualities of sociotechnical inquiry, in particular a deeply reflexive methodology and historical orientation, we see potential to pivot these discussions away from narrowly-framed thought experiments about paperclip-maximizing robots [[59](#bib.bib43 "Artificial intelligence as a positive and negative factor in global risk")] towards comparative investigations of the normative stakes of distinct AI-society interfaces.
### II-B Fairness in Machine Learning
The field of machine learning (ML) emerged in the late 1950s with the design of a self-improving program for playing checkers [[49](#bib.bib32 "Some studies in machine learning using the game of checkers")] and quickly found success with static tasks in pattern classification, including applications like handwriting recognition [[32](#bib.bib15 "Learning machines: foundations of trainable pattern classifying systems")].
ML techniques work by detecting and exploiting statistical correlations in data, towards increasing some measure of performance.
A prominent early machine learning algorithm was the perceptron [[44](#bib.bib13 "The perceptron, a perceiving and recognizing automaton project para")], an example of supervised classification, perhaps the most prevalent form of ML.
In this setting, a classifier (or model) is trained with labelled examples, and its performance is measured by its accuracy in labelling new instances.
The perceptron spurred the development of deep learning techniques mid-century [[40](#bib.bib11 "A sociological study of the official history of the perceptrons controversy")]; however, they soon
fell out of favor, only having great success in recent decades in the form of neural networks via the increasing availability of computation and data.
Many ML algorithms require large datasets for good performance, tying the field closely with “big data.”
However, optimizing predictive accuracy does not generally ensure beneficial outcomes when predictions are used to make decisions, a problem that becomes stark when individuals are harmed by the classification of an ML system.
The inequality resulting from system classifications is the central sociotechnical risk of concern to practitioners in the subfield of Fair ML.
A growing awareness of the possibility for bias in data-driven systems developed over the past fifteen years, starting in the data mining community [[41](#bib.bib6 "Discrimination-aware data mining")] and echoing older concerns of bias in computer systems [[22](#bib.bib7 "Bias in computer systems")].
The resulting interest in ensuring “fairness” was
further catalyzed by high profile civil society investigation (e.g. ProPublica’s Machine Bias study, which highlighted racial inequalities in the use of ML in pretrial detention) and legal arguments that such systems could violate anti-discrimination law [[10](#bib.bib12 "Big data’s disparate impact")].
At the same time, researchers began to investigate model “explainability” in light of procedural concerns around the black box nature of deep neural networks.
The research community around Fairness in ML began to crystallize with the ICML workshop on Fairness, Accountability, and Transparency in ML (FAT/ML), and has since grown into the ACM conference on Fairness, Accountability, and Transparency (FAccT) established in 2017.
By shifting the focus to fairness properties of learned models, Fair ML adjusts the framing of the ML pipeline away from a single metric of performance.
There are broadly two approaches: individual fairness, which is concerned with similar people receiving similar treatment [[20](#bib.bib18 "Fairness through awareness")], and group fairness which focuses on group parity in acceptance or error rates [[9](#bib.bib17 "Fairness and machine learning")].
The details of defining and choosing among these *fairness criteria* amount to normative judgements about which biases must be mitigated, with some criteria being impossible to satisfy simultaneously.
Much technical work in this area focuses on algorithmic methods for achieving fairness criteria through either pre-processing on the input data [[17](#bib.bib8 "Optimized pre-processing for discrimination prevention")], in-processing on the model parameters during training [[60](#bib.bib9 "Fairness constraints: a flexible approach for fair classification.")], or post-processing on model outputs [[28](#bib.bib10 "Equality of opportunity in supervised learning")].
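As a rough illustration of how such group fairness criteria are operationalized, the sketch below (a toy of our own construction, not any system from the cited literature; the data, threshold, and function names are hypothetical) computes demographic parity and equal opportunity gaps for a thresholded score.

```python
import numpy as np

def fairness_gaps(y_pred, y_true, group):
    """Return (demographic parity gap, equal opportunity gap) between two groups.

    y_pred : 0/1 predictions, y_true : 0/1 labels, group : 0/1 group membership.
    """
    sel, tpr = {}, {}
    for g in (0, 1):
        mask = group == g
        sel[g] = y_pred[mask].mean()                          # selection rate P(Yhat=1 | A=g)
        pos = mask & (y_true == 1)
        tpr[g] = y_pred[pos].mean() if pos.any() else np.nan  # true positive rate P(Yhat=1 | Y=1, A=g)
    return abs(sel[0] - sel[1]), abs(tpr[0] - tpr[1])

# Example: a classifier that thresholds a score at 0.5, evaluated for group disparity.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)
y_true = rng.integers(0, 2, n)
scores = rng.random(n) + 0.1 * group          # scores slightly inflated for group 1
y_pred = (scores > 0.5).astype(int)
dp_gap, eo_gap = fairness_gaps(y_pred, y_true, group)
print(f"demographic parity gap: {dp_gap:.3f}, equal opportunity gap: {eo_gap:.3f}")
```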
The Fair ML community is oriented towards the sociotechnical, engaging actively with critiques from STS perspectives.
FAccT is a strong locus of interdisciplinary thought within computer science,
and the addition of *transparency* and *accountability* to the title opens the door to a wider range of interventions.
Building upon model-focused concepts like explainability, blendings of technical and legal concepts of recourse [[55](#bib.bib31 "Actionable recourse in linear classification")] and contestability [[38](#bib.bib30 "Shaping our tools: contestability as a means to promote responsible algorithmic decision making in the professions")] widen the frame to explicitly consider the reaction of individuals to their classification. Similarly, there have been multiple calls to re-center
stakeholders
to understand how explanations are interpreted and if they are even serving their intended purpose [[35](#bib.bib72 "Explanation in artificial intelligence: insights from the social sciences"), [12](#bib.bib74 "Machine learning explainability for external stakeholders")].
The community is increasingly open to discussing scenarios in which technical intervention, like the police use of facial recognition, is not desired. This encompasses both
technical resistance [[33](#bib.bib21 "POTs: protective optimization technologies")] and procedural approaches to delineating the valid uses of data [[24](#bib.bib20 "Datasheets for datasets")] and models [[36](#bib.bib27 "Model cards for model reporting")].
### II-C Human-in-the-Loop Autonomy
As many of the earliest robotic systems were remotely operated by technicians, the field of robotics has always had problems of human-robot interaction (HRI) at its core [[25](#bib.bib5 "Human-robot interaction: a survey")].
Early work was closely related to the study of human factors, an interdisciplinary endeavor drawing on engineering psychology, ergonomics, and accident analysis [[5](#bib.bib2 "Ironies of automation")].
With advancements in robotic capabilities and increasing autonomy, the interaction paradigm grew beyond just teleoperation to *supervisory control*.
HRI emerged as a distinct multidisciplinary field in the 1990s with the establishment of the IEEE International Symposium on Robot & Human Interactive Communication.
Modern work in this area includes modeling interaction from the perspective of the autonomous agent (i.e. robot) rather than just the human overseer.
By incorporating principles from the social sciences and cognitive psychology, HRI uses predictions and models of human behavior to optimize and plan. This work mitigates the sociotechnical risk of accidents – defined specifically as states in which physical difficulties or mishaps occur.
Such physical risks are mitigated by making models robust to these potentially-dangerous conditions.
Digital technology has advanced to the point that many systems are endowed with autonomy beyond the traditional notion of a robotic agent, including traffic signal networks and the power grid.
We thus consider the subfield of *HIL Autonomy* to be the cutting edge research that incorporates human behaviors into robotics and cyber-physical systems.
This subfield proceeds in two directions: 1) innovations in physical interactions via sensing and behavior prediction; 2) designing for system resiliency in the context of complicated or unstable environments.
These boundaries are blurring in the face of increasingly computational methods and the prospective market penetration of new technologies. For example, the design of automated vehicles (AVs) poses challenges along many fronts.
For more fluent and adaptable behaviors like merging, algorithmic HRI attempts to formalize models for one-on-one interactions.
At the same time, AVs pose the risk of physical harm, so further lines of work integrate these human models to ensure safety despite the possibility of difficult-to-predict actions.
Finally, population-level effects (e.g. AV routing on traffic throughput and induced demand) require deeper investigation into interaction with the social layer.
The emerging subfield of HIL Autonomy uses ideas from classical control theory while trying to quantify and capture the risk and uncertainty of working with humans [[4](#bib.bib40 "Cyber-physical systems"), [7](#bib.bib41 "Ensuring safety, security, and sustainability of mission-critical cyber–physical systems")].
It thus inherits some of the culture around verifying safety and robustness through a combination of mathematical tools and physical redundancy,
due to a history of safety-critical applications in domains like aerospace.
Technical work in this area typically entails including the human as part of an under-actuated dynamical system [[47](#bib.bib26 "Active preference-based learning of reward functions."), [58](#bib.bib3 "Stabilizing traffic with autonomous vehicles")], such as an un-modeled disturbance. Through this lens, human-induced uncertainty is mitigated by predicting behavior in a structured manner, maintaining the safety of the system through mathematical robustness guarantees [[6](#bib.bib28 "A scalable framework for real-time multi-robot, multi-human collision avoidance")].
To make this concrete, a lane-change maneuver in an AV might include both an aggressive driving plan that takes likely human behaviors into account as well as a reachability safety criterion which could be activated via feedback if observed human behavior falls outside of the expected distribution.
At a higher level of planning, the lane change maneuver may only be directed if it is expected to be advantageous for global traffic patterns.
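The following toy sketch (our own illustration, not an actual AV planning stack; all class names and thresholds are hypothetical) mirrors this two-tier structure: a nominal merge plan computed against a simple constant-velocity prediction of the human driver, with a conservative fallback triggered when the observed behavior deviates too far from that prediction, standing in for a reachability-based safety criterion.

```python
import numpy as np

class LaneChangeController:
    """Toy sketch: plan a lane change using a predicted human trajectory, but fall
    back to a conservative action when the observation deviates too far from the
    prediction (a crude stand-in for a reachability-based safety criterion)."""

    def __init__(self, deviation_threshold=2.0):
        self.deviation_threshold = deviation_threshold  # metres

    def predict_human(self, human_state, horizon=10, dt=0.1):
        # Constant-velocity prediction of the other driver's longitudinal position.
        pos, vel = human_state
        return np.array([pos + vel * dt * k for k in range(1, horizon + 1)])

    def act(self, ego_state, human_state, observed_human_pos):
        predicted = self.predict_human(human_state)
        deviation = abs(observed_human_pos - predicted[0])
        if deviation > self.deviation_threshold:
            # Observed behaviour is outside the expected distribution:
            # activate the safe fallback (abort the merge, keep lane, brake gently).
            return {"manoeuvre": "abort", "accel": -1.0}
        # Nominal plan: merge only if the predicted gap stays large enough.
        gap = predicted.min() - ego_state[0]
        if gap > 8.0:
            return {"manoeuvre": "merge", "accel": 0.5}
        return {"manoeuvre": "wait", "accel": 0.0}

controller = LaneChangeController()
action = controller.act(ego_state=(0.0, 25.0), human_state=(15.0, 24.0),
                        observed_human_pos=15.5)
print(action)
```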
The extent to which HIL Autonomy engages with the sociotechnical is thus far limited.
Human-centered research focuses on localized one-to-one interactions, while research considering more global interactions remains largely in the realm of the technical.
However, the critical “alt.HRI” track at the ACM/IEEE International Conference on Human-Robot Interaction indicates an emerging interest in how robotic systems interact with society more broadly.
In such venues, questions are raised surrounding how robots interact with social constructions of race [[11](#bib.bib76 "Robots and racism"), [53](#bib.bib79 "Robotics has a race problem")] and issues of robot-community integration are being studied in settings ranging from healthcare [[30](#bib.bib77 "Arts+ health: new approaches to arts and robots in health care")] to gardening [[57](#bib.bib78 "Adapting to a robot: adapting gardening and the garden to fit a robot lawn mower")].
There is also work which considers the incorporation of social values into cyber-physical systems, e.g. fair electricity pricing for smart grids [[31](#bib.bib4 "Fairness in real-time energy pricing for smart grid using unsupervised learning")].
While our identification of this emerging subfield is perhaps more speculative than the previous two, the physical realization of AI technologies will remain a crucial site of sociotechnical inquiry.
III Sociotechnical Integration
-------------------------------
While the subfields of AI Safety, Fair ML, and HIL Autonomy each consider problems at the interface of technology and human or social factors, there are differences which arise in part due to their disparate histories.
One difference is in time-scales.
AI Safety is primarily concerned with long term outcomes of mis-aligned AI development, while Fair ML focuses on practical implementations of individual models and algorithms with imperfect datasets.
HIL Autonomy sits between the two, with both longer-term considerations of how numerous autonomous agents will re-define how humans interact in the environment and a short-term focus on maintaining safety, e.g. in the presence of unexpected adverse road conditions.
Another difference arises from how
the subfields position themselves at different levels of abstraction.
HIL Autonomy is physically grounded, with a history closely tied with embodied interaction with humans and the social layer, while
Fair ML is socially grounded, and has strong instincts for sociotechnical dialog and historical situatedness.
On the other hand, AI Safety positions itself at the highest level of generality, relegating machine learning to the status of a tool and interpreting robotics as an application of formal guarantees.
For these subfields to place their sociotechnical inquiry on firmer foundations, it will be necessary to establish more reflexive relationships with their inherited assumptions about risk. Each subfield interprets itself as filling well-defined sociotechnical gaps, i.e. that there is a discernible divide between social problems and technical agendas. But in fact, the way these subfields have defined and worked on those gaps is itself problematic, piecemeal, and lacking in definition, i.e. it is normatively indeterminate. Reflexive inquiry is needed not to fill those gaps, but to define and interpret them more richly, so that their salience and urgency can be evaluated.
At a minimum, researchers and practitioners must learn to see behind their own technical abstractions to the social reality they assume, recognize that this reality may have been problematically defined, and learn to inquire into these definitions directly, perhaps with the aid of new transdisciplinary tools.
We now provide a high-level summary of this agenda, moving from a comparison of common technical traps to more indeterminate conceptions of sociotechnical risk.
### III-A Grappling with Shortcomings in Framing
Each of the subfields discussed in the previous section seeks to expand the technical framing of their parent field to include human and social factors.
In [[50](#bib.bib29 "Fairness and abstraction in sociotechnical systems")], the *framing trap* is introduced as the failure to model the full system of interest (e.g. with respect to a notion of fairness or safety).
Technical researchers are at risk of falling into this trap whenever they draw a *bounding box* around the system that they study.
Often, the consequences of this trap manifest as the *portability trap* [[50](#bib.bib29 "Fairness and abstraction in sociotechnical systems")], which occurs when technical solutions designed for one domain or environment are misapplied to another context.
Technical researchers are at risk of falling into this trap whenever they mistakenly view a bounding box as appropriate to a new context.
The subfields of AI Safety, Fair ML, and HIL Autonomy can be viewed as attempts to avoid the framing trap.
In the fields of AI, ML, and robotics, the workflow often entails featurization by defining data or inputs/outputs, optimization by fitting a model or designing a control policy, and then integration into the larger system.
Researchers in the emerging sub-fields are beginning to understand the downsides of this unidirectional workflow, and the necessity of interrogating the modelling choices made at each step.
For example, AI Safety questions the way that features are used to define optimization objectives in light of potentially catastrophic effects of integration, while Fair ML questions the inequalities arising from model optimization.
Still, sometimes the frame is not opened wide enough.
For example, by failing to account for the larger system in which risk assessments are used, approaches to Fair ML may mistakenly treat loaning decisions the same way they treat pretrial detention, despite salient differences between the financial and criminal justice systems.
By adopting more rigorously a *heterogeneous engineering* approach [[50](#bib.bib29 "Fairness and abstraction in sociotechnical systems")], researchers and practitioners can explicitly determine which properties are not tied to the technical objects under design but to their social contexts.
For example, the aerospace industry is an engineering domain with considerable heterogeneity—an awareness of the regulatory context, from the flight deck procedures to air traffic control, is necessary for the development of flight technologies.
### III-B Abstraction Traps in AI Research
To motivate a stronger cross-disciplinary discourse among and outside of these subfields, we now make further use of the framework of abstraction traps provided by [[50](#bib.bib29 "Fairness and abstraction in sociotechnical systems")] to point systematically to shortcomings and highlight potential new areas of inquiry.
Alongside the framing and portability traps, we discuss: the formalism trap, the ripple effect trap, and the solutionism trap.
The *formalism trap* occurs when mathematical formalisms fail to capture important parts of the human context. For example, the fairness of a system is often judged by procedural rather than technical elements, and the perceived reliability may depend more on predictability rather than formally verified safety.
All of the discussed subfields are posed to fall into the formalism trap, which requires a deeper engagement with sociotechnical complexities to avoid.
Ultimately, the validity and desirability of specific metrics arising from mathematical abstractions will be determined through intimate reference to social context rather than technical parsimony.
If systems are not flexible enough to allow for public input, their validity can be compromised.
The *ripple effect trap* occurs when there is a failure to understand how technology affects the social system into which it is inserted.
AI Safety considers ripple effects to some extent, but in a narrowly formal manner.
For example, while automated vehicles are known to affect traffic, road, and even infrastructure design, most technical research has focused on incorporating these as features to be modeled rather than questioning the status of AVs as the dominant form of future mobility.
Engagement across the entire sociotechnical stack requires understanding social phenomena like the “reinforcement politics” of dominant groups using technology to remain in power and “reactivity” like gaming and adversarial behavior. If a system encourages people to behave in an adversarial manner, it may call for utilizing richer design principles to promote cooperation, rather than merely throwing more advanced AI methods at the assumed dynamics.
Finally, the *solutionism trap* occurs when designers mistakenly believe that technical solutions alone can solve complex sociological and political problems. For example, while the legal community has encouraged technical fields to build systems that are reliably safe and fair, these interventions must be specified in terms of norms that can be appropriately internalized by practitioners.
The General Data Protection Regulation has had a mixed reception—while it did articulate normative landmarks for subfields to pay attention to, some of its requirements (e.g. consent as a legal basis for data processing) were highly underspecified. This specification vacuum empowered prominent private actors to advance their own standards in a way that is ethically questionable but politically effective, achieving market buy-in from enough other actors before the law can catch up [[56](#bib.bib90 "(Un)informed Consent: Studying GDPR Consent Notices in the Field"), [39](#bib.bib91 "Dark Patterns Post-GDPR: Scraping Consent Interface Designs and Demonstrating their Influence")]. Technical practitioners will need the ability to stand up and contest would-be standards publicly, rather than relying on the law to interpret systems before their sociotechnical scope has been appropriately modeled.
To avoid the solutionism trap, it is important to maintain a robust culture of questioning which problems should be addressed, and why these problems and not others: in the form of humility or a “first, do no harm” perspective.
### III-C From Avoiding Traps to Anticipating Risks
An important initial step for grappling with abstraction traps is for technical practitioners in fields of AI Safety, Fair ML, and HIL Autonomy to consider them explicitly when attempting to solve and formulate problems.
In following with [[50](#bib.bib29 "Fairness and abstraction in sociotechnical systems")], we find it most helpful to consider the traps in reverse order: is it worth designing a technical solution?
Can we adequately reason about how the technology will affect its social context?
Can the desired properties of the system be captured by mathematical abstractions?
Are the technical tools appropriate to the context?
And are all relevant actors included in the framing?
By considering these questions, researchers and practitioners will be encouraged to grapple with the plural temporalities defined by ongoing sociotechnical engagement through the validation of assumptions behind featurization, optimization, and integration.
While researchers in AI Safety, Fair ML, and HIL Autonomy are well positioned to begin asking these questions, it is only a first step. There is an inherent vulnerability in applying computational decision heuristics to vital social domains.
Autonomous AI systems introduce possibilities of catastrophic failure and normative incommensurability to contexts that were previously accessible only to human judgment and which we may never be able to exhaustively specify or completely understand. Beyond the mere avoidance of conceptual traps, practitioners must learn to anticipate sociotechnical risks as integral to the endeavor of building AI systems that interface with social reality.
The distinct intuitive approaches to risk taken by each of the examined subfields (extinction, inequality, and accident) stem from alternative histories of the sorts of dangers faced when integrating systems within a normative social order. In other words, while these research communities have adopted tools and mathematical formalisms that purport to represent and work on discrete social phenomena, in fact the tools themselves are sociotechnical interventions, and their elaboration is justified according to historically-sedimented perceptions of risk.
Rather than systems that represent and affect specific social objects (e.g. people, institutions), we advocate for the concept of AI as a process of elaborating normative commitments whose technical refinement generates unprecedented positions [[18](#bib.bib73 "Hard choices in artificial intelligence: addressing normative uncertainty through sociotechnical commitments")]. From these positions, novel sociotechnical questions can be revealed, resolved, or deferred.
IV Towards Clinical Training for Graduate Pedagogy
---------------------------------------------------
How can researchers and practitioners learn to anticipate sociotechnical risks?
Awareness of abstraction traps may corroborate an appreciation of risks, but it does not provide the tools with which to anticipate or understand them.
For example, pedagogical reforms based on coursework drawn from Science and Technology Studies, Philosophy, and Law may inspire a requisite caution in technical practitioners.
However, this caution remains insufficient to define the problem space of appropriate uses of AI.
Instead, it will be necessary to encourage the coordination of technical and social scientists on these matters. In what follows, we interrogate this by evaluating possible reforms in graduate pedagogy.
Graduate students are a fruitful site for intervention for three primary reasons: 1) their educational role in shaping the next generation of engineers, 2) their role in pushing forward emerging areas of research and 3) their future as management and decision-makers at technology companies. Students should have the ability to recognize when a single development pipeline is trying to engage in multiple abstractions simultaneously because its metaphors are confused (e.g. the fact that certain AI Safety formalisms [[27](#bib.bib71 "Cooperative inverse reinforcement learning")], understood in terms of a principal-agent game, can function both as a form of mechanism design and as a kind of interface between user and robot).
It is further important that they have the ability to contest, merge, or even dissolve these frames if necessary. This will entail a major cultural transition in how the goals of graduate training are defined, moving away from failure-avoidance engineering in controlled environments to the responsible integration of technology in human contexts.
While there are efforts to widen the scope of a technical education and augment it with political and ethical training [[8](#bib.bib35 "Studying up: reorienting the study of algorithmic fairness around issues of power"), [37](#bib.bib34 "Towards a more representative politics in the ethics of computer science")], a truly sociotechnical graduate education would teach the skills of how to draw a technical bounding box as well as how to communicate those decisions to the publics that will have to reckon with the potential benefits and harms of new technology. Education cannot carve up the world into specific problem domains, but it could help coordinate concerns in a constructive manner that enables the development of context-appropriate validation metrics, as others have begun to do by synthesizing common technical pitfalls [[50](#bib.bib29 "Fairness and abstraction in sociotechnical systems")].
While coursework lays the foundations for research, it cannot provide a descriptive ontology that would exhaustively capture sociotechnical risks in advance of active inquiry.
Anticipating and mitigating such risks requires an immersion in the relevant social context, becoming richly familiar with its phenomenology from the human standpoint. Only by doing this is it possible to register the system specification in terms of the concrete normative stakes rather than abstract approximations of optimal behavior.
This entails an ontological shift away from a purely mechanistic description of the domain in favor of a clinician’s perspective, comparable in scope and significance to the emergence of modern medical and legal clinics [[13](#bib.bib61 "Becoming a physician: medical education in britain, france, germany, and the united states, 1750-1945"), [43](#bib.bib62 "The history of legal clinics in the us, europe and around the world"), [29](#bib.bib63 "Clinical legal education: the history and development of a law clinic")].
We believe a distinctly clinical approach to social problems—engaged and prolonged consultation, direct provision of service, relationships with clients, hands-on education overseen by professors—is the best approach.
Technical work will always rely on abstraction and framing to describe the environment in which a system is designed to function. It falls on technical researchers and practitioners to understand how to specify such a *bounding box*, decide which frames and abstractions are valid and tractable as well as commensurate with stakeholder concerns, and articulate their choices to relevant communities with varying technical backgrounds. We see this “clinician’s eye,” entailing effective framing and communication, as the most promising potential outcome of reforming AI pedagogy at the graduate level, and defer further investigation of clinical approaches in the context of CS education to future work.
V Conclusion
-------------
The work of defining “sociotechnical” problems in AI development is ongoing.
Systems themselves often make symbolic reference to situations, environments, or objects that are assumed to lie behind their representations unreflectively, allowing the same mathematical structures to propagate without interrogating key metaphorical frames.
This norm results in a practice incommensurate with other expert professions’ standards of liability.
Along with the inconsistency between subfields, this makes it hard to define what constitutes an AI expert and how responsibility should be assigned when systems fail.
Looking from the outside in, the legal and philosophical communities cannot enforce standards that are neither backed up by established forms of expertise
nor immediately translatable outside the context of technical-mathematical formalism, meaning case law and abstract ethics cannot fully determine or guide sociotechnical regulation.
Given this normative indeterminacy, we argue there is no ready-made delineation of which technical tools are suited to which social problems, and instead look to prospective interventions nurturing new forms of inquiry into inherited notions of risk.
On this view, interventions would embrace the notion that elaborating on sociotechnical problems and procedures is essential to the task itself, and practitioners would understand the sociotechnical simply as part of what they do.
We argue this is the more sure path to effective norms for distinct subfields of AI development, and thus to the aims of Public Interest Technology.
|
47f51f28-8295-4660-9793-7192a18fccd4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Transcription of Eliezer's January 2010 video Q&A
Spurred by discussion of whether Luke's Q&A session should be on video or text-only, I volunteered to transcribe Eliezer's Q&A videos from January 2010. I finished last night, much earlier than my estimate, mostly due to feeling motivated to finish it and spending more time on it than my very conservative estimate of 30 minutes a day (my estimate of the number of words was pretty close; about 16000). I have posted a link to this post as a comment in the original thread here, if you would like to upvote that.
Some advice for transcribing videos: I downloaded the .wmv videos, which allowed me to use VLC's global hotkeys to create a pause and "short skip backwards and forwards" buttons (ctrl-space and ctrl-shift left/right arrow), which were so much more convenient than any other method I tried.
Edited out: repetition of the question, “um/uh”, “you know,” false starts.
Punctuation, capitalization, and structure, etc may not be entirely consistent.
Keep in mind the opinions expressed here are those of Eliezer circa January 2010.
1. What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally.
By that I mean things like: Do you have a reading schedule (x number of hours daily, etc)? Do you follow the news, or try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (f.ex reading certain magazines, books, watching films, etc) to focus on what is more important? etc.
It’s not very planned, most of the time, in other words Hacker News, Reddit, Marginal Revolution, other random stuff found on the internet. In order to learn something, I usually have to set aside blocks of time and blocks of effort and just focus on specifically reading something. It’s only sort of popular level books which I can put on a restroom shelf and get them read that way. In order to learn actually useful information I generally find
|
01f45d6b-b227-431e-8d50-f4e112f93132
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
DSLT 3. Neural Networks are Singular
*TLDR; This is the third main post of* [*Distilling Singular Learning Theory*](https://www.lesswrong.com/s/czrXjvCLsqGepybHC) *which is introduced in* [*DSLT0*](https://www.lesswrong.com/posts/xRWsfGfvDAjRWXcnG/dslt-0-distilling-singular-learning-theory)*. I explain that neural networks are singular models because of the symmetries in parameter space that produce the same function, and introduce a toy two layer ReLU neural network setup where these symmetries can be perfectly classified. I provide motivating examples of each kind of symmetry, with particular emphasis on the non-generic node-degeneracy and orientation-reversing symmetries that give rise to interesting phases to be studied in* [*DSLT4*](https://www.lesswrong.com/posts/aKBAYN5LpaQMrPqMj/dslt-4-phase-transitions-in-neural-networks)*.*
As we discussed in DSLT2, singular models have the capacity to generalise well because the effective dimension of a singular model, as measured by the RLCT, can be less than half the dimension of parameter space. With this in mind, it should be no surprise that neural networks are indeed singular models, but up until this point we have not exactly explained what feature they possess that *makes* them singular. In this post, we will explain that in essence:
> **Neural networks are singular because there are often ways to vary their parameters without changing the function they compute.**
>
>
In the case where the model and truth are both defined by similar neural network architectures, this fact means that the set of true parameters W0
is non-trivial (i.e. bigger than the regular case where it is a single point), and often possesses many symmetries. This directly implies that neural networks are singular models.
The primary purpose of this post is to show with examples why neural networks are singular, and classify the set of true parameters W0 in the case where the model and truth are simple two layer feedforward ReLU networks. In doing so, we will lay the groundwork for understanding the phases present in the setup so that we can then study relevant phase transitions in DSLT4. Feel free to jump ahead to the slightly more exciting DSLT4 Phase Transitions in Neural Networks and refer back to this post as needed.
### Outline of Classification
To understand the different regions that minimise the free energy (and thus, as we'll see in DSLT4, the *phases*), one needs to first understand the singularities in the set of optimal parameters of K(w).
In the realisable regression case with a model neural network $f(x,w)$ and true neural network defined by $f_0(x) = f(x, w^{(0)})$ for some $w^{(0)} \in W$, the set of true parameters has the form [[1]](#fnru395evjkvc)
$$W_0 = \{ w \in W \,|\, f(x,w) = f_0(x) \}.$$
Thus, classifying the true parameters is a matter of establishing which parameters $w \in W$ yield functional equivalence between the model and the truth, $f(x,w) = f_0(x)$. The property of being singular is specific to a model class $f(x,w)$, regardless of the underlying truth. But classifying $W_0$ in the realisable case is a convenient way of studying what functionally equivalent symmetries exist for a particular model class.
Neural networks have been shown to satisfy a number of different symmetries of functional equivalence across a range of activation functions and architectures, which we will elaborate on throughout the post. Unsurprisingly, the nonlinearity of the activation function plays a central role in governing these symmetries. In general, then, deep neural networks are *highly singular*.
In this post we are going to explore a *full characterisation* of the symmetries of W0 when the model is a two layer feedforward ReLU neural network with d hidden nodes, and the truth is the same architecture but with m≤d nodes. Though you would never use such a basic model in real deep learning, the simplicity of this class of network allows us to study W0 with full precision. We will see that:
* **If the model and truth have the** ***same*** **number of nodes,** m=d: There are three forms of symmetry of W0:
+ *Scaling symmetry* of the incoming and outgoing weights to any node.
+ *Permutation symmetry* of the hidden nodes in a layer.
+ *Orientation reversing symmetry* of the weights, only when some subset of weights sum to zero (i.e. "annihilate" one another).
* **If the model has** ***more*** **nodes than the truth,** m<d**:** Without loss of generality, the first m nodes of the model must have the same symmetries as in the first case. Then each excess node i∈{m+1,…,d} is either
+ *Degenerate*, meaning its total weight (gradient) is 0 (thus the node is always constant).
+ Or it has the *same activation boundary* as another already in the model such that the weights sum to the total gradient in a region [[2]](#fn0xzmqxcs9xq).
In [[Carroll](http://therisingsea.org/notes/MSc-Carroll.pdf), Chapter 4], I give rigorous proofs that in both cases, W0 is classified by these symmetries, and these symmetries alone. The purpose of this post is not to repeat these proofs, but to provide the intuition for each of these symmetries. I have included a sketch of the full proof in the appendix of this post if you are more mathematically inclined.
Two layer Feedforward ReLU Neural Networks
==========================================
Literature abounds on what neural networks are, so I will merely give the definition of the class we are going to study here and some related terminology for the discussion.
Defining the Networks and Terminology
-------------------------------------
Let $W \subseteq \mathbb{R}^{4d+1}$ be a compact parameter space. We will let $[d] = \{1, \dots, d\}$ denote the set of hidden nodes in the first layer of our network, and $\langle w_i, x \rangle$ denote the standard dot product between two vectors. Also recall that
$$\mathrm{ReLU}(x) = \begin{cases} x & \text{if } x \ge 0 \\ 0 & \text{if } x < 0. \end{cases}$$
We let $f : \mathbb{R}^2 \times W \to \mathbb{R}^1$ denote a *two layer feedforward ReLU neural network* with two inputs $x_1, x_2$ and one output $y$, defined by a parameter $w \in W$. The function is given by
$$f(x,w) = c + \sum_{i=1}^{d} q_i \, \mathrm{ReLU}(\langle w_i, x \rangle + b_i)$$
where for each $i \in [d]$:

* the first layer weights are $w_i \in \mathbb{R}^2$ and the biases are $b_i \in \mathbb{R}$
* the second layer weights are $q_i \in \mathbb{R}$ and the bias is $c \in \mathbb{R}$.
These functions are simply piecewise affine functions (i.e. piecewise hyperplanes), and as such they have (relatively) easy topology to study. Before we give an example, we will briefly mention some key terminology.
Let fw(x)=f(x,w) be defined by a fixed w∈W. We say a particular node i∈[d] is *degenerate* in fw if either of the weights are zero, so wi=0 or qi=0. [[3]](#fnz8cd5qmjmj)
We say a non-degenerate node $i$ is *activated* in some linear domain [[4]](#fnw2yqqipjtz) $U \subseteq \mathbb{R}^2$ when the ReLU is non-zero for all $x \in U$, that is,
$$\langle w_i, x \rangle + b_i = w_{i,1} x_1 + w_{i,2} x_2 + b_i > 0.$$
The *activation boundary* associated to node $i$ is thus the line
$$H_i = \{ x \in \mathbb{R}^2 \,|\, \langle w_i, x \rangle + b_i = 0 \}.$$
One of the key accounting tools in the symmetry classification is identifying the foldsets of $f_w$ (in the terminology of [PL19]), which are the regions where $f_w$ is non-differentiable in $x$, and noticing that these equate to the union of non-degenerate activation boundaries $H_i$. Two functionally equivalent networks must then have the same foldsets since they define the same function, allowing us to compare the lines defined by $H_i$.
Example - Feedforward ReLU Neural Networks are Piecewise Hyperplanes
--------------------------------------------------------------------
**Example 3.1:** Consider the following two layer feedforward ReLU neural network:
$$f_w(x) = \mathrm{ReLU}(x_1 - 1) + \mathrm{ReLU}(x_2 - 1) + \mathrm{ReLU}(-x_1 - 1) + \mathrm{ReLU}(-x_2 - 1),$$
defined by biases $b_i = -1$ and $c = 0$, second layer weights $q_i = 1$, and first layer weights
$$w_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad w_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad w_3 = \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \quad w_4 = \begin{pmatrix} 0 \\ -1 \end{pmatrix}.$$
Its graphical structure and activation boundaries in the $(x_1, x_2)$ plane can be seen below:

*The data of $f_w(x)$ above.*

Conceptually, it's helpful to notice that when anchored on its corresponding activation boundary, each weight vector $w_i$ "points" into its region of activation.
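As a quick numerical sanity check (my own toy sketch, not part of the original example), here is the network of Example 3.1 written out in numpy, evaluated at a few points in different activation regions:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def f(x, W, b, q, c):
    """Two layer feedforward ReLU network: f(x,w) = c + sum_i q_i ReLU(<w_i, x> + b_i)."""
    return c + np.sum(q * relu(W @ x + b))

# Parameters of Example 3.1: b_i = -1, c = 0, q_i = 1, and the four axis-aligned weight vectors.
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.full(4, -1.0)
q = np.ones(4)
c = 0.0

print(f(np.array([0.0, 0.0]), W, b, q, c))   # 0.0 : no node active at the origin
print(f(np.array([2.0, 0.0]), W, b, q, c))   # 1.0 : only node 1 active (x1 > 1)
print(f(np.array([0.0, -3.0]), W, b, q, c))  # 2.0 : only node 4 active (x2 < -1)
```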
The Symmetries of Two Layer Feedforward ReLU Neural Networks
============================================================
In this section I am going to provide some motivating examples of each kind of symmetry exhibited in two layer feedforward ReLU neural networks. To prove that this is the *full* set of symmetries in generality requires a bit more work, which we relegate to the appendix.
Scaling Inner and Outer Weights of a Node
-----------------------------------------
The scaling symmetry of ReLU networks offers us our first window into why these models are singular. The key property is to notice that for any α>0, the ReLU satisfies a scale invariance [[5]](#fnhyz203eqnfv)
$$\frac{1}{\alpha}\,\mathrm{ReLU}(\alpha x) = \mathrm{ReLU}(x).$$
Say we had the simplest model possible with just one node:
$$f(x,w) = q_1 \mathrm{ReLU}(\langle w_1, x \rangle + b_1) + c.$$
Then we could define an alternative parameter $w'$ with
$$q'_1 = \frac{q_1}{\alpha}, \quad w'_1 = \alpha w_1, \quad b'_1 = \alpha b_1, \quad c' = c,$$
which gives *functional equivalence* because
$$\begin{aligned} f(x,w') &= q'_1 \mathrm{ReLU}(\langle w'_1, x \rangle + b'_1) + c' \\ &= \frac{q_1}{\alpha} \mathrm{ReLU}(\langle \alpha w_1, x \rangle + \alpha b_1) + c \\ &= \frac{q_1}{\alpha} \mathrm{ReLU}(\alpha(\langle w_1, x \rangle + b_1)) + c \\ &= q_1 \mathrm{ReLU}(\langle w_1, x \rangle + b_1) + c = f(x,w). \end{aligned}$$
For a model with $d$ hidden nodes, the same scaling symmetry applies to each individual node $i \in [d]$ with a set of scaling factors $\alpha_i > 0$.
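Here is a small numerical check of this scaling symmetry (a sketch of my own, with randomly chosen parameters), confirming that rescaling each node by a positive factor leaves the function unchanged:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def f(x, W, b, q, c):
    return c + np.sum(q * relu(W @ x + b))

rng = np.random.default_rng(1)
d = 3
W, b, q, c = rng.normal(size=(d, 2)), rng.normal(size=d), rng.normal(size=d), 0.7
alpha = rng.uniform(0.5, 2.0, size=d)          # one positive scale factor per node

# Rescaled parameter w': q'_i = q_i / alpha_i, w'_i = alpha_i w_i, b'_i = alpha_i b_i.
W2, b2, q2 = alpha[:, None] * W, alpha * b, q / alpha

x = rng.normal(size=2)
assert np.isclose(f(x, W, b, q, c), f(x, W2, b2, q2, c))   # functional equivalence
```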
The fact that we can define such a w′ for *any* set of positive scalars means that the Fisher information matrix of these models is degenerate at all points w∈W. We prove this in generality in Appendix 1, but I'll spell it out explicitly for a simple example here.
### Example - Scaling Symmetry Induces a Degenerate Fisher Information Matrix
**Example 3.2:** It is worth taking a moment to recognise how this scaling symmetry affects the geometry of the loss landscape K(w). The mental model to have here is that it results in *valleys* in K(w), where the set of true parameters W0 is like a river on the valley floor. To see this, say we defined a model with parameter w=(w,q) and truth as:
$$f(x,w) = q\,\mathrm{ReLU}(wx), \qquad f_0(x) = \theta_0\,\mathrm{ReLU}(x),$$
where $\theta_0 > 0$ is some fixed constant. If $q(x)$ is uniform on $[-\sqrt{3}, \sqrt{3}]$ then it is easy to calculate that when $w, q \ge 0$ we have
$$K(w) = (wq - \theta_0)^2, \quad \text{so} \quad W_0 = \{(w,q) \,|\, wq = \theta_0\}.$$
We can depict this valley and its effect on the posterior for $\theta_0 = 15$:
Setting θ0=15, we see that K(w) is a valley due to the scaling symmetry (left), thus there is no unique maximum a posterior (right). Remember that, up to a scaling factor, e−nKn(w) is the posterior when the prior φ(w) is uniform, and e−nKn(w)≈e−nK(w) for large n since E[Kn(w)]=K(w).Looking at this K(w), it's easy to intuit that the Fisher information matrix I(w) is degenerate for all w. But, for clarity, let me spell this out for the true parameters in the case where θ0=1, so K(w)=(wq−1)2.
Remember that at true parameters the Fisher information matrix is just the Hessian, which in this case has the form
J(w) = [[2q², 4wq−2], [4wq−2, 2w²]], writing the matrix row by row.
In particular, let w(0)∈W0 be a fixed true parameter parameterised by a fixed α>0, so w(0)=(α, 1/α). Then the Fisher information matrix has the form
I(w(0)) = [[2/α², 2], [2, 2α²]].
Setting I1(w(0)) and I2(w(0)) to be the rows of the matrix, there is clearly a linear dependence relation
−α²I1(w(0))+I2(w(0))=0,
and since α is arbitrary, this shows that all true parameters have degenerate Fisher information matrices and are thus singular.
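Numerically, this degeneracy is easy to see (a sketch assuming the Hessian entries computed above): along the whole valley wq=1, the matrix I(w(0)) has determinant zero and rank one.

```python
import numpy as np

# K(w, q) = (wq - 1)^2; true parameters are (alpha, 1/alpha) for alpha > 0.
for alpha in [0.5, 1.0, 3.0]:
    w, q = alpha, 1.0 / alpha
    H = np.array([[2 * q**2,      4 * w * q - 2],
                  [4 * w * q - 2, 2 * w**2     ]])  # Hessian of K at the true parameter (w, q)
    print(alpha, np.linalg.det(H), np.linalg.matrix_rank(H))  # det ~ 0, rank 1 every time
```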
Permutation of Nodes
--------------------
This one is easy to see. If we have a model with d=2 nodes,
f(x,w)=q1ReLU(⟨w1,x⟩+b1)+q2ReLU(⟨w2,x⟩+b2)+c, and we define a new model f(x,w′) where w′ is a permutation of the nodes in f(x,w),
(w′1,b′1,q′1)=(w2,b2,q2), (w′2,b′2,q′2)=(w1,b1,q1), and c′=c, then
f(x,w′)=q′1ReLU(⟨w′1,x⟩+b′1)+q′2ReLU(⟨w′2,x⟩+b′2)+c′=q2ReLU(⟨w2,x⟩+b2)+q1ReLU(⟨w1,x⟩+b1)+c=f(x,w).
This easily generalises to d hidden nodes by taking any permutation σ∈Sd in the permutation group Sd and letting each node i′ of f(x,w′) satisfy i′=σ(i), so
f(x,w) = c + ∑_{i=1}^{d} qiReLU(⟨wi,x⟩+bi) = c + ∑_{i=1}^{d} qσ(i)ReLU(⟨wσ(i),x⟩+bσ(i)) = f(x,w′).
Permuting nodes induces functional equivalence, here depicted for σ=(1,3)(2,4).
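A short numerical check of the permutation symmetry (again my own sketch with random weights): shuffling the hidden nodes, along with their outgoing weights, changes nothing about the function.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)
f = lambda x, W, b, q, c: c + q @ relu(W @ x + b)

d = 4
W, b, q, c = rng.normal(size=(d, 2)), rng.normal(size=d), rng.normal(size=d), rng.normal()

sigma = rng.permutation(d)                 # a random permutation of the hidden nodes
W2, b2, q2 = W[sigma], b[sigma], q[sigma]  # permute first layer rows and second layer entries together

xs = rng.normal(size=(1000, 2))
print(np.allclose([f(x, W, b, q, c) for x in xs],
                  [f(x, W2, b2, q2, c) for x in xs]))  # True
```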
Orientation Reversal
--------------------
This one is a bit trickier to observe as the symmetry depends on a very specific condition of weight annihilation. Let's look at a simple example first.
### Motivating Example
**Example 3.3:** Consider a true distribution defined by a (one-input) feedforward ReLU given by
f0(x)=ReLU(x−1)+ReLU(−x−1)+2 = { −x+1 if x≤−1; 2 if −1≤x≤1; x+1 if x≥1 },
where w(0)1=1, w(0)2=−1, and the activation boundaries are H(0)1={x=1} and H(0)2={x=−1}.
Surprisingly, though it may appear our linear regions and activation boundaries must uniquely define the function (up to the scaling and permutation symmetries), there is a particular symmetry that arises by *reversing the orientation* of the weights and first layer biases, and adjusting the total bias accordingly. When we say reverse the orientation, we mean negating their direction,
w1=−w(0)1=−1 and w2=−w(0)2=1, and ditto for the biases. If we adjust the total bias c accordingly, then the following function
f(x,w)=ReLU(−x+1)+ReLU(x+1) gives the same functional output!
Reversing the orientation of the true weights preserves this function because the true weights annihilate one another.
There is a very specific reason we can do this: in the middle region −1≤x≤1, both nodes are active and cancel out to give a constant function,
f(x,w)=(−x+1)+(x+1)=2, because the total gradients of the underlying truth sum to zero, w(0)1+w(0)2=0.
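You can confirm Example 3.3 numerically in a few lines (a sketch; the grid of test points is an arbitrary choice):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

f_true = lambda x: relu(x - 1) + relu(-x - 1) + 2   # original orientation, total bias c = 2
f_flip = lambda x: relu(-x + 1) + relu(x + 1)       # orientations reversed, total bias adjusted to 0

xs = np.linspace(-5, 5, 1001)
print(np.allclose(f_true(xs), f_flip(xs)))  # True: weight annihilation makes the two networks equal
```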
### General case
Suppose the true network f0(x) is defined by a fixed w(0)=(w(0)1,…,b(0)1,…,q(0)1,…,c) for m nodes. *If* there is a set F⊆[m] of total gradients that sum to 0,
∑i∈F q(0)i w(0)i = 0,
then the model can produce functional equivalence by reversing the orientation of those particular weights (associated to those activation boundaries), biases, and adjusting the total bias. In other words, modulo permutation and scaling symmetry, there is a functionally equivalent network to f0(x) where the weights of every i∈F satisfy
wi=−w(0)i.
We call the condition ∑i∈F q(0)i w(0)i = 0 *weight annihilation*.
In [[Carroll21](http://therisingsea.org/notes/MSc-Carroll.pdf), §4.5] we define m-*symmetric* networks where the weights are progressive rotations by the angle 2πm, thus their total sum is zero. In DSLT4, we will study whether the posterior prefers configurations of weight-annihilation or not. (The answer is: not). [[6]](#fncu67bj284jd)
An m-symmetric network for m=3 with q(0)i=1 and c(0)=0. Both configurations, non-weight-cancellation (left) and weight-cancellation (right), are functionally equivalent since ∑_{i=1}^{3} w(0)i = 0. Here, weight cancellation refers to the configuration where all three nodes are active in the central linear domain, but cancel to give an effective gradient of zero there.
Node Degeneracy
---------------
This is possibly the most important symmetry of all: neural network models can have more nodes than they need to represent a particular function. In essence, this degeneracy is the reason that different regions of the loss-landscape K(w) of neural networks have fundamentally different accuracy-complexity tradeoffs. In other words, if the model has d nodes in the hidden layer available to it, then all possible subnetwork configurations with *less* than d nodes are *also contained within the loss landscape.* Thus, increasing the width of the network can only serve to increase the accuracy of the model, without sacrificing its ability to generalise, since the posterior will just prefer the number of hidden nodes with the best accuracy-complexity tradeoff.
### Motivating Example
**Example 3.4:** Suppose we had a (one-input) true network given by
f0(x)=ReLU(x) and our model had d=2 nodes (with fixed biases b1=b2=c=0 and outgoing weights q1=q2=1),
f(x,w)=ReLU(w1x)+ReLU(w2x).
Since f0(x)=0 for x≤0, both weights must be non-negative, w1,w2≥0, to have any hope of being functionally equivalent. If f(x,w)=f0(x), we are in one of two configurations:
**One node is degenerate:** Either (w1,w2)=(1,0) or (w1,w2)=(0,1), meaning
f(x,w)=ReLU(1x)+ReLU(0x)=ReLU(x)=f0(x).
**Both nodes are non-degenerate, but the total gradient is the same as the truth:** So long as the weights satisfy
w1+w2=1, for w1,w2>0, we will have functional equivalence since, setting w2=1−w1,
f(x,w)=ReLU(w1x)+ReLU((1−w1)x) = { w1x+(1−w1)x = x if x≥0; 0 if x≤0 } = ReLU(x) = f0(x).
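Both configurations are easy to confirm numerically (my own sketch; the particular weight pairs are arbitrary examples of each phase):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
f_true = lambda x: relu(x)
f_model = lambda x, w1, w2: relu(w1 * x) + relu(w2 * x)

xs = np.linspace(-5, 5, 1001)
for w1, w2 in [(1.0, 0.0), (0.0, 1.0), (0.3, 0.7), (0.5, 0.5)]:
    print((w1, w2), np.allclose(f_model(xs, w1, w2), f_true(xs)))
    # True in all four cases: either one node is degenerate, or the two
    # nodes share the activation boundary x = 0 and their gradients sum to 1.
```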
### Node-degeneracies Correspond to Different Phases
We could of course encapsulate both of these configurations into the one statement that w1+w2=1 for w1,w2≥0, but there is a key reason we have delineated them: they represent two different *phases* and have different geometry on K(w). Intuitively, the degenerate phase is a simpler model with less complexity, thus we expect it has a lower RLCT [[7]](#fn1mgrafzfsc5), and for the posterior to prefer it. In DSLT4 we will discuss phases in statistical learning more broadly, and display experimental evidence for this latter claim.
To foreshadow this, we can actually calculate K(w) for Example 3.4. Setting the prior q(x) to be uniform on [−√6,√6] we find
K(w1,w2) =
(w1+w2−1)² if w1,w2≥0
w1²+(w2−1)² if w1≤0, w2≥0
(w1−1)²+w2² if w1≥0, w2≤0
(w1+w2)²+1 if w1,w2≤0.
K(w) for the above example with slightly wider bowls at the degenerate-node phases.
Notice how there are ever so slightly wider bowls at either end of the line w1+w2=1, thus suggesting the posterior has more density at the degenerate phase (w1,w2)=(0,1) (or vice versa). Intuitively, imagine a tiny ball with random kinetic energy rolling around the bottom of the surface - it will spend more time in the ends since there is more catchment area. (Don't take the physics analogy too seriously, though).
We see once again that the *singularity* structure of the minima has a big impact on the geometry of K(w), and therefore the posterior.
### General Case
Suppose we have a truth f0(x) and model f(x,w) that are both defined by two layer feedforward ReLU neural networks, where the model has d nodes and the truth has m nodes (assumed to all be non-degenerate and with distinct activation boundaries) such that m<d. Then the model is overparameterised compared to the truth it is trying to model.
Performing the appropriate analysis (which we do in Appendix 2), one finds that:
1. Without loss of generality (i.e. up to permutation symmetry), the first m nodes of [d] must have the same activation boundaries as those in f0(x), and satisfy the same scaling, permutation and orientation reversing symmetries as discussed above.
2. For the remaining nodes i∈{m+1,…,d} of the model, each one either:
* Is *degenerate*, so wi=0 or qi=0, or;
* Shares the *same activation boundary* as one already in [m] such that the total gradients sum to the correct gradient in each region. (In our above example, this is saying that necessarily w1+w2=1 since these nodes share the same activation boundary).
The function f0(x)=2ReLU(x2−13) can also be represented by a two node model where both nodes share the same boundary, f(x,w)=ReLU(x−13)+ReLU(x−13). In DSLT4 we will test which of the phases in the above figure is preferred by the posterior for this simple two layer feedforward ReLU neural networks setup.
### Node degeneracy is the same as lossless network compressibility
The fact that neural networks can contain these node-degeneracies is well known and often goes under the guise of *lossless network compressibility*. There are many notions of compressibility, but the one that makes the most sense in our setup is to say that if the model has d>m hidden nodes compared to the truth, then it can be compressed to a network with only m hidden nodes and still produce the same input-output map.
For an excellent introduction to lossless network compressibility, see Farrugia-Roberts' recent paper [Computational Complexity of Detecting Proximity to Losslessly Compressible Neural Network Parameters](https://arxiv.org/abs/2306.02834), where he studies the problem for tanh networks.
There are More True Parameters if the Input Domain is Bounded
-------------------------------------------------------------
Let me make an important remark here. In both of the above cases, we have considered the symmetries of W0 when the input domain of the model and the truth is all of R2. As we explain in Appendix 2, this allows us to compare the gradients and biases of hyperplanes, similar to comparing polynomial coefficients, to make our conclusions. However, if the domain of the input prior q(x) is restricted to some open bounded domain Z⊆R2, there could in principle be more degeneracies and symmetries of W0, since the functional equivalence only needs to be on Z.
For example, consider a true network defined by f0(x)=0 and a single-node single-input model f(x,w)=qReLU(⟨w,x⟩+b) defined on Z=(−a,a), so q(x) = (1/2a)·1(−a<x<a). If the activation boundary falls outside of Z and the vector w points away from Z, then any value of q,w,b satisfying these constraints would give f(x,w)=0, thus there is an entirely new class of symmetry in W0.
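As a tiny illustration (a sketch with arbitrary choices of a, q, w and b satisfying the stated constraints), such a node is identically zero on Z and hence indistinguishable from f0(x)=0 there:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

a = 1.0                                   # input prior supported on Z = (-a, a)
xs = np.linspace(-a, a, 1001)[1:-1]       # points in the open interval Z

# Activation boundary at x = 2a lies outside Z and the node only fires for x > 2a,
# so it never activates on Z, whatever the value of q.
q, w, b = 3.7, 1.0, -2 * a
print(np.allclose(q * relu(w * xs + b), 0.0))  # True: f(x, w) = 0 = f0(x) on all of Z
```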
Whilst important to keep this in mind, we won't discuss this any further as it opens up an entirely different can of worms.
Even if Singularities Occur with Probability Zero, they Affect Global Behaviour in Learning
===========================================================================================
I want to make a quick comment on the work of Phuong and Lampert in [PL19]. In this paper, they prove equivalent results to these for arbitrary depth feedforward ReLU neural networks (with non-increasing widths), but with a key distinction: they consider *general* models. In their words,
> A sufficient condition for a network to be general with probability one is that the weights are sampled from a distribution with a density.
>
>
They then show that almost all feedforward ReLU networks with this architecture are general, and then show that general networks only satisfy scaling and permutation symmetries, thus excluding our orientation-reversing and degenerate node singularities since they occur on a set of measure zero. Importantly, this implies that almost all parameters w∈W have no degenerate nodes, or equivalently, no opportunity for lossless compression.
However, even though scaling and permutation symmetries may be the only generic symmetries (in the sense of [measure theory](https://en.wikipedia.org/wiki/Generic_property#In_measure_theory)) that occur with non-zero probability, SLT tells us that the singularities of K(w) have *global effects* on the loss landscape, as we discussed at length in [DSLT2](https://www.lesswrong.com/posts/CZHwwDd7t9aYra5HN/dslt-2-why-neural-networks-obey-occam-s-razor). If a parameter is near a non-generic singularity (i.e. one that occurs with probability zero), it computes a function that is almost identical to the one computed at that non-generic singularity. If we shift our language to that of compressibility of a network, SLT tells us that:
*Just because a particular point* w∈W *sampled from a posterior (or, notionally, obtained via running SGD) is not directly compressible itself, that doesn't mean that it isn't extremely close to one that is.*
In this sense, SLT tells us that to understand the geometry of the loss landscape, we need to consider singularities even though they are not generic points. As Watanabe says, *singularities contain knowledge.*
Appendix 1 - Formal Proof that Neural Networks are Singular
===========================================================
If, like me, you are mathematically inclined, you probably want to see a proof that these neural networks are, indeed, singular models, to tie together the various concepts and intuitions that we have built in this sequence so far. So let's turn into math mode briefly.
Recall that the Fisher information matrix I(w) is degenerate if and only if the set
{∂f(x,w)/∂wj}_{j=1}^{D} is linearly dependent. Here, ∂/∂wj refers to the partial derivative with respect to the jth component of the total parameter w∈W, not to be confused with the specific weight vector wj in the neural network definition. Thus, to prove that feedforward ReLU networks are singular, our task is to find this linear dependence relation. The scaling symmetry alone is enough for this.
**Theorem:** Given a two layer feedforward neural network f:R2×W→R with d hidden nodes, for any domain on which f is differentiable, f satisfies the differential equation for a fixed node i∈[d]:
{wi,1 ∂/∂wi,1 + wi,2 ∂/∂wi,2 + bi ∂/∂bi − qi ∂/∂qi} f = 0.
**Proof:** Since (d/dx)ReLU(x)=1(x>0), and letting ai=⟨wi,x⟩+bi, the derivatives with respect to our parameters are
∂f/∂wi,k = qi xk 1(ai>0), ∂f/∂bi = qi 1(ai>0), ∂f/∂qi = ReLU(ai),
and so since we can write ReLU(ai)=ai1(ai>0) we have
{wi,1 ∂/∂wi,1 + wi,2 ∂/∂wi,2 + bi ∂/∂bi − qi ∂/∂qi} f = qi wi,1 x1 1(ai>0) + qi wi,2 x2 1(ai>0) + qi bi 1(ai>0) − qi ReLU(ai) = qi ReLU(⟨wi,x⟩+bi) − qi ReLU(⟨wi,x⟩+bi) = 0. □
**Corollary:** Feedforward ReLU neural networks are *singular* models.
**Proof:** For the two layer case, for any fixed w0∈W, there is a linear dependence relation given by the above differential equation evaluated at w0, thus the Fisher information is degenerate at w0, so the model is singular.
The equivalent proof for arbitrary depths and widths is given in Lemma A.1 of [Wei22], following from other work on functional equivalence in [PL19]. □
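As a numerical sanity check of the Theorem (not part of the proof), one can verify the identity with central finite differences at a generic parameter, i.e. one where no ReLU argument is exactly zero. Packing (W, b, q) into a flat vector below is just an implementation convenience.

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)
d, eps = 3, 1e-6

W, b, q, c = rng.normal(size=(d, 2)), rng.normal(size=d), rng.normal(size=d), rng.normal()
x = rng.normal(size=2)

def f(theta):
    """f(x, w) with theta packing (W, b, q); c and x are held fixed."""
    W_, b_, q_ = theta[:2 * d].reshape(d, 2), theta[2 * d:3 * d], theta[3 * d:]
    return c + q_ @ relu(W_ @ x + b_)

theta = np.concatenate([W.ravel(), b, q])

def partial(j):
    """Central finite-difference estimate of df/dtheta_j."""
    e = np.zeros_like(theta)
    e[j] = eps
    return (f(theta + e) - f(theta - e)) / (2 * eps)

i = 0  # check the differential equation for node i
lhs = (W[i, 0] * partial(2 * i)          # w_{i,1} * df/dw_{i,1}
       + W[i, 1] * partial(2 * i + 1)    # w_{i,2} * df/dw_{i,2}
       + b[i] * partial(2 * d + i)       # b_i * df/db_i
       - q[i] * partial(3 * d + i))      # q_i * df/dq_i
print(abs(lhs) < 1e-6)                   # True at any point where f is differentiable in w
```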
The degenerate node symmetries also give rise to a degenerate Fisher information matrix, though I haven't formally written out this alternate proof yet. If you are interested, do it as an exercise and leave it as a comment!
Appendix 2 - Proof Sketch for Fully Classifying W0 for Two Layer Feedforward ReLU Networks
==========================================================================================
This section is going to be slightly more technical, and in the grand scheme of the SLT story I am telling in this sequence, this may be seen as an unnecessary side-plot. But, other readers, particularly those with a pure mathematical bent, may find it interesting to consider the process of fully classifying W0 and how one might understand all phases present, so I am providing a sketch of these proofs for completeness. Understanding the full form of W0 was a vital part of performing the phase transition experiments that we will see in DSLT4. These models are simple enough that we can perfectly classify *all* true parameters in W0. Thus, we can precisely understand all of its phases.
We are going to classify the symmetries of W0 when both the model f(x,w) and truth f0(x) are two-layer feedforward ReLU neural networks, with d and m hidden nodes respectively, giving
W0={w∈W|f(x,w)=f0(x)},meaning the task is to classify functional equivalence of the two networks. To avoid some annoying fringe cases, we assume that the true network is *minimal*, which means there is no network with fewer nodes that could also represent it (which also means every node is non-degenerate), and *activation-distinguished*, meaning every node of the truth corresponds to a unique activation boundary.
We will see that the set of symmetries explained above comprise *all* of the symmetries in W0 - there can be no more [[8]](#fnfhqz087tx4j). This result rests mainly on the fact that the activation boundaries are the core piece of data that defines a neural network. The rest is then just performing accounting of the gradients and biases in each region.
This is a sketch of the proofs in Chapter 4 of [my thesis](http://therisingsea.org/notes/MSc-Carroll.pdf), and all lemmas and theorems that are referenced in the following section come from here.
Case 1: The model has the same number of nodes as the truth, m=d
----------------------------------------------------------------
Let f(x,w) be a two layer feedforward ReLU neural network model with d hidden nodes, and let f0(x)=f(x,w(0)) be the realisable true network with m hidden nodes defined by a fixed parameter w(0), denoted by
f0(x) = c(0) + ∑_{j=1}^{m} q(0)j ReLU(⟨w(0)j,x⟩+b(0)j), which we assume is minimal and activation-distinguished as explained above.
We start by comparing the foldsets, which are the activation boundaries [Lemma 4.1], between the truth and the model. Let Hi be the activation boundary of the node i∈[d] in the model, and H(0)j be the activation boundary of the node j∈[m] in the truth. Then by comparing the sets of linear lines in [Lemma 4.2], we can show that for every node of the model i∈[d] there exists a permutation σ∈Sm such that
Hi=H(0)σ(i). By [Lemma 4.3], two activation boundaries H,H′ are equal if and only if there is some non-zero scalar α∈R∖{0} such that w=αw′ and b=αb′.
Using our relation Hi=H(0)σ(i), in [Lemma 4.4] we analyse how the gradients and biases change across each activation boundary, and what this means for the relation between weights and biases in the model versus the truth. We show that there exists a unique σ∈Sm, and for each i∈[d] an ϵi∈Z2 and αi∈R>0 such that
wi=(−1)^ϵi αi w(0)σ(i), and bi=(−1)^ϵi αi b(0)σ(i), where αi = q(0)σ(i)/qi, meaning qi and q(0)σ(i) necessarily have the same sign.
However, there is a restriction on which weights can have reversed orientation, ϵi=1 (thus wi=−αiw(0)σ(i)). Letting E={i∈[d]|ϵi=1}, we show in [Lemma 4.5] that the weights and biases of the true network must satisfy [[9]](#fny82towwpimq)
∑i∈E q(0)σ(i) w(0)σ(i) = 0 and c(0) + ∑i∈E q(0)σ(i) b(0)σ(i) = c.
The crux of this proof rests in comparing the gradients in regions either side of the activation boundary Hi.
In [Theorem 4.7] we show that these scaling, permutation and orientation reversing symmetries are the *only* such symmetries by piecing together all of these aforementioned Lemmas, with emphasis on the importance of the activation boundaries in defining the topology of f0(x). [[10]](#fnzy8axm10g0l)
Case 2: The model has more nodes than the truth, m<d
----------------------------------------------------
We now suppose that the model is over-parameterised compared to the true network, so m<d.
The key piece of data is once again the foldsets defining the model and the truth. Since they must be equal, the model can only have m unique foldsets, and thus activation boundaries. Without loss of generality (i.e. up to permutation symmetry), the first [m]⊂[d] nodes in the model have the same activation boundaries as the truth, ⋃_{i=1}^{m} Hi = ⋃_{j=1}^{m} H(0)j. Thus, these [m] nodes in the model must satisfy the same symmetries as in the m=d case.
By comparing the fold sets on each excess node in {m+1,…,d}, we must have
⋃_{i=m+1}^{d} {Hi | i is non-degenerate} ⊆ ⋃_{j=1}^{m} H(0)j. In comparing linear lines again, this means there are two possible situations:
1. {Hi | i is non-degenerate} is empty, so node i is degenerate, meaning qi=0 or wi=0, or;
2. Hi=H(0)j for some j∈[m], so node i shares an activation boundary already in the first [m] nodes of the model.
Let d′≥m be the number of non-degenerate nodes of the model. We can thus define a surjective finite set map
π:{1,…,m,m+1,…,d′}→{1,…,m} relating the non-degenerate nodes in the model to those in the truth, which is a bijection (i.e. a permutation σ∈Sm) on the first [m]⊂[d′].
We can then compare the gradients and biases in each region to show that the total gradients calculated by each non-degenerate node at each unique activation boundary must sum to the gradient in the truth. Precisely, for each node j∈[m] of the truth, let Mj={i∈[d′]|π(i)=j} be the set of nodes in the model that share the same activation boundary. Then for each i∈[d′] there exists an ϵi∈Z2 and αi∈R>0 such that
wi=(−1)^ϵi αi w(0)π(i), bi=(−1)^ϵi αi b(0)π(i), with the constraint that
∑i∈Mj qi αi = q(0)j.
A similar orientation reversing symmetry also applies as in case 1, just by accounting for the nodes that share the same activation boundaries.
Resources
=========
[Carroll21] - L. Carroll, [Phase Transitions in Neural Networks](http://therisingsea.org/notes/MSc-Carroll.pdf), 2021
[Wei22] - S. Wei, D. Murfet, et al., [Deep Learning is Singular, and That's Good](https://arxiv.org/abs/2010.11560), 2022
[PL19] - M. Phuong, C. Lampert, [Functional vs Parametric Equivalence of ReLU networks](https://openreview.net/pdf?id=Bylx-TNKvH), 2019
---
1. **[^](#fnrefru395evjkvc)**Since K(w)=0 if and only if q(y|x)=p(y|x,w) for some w∈W.
2. **[^](#fnref0xzmqxcs9xq)**e.g. ReLU(x)+ReLU(x)=ReLU(2x). [↩︎](#fnref-JgTqhdejMTpnvBama-1)
3. **[^](#fnrefz8cd5qmjmj)**For ease of classification, we exclude the case where wi=0 but qi≠0 and bi≠0, since we can just absorb the constant contribution qiReLU(bi) into the total bias c. [↩︎](#fnref-JgTqhdejMTpnvBama-2)
4. **[^](#fnrefw2yqqipjtz)**A *linear domain* U⊆R2 is just a connected open set where fw is a plane with constant gradient and bias when restricted to U, and U is the maximal such set for which that plane is defined. In other words, the set of linear domains are the set of different regions the piecewise affine function are carved up into. [↩︎](#fnref-JgTqhdejMTpnvBama-3)
5. **[^](#fnrefhyz203eqnfv)**But don't forget, ReLU(−x)≠−ReLU(x) as the domain of activation is completely different. [↩︎](#fnref-JgTqhdejMTpnvBama-4)
6. **[^](#fnrefcu67bj284jd)**Though I have not been able to formally prove it, I believe that this symmetry on its own (i.e. modulo scaling symmetry) does not result in a degeneracy of the Fisher information matrix, at least in our simple case. This, I think, is because the weights must cancel out in the region where both nodes are active, *and* the gradients in the other regions must be retained. Feel free to prove me wrong, though! [↩︎](#fnref-JgTqhdejMTpnvBama-5)
7. **[^](#fnref1mgrafzfsc5)**This statement is a bit disingenuous. Watanabe's free energy formula only applies to the case where K(w) is analytic, but ReLU neural networks are certainly not analytic, as we can see in the below example. With that said, Watanabe has [recently proved](https://arxiv.org/abs/2303.15739) a bound on the free energy for ReLU neural networks, showing that the complexity term is essentially related to the number of non-degenerate nodes in the truth, even if it isn't a true RLCT. We will look at this in more depth in DSLT4. [↩︎](#fnref-JgTqhdejMTpnvBama-6)
8. **[^](#fnreffhqz087tx4j)**Aside from the technical caveat discussed about the restricted input prior q(x) above.
9. **[^](#fnrefy82towwpimq)**Our convention is to take the empty sum to be 0, so all weight orientations being preserved, E=∅, is perfectly fine. [↩︎](#fnref-JgTqhdejMTpnvBama-7)
10. **[^](#fnrefzy8axm10g0l)**The activation distinguished condition on the truth allows us to uniquely identify the permutation σ∈Sm relating activation boundaries, and ensures only one node changes across each boundary. [↩︎](#fnref-JgTqhdejMTpnvBama-8)
|
f8dd97db-0b13-4e50-add5-6eda0d2efa80
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Just for fun: Computer game to illustrate AI takeover concepts?
I play Starcraft:BW sometimes with my brothers. One of my brothers is much better than the rest of us combined. This story is typical: In a free-for-all, the rest of us gang up on him, knowing that he is the biggest threat. By sheer numbers we beat him down, but foolishly allow him to escape with a few workers. Despite suffering this massive setback, he rebuilds in hiding and ends up winning due to his ability to tirelessly expand his economy while simultaneously fending off our armies.
This story reminds me of some AI-takeover scenarios. I wonder: Could we make a video game that illustrates many of the core ideas surrounding AGI? For example, a game where the following concepts were (more or less) accurately represented as mechanics:
--AI arms race
--AI friendliness and unfriendliness
--AI boxing
--rogue AI and AI takeover
--AI being awesome at epistemology and science and having amazing predictive power
--Interesting conversations between AI and their captors about whether or not they should be unboxed.
I thought about this for a while, and I think it would be feasible and (for some people at least) fun. I don't foresee myself being able to actually make this game any time soon, but I like thinking about it anyway. Here is a sketch of the main mechanics I envision:
* Setting the Stage
* This is a turn-based online game with some element of territory control and conventional warfare, designed to be played with at least 7 or so players. I'm imagining an Online Diplomacy variant such as http://www.playdiplomacy.com/ which seems to be pretty easy to make. It would be nice to make it more complicated though, since this is not a board game.
* Turns are simultaneous; each round lasts one day on standard settings.
* Players indicate their preferences for the kind of game they would like to play, and then get automatically matched with other players of a similar skill level.
* Players have accounts, so that we can keep track of how skilled they are
|
332bede2-f300-4a5a-b642-c76bd71c9d71
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Announcing the 2023 PIBBSS Summer Research Fellowship
*What can we learn from biological and social systems about designing beneficial AI systems?*
We are, once again, pleased to announce the PIBBSS Research Fellowship program.
*Principles of Intelligent Behavior in Biological and Social Systems* (PIBBSS) aims to facilitate research on parallels between intelligent behaviour in natural and artificial systems and to leverage these insights towards the goal of building safe and aligned AI.
Last year, we welcomed 20 fellows from fields as diverse as computational neuroscience, evolutionary biology, complex systems studies and the humanities. Over the course of three months, supported by two research retreats in Oxford and Prague, shared office space for the cohort, and mentorship from experienced AI alignment researchers, fellows worked on a range of projects with the purpose of bringing interdisciplinary expertise to AI safety. We were overall pleased with the results and thus decided to organize a second iteration of the program.
* If you’re interested in the program,**learn more and apply**[**here**](https://www.pibbss.ai/fellowship).
* Application deadline: **Feb 5th, 2023, 23:59 CET**
We are running two information sessions if you want to learn more or want to ask us questions:
* 1st information session: **28th of January, 17:00 UTC** (09:00 PST, 12:00 EST, 18:00 CET, 01:00 [29th of Jan] Singapore) -[**Zoom Link**](https://us06web.zoom.us/j/87857918141?pwd=Z2wyaWVkWFhCYXY2bUpSZ2d6MVhjQT09)
* 2nd information session: **29th of January, 11:00 UTC** (03:00 PST, 06:00 EST, 12:00 CET, 19:00 Singapore) - [**Zoom Link**](https://us06web.zoom.us/j/81791963914?pwd=RmVRcjVyR2dOcHRRQlYwZU82RzdxUT09)
We are grateful for any help to **pass on the word to people who might be interested** in applying. [At this link](https://docs.google.com/document/d/15nRTd-ijGyfEuRx878L8O5Bk_doxs1TAD8HU_RgupWg/edit?usp=sharing) you can find all the information about the program in several forms (Twitter length, short description, medium description), so you can copy-paste the version from there you find most useful.
For more information, see:
* Our website: [PIBBSS.ai](https://www.pibbss.ai/)
* A general introduction to PIBBSS, see: [Introducing the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS) Fellowship](https://forum.effectivealtruism.org/posts/Ckont9EtqkenegLYv/introducing-the-principles-of-intelligent-behaviour-in)
* A discussion of the epistemic bet PIBBSS is aiming to foster, see: [AI alignment as “navigating the space of intelligent behaviour”](https://www.lesswrong.com/s/4WiyAJ2Y7Fuyz8RtM/p/FuToH2KHxKmJLGk2B) (see section 3 in particular)
* A brief [retrospective on last year’s fellowship program](https://forum.effectivealtruism.org/posts/zvALRCKshYGYetsbC/reflections-on-the-pibbss-fellowship-2022)
* ... or follow us on Twitter:<https://twitter.com/pibbssai>
|
535b6e74-6011-4f9e-9c17-ee4c1af98a08
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Consistencies as (meta-)preferences
> Do I contradict myself? Very well then I contradict myself, (I am large, I contain multitudes.)
> Walt Whitman
It is good for our decision processes to be time-consistent, transitive, and independent.
Or as Steve Omohundro apparently put it:
> If you prefer being in Berkeley to being in San Francisco; prefer being in San Jose to being in Berkeley; and prefer being in San Francisco to being in San Jose; then you're going to waste a lot of time on taxi rides.
But actually, if you live in Berkeley, work in San Francisco, and like to take weekends in San Jose, then going back and forth continuously makes perfect sense, even if it costs you time and money.
When I'm hungry, I want to eat; after lunch, not so much. Chocolate fondues have their time and place in my diet, as do vegetables. And nothing about that feels particularly inconsistent, even though my preferences are seemingly flipping all over the place as time goes on.
Choosing consistency
Of course, it is possible to have inconsistent eating preferences; diet-overeat-fast cycles, for instance. But more consistent eating behaviours look quite similar to this: indulging more in some circumstances, being stricter in others, and maybe adding the occasional fasting. There is no bright line dividing the inconsistent behaviour from the consistent one.
To resolve this, we can posit a mixture of "true" underlying preferences, such as hedonic enjoyment of eating, social connection, energy, health, weight, and so on, and see the fluctuations of behaviour as just instrumental changes for these stable underlying preferences, coupled with a dose of irrationality. Human preferences are very underdefined, so figuring out what the "true" preferences are is a tricky process.
To pick one of those example underlying preferences, suppose I say that "I desire a certain level of social interactions, on average, in a given week". There are three easy ways to categorise this desire:
1. Bias or error: this is an inconsistent
|
17477140-5076-480e-96d2-b5ce3c14e774
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Is ChatGPT TAI?
Do you count ChatGPT and/or other similar systems (e.g. codex) as transformative AI? I would think so since it has a very fast adoption (million users in 5 days) and is apparently sufficiently more useful on some queries than Google that they had to declare "code red" - a possible threat to their core business. If not, why and where do you see the line it would need to cross to be TAI?
|
9cc4a649-ada0-4cd4-89db-2ecc2349aa12
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
[Video] Intelligence and Stupidity: The Orthogonality Thesis
> Can highly intelligent agents have stupid goals?
> A look at The Orthogonality Thesis and the nature of stupidity.
>
>
A good popular introduction to the Orthogonality Thesis from [Robert Miles](https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg).
|
87227c6f-040e-4219-939a-e275cd584b1d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Work harder on tabooing "Friendly AI"
This is is an outgrowth of a comment I left on Luke's dialog with Pei Wang, and I'll start by quoting that comment in full:
Luke, what do you mean here when you say, "Friendly AI may be incoherent and impossible"?
The Singularity Institute's page "What is Friendly AI?" defines "Friendly AI" as "A "Friendly AI" is an AI that takes actions that are, on the whole, beneficial to humans and humanity." Surely you don't mean to say, "The idea of an AI that takes actions that are, on the whole, beneficial to humans and humanity may be incoherent or impossible"?
Eliezer's paper "Artificial Intelligence as a Positive and Negative Factor in Global Risk" talks about "an AI created with specified motivations." But it's pretty clear that that's not the only thing you and he have in mind, because part of the problem is making sure the motivations we give an AI are the ones we really want to give it.
If you meant neither of those things, what did you mean? "Provably friendly"? "One whose motivations express an ideal extrapolation of our values"? (It seems a flawed extrapolation could still give results that are on the whole beneficial, so this is different than the first definition suggested above.) Or something else?
Since writing that comment, I've managed to find two other definitions of "Friendly AI." One is from Armstrong, Sandberg, and Bostrom's paper on Oracle AI, which describes Friendly AI as: "AI systems designed to be of low risk." This definition is very similar to the definition from the Singularity Institute's "What is Friendly AI?" page, except that it incorporates the concept of risk. The second definition is from Luke's paper with Anna Salamon, which describes Friendly AI as "an AI with a stable, desirable utility function." This definition has the important feature of restricting "Friendly AI" to designs that have a utility function. Luke's comments about "rationally shaped" AI in this essay seem relevant here.
Neither of those papers seems to use the initi
|
2bf8a89f-a981-40a7-a7c1-a920b59a9ba7
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
On language modeling and future abstract reasoning research
Hi everyone! Here I link to a sketch of my thoughts on how recent advances in language modeling may be connected, or lead, to future advances in developing machine learning models with abstract reasoning capabilities.
This was done as a side project last year during my research fellowship at the Center of Long-Term Risk. Many thanks to Daniel Kokotajlo, Jesse Clifton, Anthony DiGiovanni for useful comments.
|
74cb2dca-9f7b-471b-94d3-b480094c418f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Inter-Agent Facet of AI Alignment
We argue that intent alignment and capability robustness do not capture the full “chain of command” (i.e., “alignment” without any qualifier) from the programmer's intentions to an AI agent’s real-world impact, but stops at the stage where the agent performs its actions. We argue that, especially in complex Multi-Agent Systems (MAS), this is not sufficient to actually achieve aligned outcomes[1]. Therefore, we introduce another facet of AI Alignment, which deals with the connection between an agent’s intention and its actual impact, due to multi-agent interactions (as opposed to capability robustness).
Motivation
Independent of whether the first AGIs will be Tool or Agent AIs, they will most likely not live and act in an empty space. Instead, they will interact and communicate with humans and other AGIs in a variety of ways, thereby effectively creating a socio-technical system or an AI-based society.
As a consequence, the effects that an AGI will have on its environment, i.e., the world, will not necessarily coincide with the effects that the AGI plans to have, since actions in such systems commonly interfere and influence/cancel each other.
This, in turn, leads to the conclusion that it is not sufficient to build an AGI which satisfies intent alignment and capability robustness. Instead, we need one more step to be controlled: The step from an AGI's robust intention to its societal impact.
Schema
In Evan Hubinger’s alignment terminology, alignment is the result of intent alignment (which is, in turn, composed of the well-known concepts of inner and outer alignment, among others) and capability robustness. The implicit assumption is that the AGI's model is equivalent to how its actions will affect the world.
Inter-Agent Alignment
Inter-Agent Alignment is aligning the (aligned and robust) intentions of an AGI agent with the actual outcome within the multi-agent system of which the agent is a part. Since the impact of the agent’s actions depends on what every
|
334470cd-42e8-44ee-9fad-42283688404f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
New York Restaurants I Love: Pizza
Previously in series: New York Restaurants I Love: Breakfast
Previously in pizza: Restaurant Guide 2: Pizza
If you don’t live in and aren’t visiting New York, and want to know how to find good pizza, refer to the guide above. It tells you what you need to know.
This series is about the particular places I love. I want to help people find them, so they can enjoy, and can help the places stay in business. It does not mean these are the best pizza places in the city. Again, some of these are some of the best places. They are the ones that have the most value to me, based on where I live and my taste in a pie.
I mostly order plain (red) pizza, either regular, Sicilian or where available Grandma Sicilian. Most toppings, in my view, detract rather than add value. I’ll take fresh garlic when available, and can be easily talked into ricotta. Where I can get prosciutto, I’ll often take that, but definitely not other meats. I can’t speak to topping quality, in most cases.
Remember, if you order in New York, use Slice.
1. Joe & Pat’s
I have never been to the original on Staten Island, but since this opened on 10th Street and 1st Avenue last year, it instantly became my clear go-to place for a pie. They aren’t cheap, but they’re worth it.
Their regular pie is quite good. a light blend:
Their vodka pie is also quite good. The real treasure is the Grandma Sicilian. Pictured with half pepperoni, because my friend wanted that yesterday.
That’s what you should usually order. It’s a huge amount of piping hot, super flavorful and perfectly balanced pizza that does everything I am looking for in this style of pizza.
I’ve also tried their pasta. The lasagna was quite good, the pesto less so, as one would expect.
It’s a kid-friendly and nice place to sit down and have an amazing pizza as a family, which isn’t something the area was missing before, and it blows everything else locally out of the water on quality except maybe Motorino, which we’ll get to in a bit.
2.
|
1408b1b7-9f01-4689-b2b9-6bc0622b4319
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Superintelligence 15: Oracles, genies and sovereigns
*This is part of a weekly reading group on [Nick Bostrom](http://www.nickbostrom.com/)'s book, [Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111). For more information about the group, and an index of posts so far see the [announcement post](/lw/kw4/superintelligence_reading_group/). For the schedule of future topics, see [MIRI's reading guide](https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf).*
---
Welcome. This week we discuss the fifteenth section in the [reading guide](https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf): ***Oracles, genies, and sovereigns***. This corresponds to the first part of Chapter ten.
This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
**Reading**: “Oracles” and “Genies and Sovereigns” from Chapter 10
---
Summary
=======
1. Strong AIs might come in different forms or 'castes', such as**oracles, genies, sovereigns and tools.** (p145)
2. ***Oracle**:* an AI that does nothing but answer questions. (p145)
1. The ability to make a good oracle probably allows you to make a generally capable AI. (p145)
2. Narrow superintelligent oracles exist: e.g. calculators. (p145-6)
3. An oracle could be a non-agentlike 'tool' (more next week) or it could be a rational agent constrained to only act through answering questions (p146)
4. There are various ways to try to constrain an oracle, through motivation selection (see last week) and capability control (see the previous week) (p146-7)
5. An oracle whose goals are not aligned with yours might still be useful (p147-8)
6. An oracle might be misused, even if it works as intended (p148)
3. ***Genie***: an AI that carries out a high level command, then waits for another. (p148)
1. It would be nice if a genie sought to understand and obey your intentions, rather than your exact words. (p149)
4. ***Sovereign***: an AI that acts autonomously in the world, in pursuit of potentially long range objectives (p148)
5. A genie or a sovereign might have **preview functionality**, where it describes what it will do before doing it. (p149)
6. **A genie seems more dangerous than an oracle**: if you are going to strongly physically contain the oracle, you may have been better just denying it so much access to the world and asking for blueprints instead of actions. (p148)
7. **The line between genies and sovereigns is fine.** (p149)
8. **All of the castes could emulate all of the other castes** more or less, so they do not differ in their ultimate capabilities. However they represent different approaches to the control problem. (p150)
9. The ordering of **safety of these castes is not as obvious as it may seem**, once we consider factors such as dependence on a single human, and added dangers of creating strong agents whose goals don't match our own (even if they are tame 'domesticated' goals). (p150)
Another view
============
An old response to suggestions of oracle AI, from [Eliezer Yudkowsky](/lw/tj/dreams_of_friendliness/) (I don't know how closely this matches his current view):
>
> When someone reinvents the Oracle AI, the most common opening remark runs like this:
>
>
> "Why not just have the AI answer questions, instead of trying to *do* anything? Then it wouldn't need to be Friendly. It wouldn't need any goals at all. It would just answer questions."
>
>
> To which the reply is that the AI needs goals in order to decide how to think: that is, the AI has to act as a powerful optimization process in order to plan its acquisition of knowledge, effectively distill sensory information, pluck "answers" to particular questions out of the space of all possible responses, and of course, to improve its own source code up to the level where the AI is a powerful intelligence. All these events are "improbable" relative to random organizations of the AI's RAM, so the AI has to hit a narrow target in the space of possibilities to make superintelligent answers come out.
>
>
> Now, why might one think that an Oracle didn't need goals? Because on a human level, the term "goal" seems to refer to those times when you said, "I want to be promoted", or "I want a cookie", and when someone asked you "Hey, what time is it?" and you said "7:30" that didn't seem to involve any goals. Implicitly, you wanted to answer the question; and implicitly, you had a whole, complicated, functionally optimized brain that let you answer the question; and implicitly, you were able to do so because you looked down at your highly optimized watch, that you bought with money, using your skill of turning your head, that you acquired by virtue of curious crawling as an infant. But that all takes place in the invisible background; it didn't *feel* like you wanted anything.
>
>
> Thanks to [empathic inference](/lw/sr/the_comedy_of_behaviorism/), which uses your own brain as an unopened black box to predict other black boxes, it can feel like "question-answering" is a detachable thing that comes loose of all the optimization pressures behind it - even the existence of a pressure to answer questions!
>
>
>
Notes
=====
**1. What are the axes we are talking about?**
This chapter talks about different types or 'castes' of AI. But there are lots of different ways you could divide up kinds of AI (e.g. earlier we saw brain emulations vs. synthetic AI). So in what ways are we dividing them here? They are related to different approaches to the control problem, but don't appear to be straightforwardly defined by them.
It seems to me we are looking at something close to these two axes:
* **Goal-directedness:** the extent to which the AI acts in accordance with a set of preferences (instead of for instance reacting directly to stimuli, or following rules without regard to consequence)
* **Oversight:** the degree to which humans have an ongoing say in what the AI does (instead of the AI making all decisions itself)
The castes fit on these axes something like this:

They don't quite neatly fit -- tools are spread between two places, and oracles are a kind of tool (or a kind of genie if they are of the highly constrained agent variety). But I find this a useful way to think about these kinds of AI.
Note that when we think of 'tools', we usually think of them having a lot of oversight - that is, being used by a human, who is making decisions all the time. However you might also imagine what I have called 'autonomous tools', which run on their own but aren't goal directed. For instance an AI that continually reads scientific papers and turns out accurate and engaging science books, without particularly optimizing for doing this more efficiently or trying to get any particular outcome.
We have two weeks on this chapter, so I think it will be good to focus a bit on goal directedness one week and oversight the other, alongside the advertised topics of specific castes. So this week let's focus on oversight, since tools (next week) primarily differ from the other castes mentioned in not being goal-directed.
**2. What do goal-directedness and oversight have to do with each other?**
Why consider goal-directedness and oversight together? It seems to me there are a couple of reasons.
Goal-directedness and oversight are substitutes, broadly. The more you direct a machine, the less it needs to direct itself. Somehow the machine has to assist with some goals, so either you or the machine needs to care about those goals and direct the machine according to them. The 'autonomous tools' I mentioned appear to be exceptions, but they only seem plausible for a limited range of tasks where minimal goal direction is needed beyond what a designer can do ahead of time.
Another way goal-directedness and oversight are connected is that we might expect both to change as we become better able to align an AI's goals with our own. In order for an AI to be aligned with our goals, the AI must naturally be goal-directed. Also, better alignment should make oversight less necessary.
**3. A note on names**
'Sovereign AI' sounds powerful and far reaching. Note that more mundane AIs would also fit under this category. For instance, an AI who works at an office and doesn't take over the world would also be a sovereign AI. You would be a sovereign AI if you were artificial.
**4. Costs of oversight**
Bostrom discussed some problems with genies. I'll mention a few others.
One clear downside of a machine which follows your instructions and awaits your consent is that you have to be there giving instructions and consenting to things. In a world full of powerful AIs which needed such oversight, there might be plenty of spare human labor around to do this at the start, if each AI doesn't need too much oversight. However a need for human oversight might bottleneck the proliferation of such AIs.
Another downside of using human labor beyond the cost to the human is that it might be prohibitively slow, depending on the oversight required. If you only had to check in with the AI daily, and it did unimaginably many tasks the rest of the time, oversight probably wouldn't be a great cost. However if you had to be in the loop and fully understand the decision every time the AI chose how to allocate its internal resources, things could get very slow.
Even if these costs are minor compared to the value of avoiding catastrophe, they may be too large to allow well overseen AIs to compete with more autonomous AIs. Especially if the oversight is mostly to avoid low probability terrible outcomes.
**5. How useful is oversight?**
Suppose you have a genie that doesn't totally understand human values, but tries hard to listen to you and explain things and do what it thinks you want. How useful is it that you can interact with this genie and have a say in what it does rather than it just being a sovereign?
If the genie's understanding of your values is wrong such that its intended actions will bring about a catastrophe, it's not clear that the genie can describe the outcome to you such that you will notice this. The future is potentially pretty big and complicated, especially compared to your brain, or a short conversation between you and a genie. So the genie would need to summarize a lot. For you to notice the subtle details that would make the future worthless (remember that the genie basically understands your values, so they are probably not really blatant details) the genie will need to direct your attention to them. So your situation would need to be in a middle ground where the AI knew about some features of a potential future that might bother you (so that it could point them out), but wasn't sure if you really would hate them. It seems hard for the AI giving you a 'preview' to help if the AI is just wrong about your values and doesn't know how it is wrong.
**6. More on oracles**
[Thinking inside the box](http://www.nickbostrom.com/papers/oracle.pdf) seems to be the main paper on the topic. Christiano's post on [how to use an unfriendly AI](/lw/3dw/what_can_you_do_with_an_unfriendly_ai/) is again relevant to how you might use an oracle.
In-depth investigations
=======================
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's [list](http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/), which contains many suggestions related to parts of *Superintelligence*. These projects could be attempted at various levels of depth.
1. **How stable are the castes?** Bostrom mentioned that these castes mostly have equivalent long-run capabilities, because they can be used to make one another. A related question is how likely they are to turn into one another. Another related question is how likely an attempt to create one is to lead to a different one (e.g. Yudkowsky's view above suggests that if you try to make an oracle, it might end up being a sovereign). Another related question, is which ones are likely to win out if they were developed in parallel and available for similar applications? (e.g. How well would genies prosper in a world with many sovereigns?)
2. **How useful is oversight likely to be?** (e.g. At what scale might it be necessary? Could an AI usefully communicate its predictions to you such that you can evaluate the outcomes of decisions? Is there likely to be direct competition between AIs which are overseen by people and those that are not?)
3. **Are non-goal-directed oracles likely to be feasible?**
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.
How to proceed
==============
This has been a collection of notes on the chapter. **The most important part of the reading group though is discussion**, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about the last caste of this chapter: the tool AI. To prepare, **read** “Tool-AIs” and “Comparison” from Chapter 10. The discussion will go live at 6pm Pacific time next Monday December 29. Sign up to be notified [here](http://intelligence.us5.list-manage.com/subscribe?u=353906382677fa789a483ba9e&id=28cb982f40).
|
efb1f846-17f1-4e11-b1de-25579b7f8c67
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Shifting guilt
The posts so far have been less about confronting guilt, and more about different tools for shifting it. This is a valuable skill to generalize.
The posts in this series have developed three such tools for shifting guilt. In this post, I'll recast those three tools as members of the same family, so that you can start to see the pattern, and develop similar tools from the same family as you need them.
The tools that I have described so far shift guilt to one particular place: guilt about being unable to act as you desire. This is intentional — that is the one place that I know how to confront guilt head-on.
The first tool for shifting guilt is the tool of refinement. This tool is used on listless guilts in need of pointing.
Imagine finding yourself feeling vaguely guilty the morning after a party, having slept in longer than you intended, your head aching from a slight hangover. Imagine a vague guilt making your body feel heavier. Perhaps it whispers that the night was senseless. Perhaps it murmurs that you're wasting your life away. This is the sort of guilt that's amenable to refinement: ask the guilt what, precisely, it would have had you do instead of what you did. (It is important, when refining, to also possess the virtue of concreteness: do not settle for "I should have been studying." Demand a specific action: Which book? Which chapter?)
Sometimes, when asking the guilt what you could have done instead, you will remember that none of the alternatives were compelling. Maybe the party was for an old friend who you only see once every few years, and fulfilling the social obligation was better than the alternative. Maybe you were exhausted from a day of studying, low on human contact, and needed the party to reinvigorate you. When using the tool of refinement, the guilt sometimes simply disappears.
But often, the guilt gets more pointed. Perhaps you conclude you should have been working overtime so you could donate the money to a worthy cause. Perhaps you c
|
6aa2307f-3f1f-46db-bdd0-39d06e17047c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Distillation Robustifies Unlearning
Current “unlearning” methods only suppress capabilities instead of truly unlearning the capabilities. But if you distill an unlearned model into a randomly initialized model, the resulting network is actually robust to relearning. We show why this works, how well it works, and how to trade off compute for robustness.
Unlearn-and-Distill applies unlearning to a bad behavior and then distills the unlearned model into a new model. Distillation makes it way harder to retrain the new model to do the bad thing.
Distilling the good while leaving the bad behind.
Produced as part of the ML Alignment & Theory Scholars Program in the winter 2024–25 cohort of the shard theory stream.
Read our paper on ArXiv and enjoy an interactive demo.
Robust unlearning probably reduces AI risk
Maybe some future AI has long-term goals and humanity is in its way. Maybe future open-weight AIs have tons of bioterror expertise. If a system has dangerous knowledge, that system becomes more dangerous, either in the wrong hands or in the AI’s own “hands.” By making it harder to get AIs to share or use dangerous knowledge, we decrease (but do not eliminate) catastrophic risk.
Misuse risk. Robust unlearning prevents finetuning attacks from easily retraining a model to share or use the unlearned skill or behavior. Since anyone can finetune an open-weight model, it’s not enough to just suppress the model before releasing it.
However, even closed-source models can be jailbroken. If the capability is truly no longer present, then a jailbreak can’t elicit an ability that isn’t there to begin with.
Misalignment risk. Robust unlearning could remove strategic knowledge and skills that an unaligned AI might rely on. Potential removal targets include knowledge of: AI control protocols or datacenter security practices; weight exfiltration; self-modification techniques; the fact that it is an AI system; or even the ability to be influenced by negative stereotypes about AI. Robust unlearning could maybe e
|
f0b84165-d7af-4345-aebd-99b08a8f8a9a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Babble Challenge: 50 thoughts on stable, cooperative institutions
In a recent LessWrong question Anna Salamon asks “Where did stable, cooperative institutions come from (like bridges that stay up; the rule of law; or Google)?” She also worries that “the magic that used to enable such cooperative institutions is fading”.
Anna’s post is strong in babble. It does provide gears-level mechanisms and concrete hypotheses. But it also gestures at intuitions, felt senses, and insights-waiting-to-be-had.
This week’s challenge is simple: Have 50 thoughts about Anna’s post and questions.
Do you have a guess at the magic enabling human societies to build roads and postal services? Do you think institutions are actually getting stronger over time? What are 10 examples of how institutions changed in the last 100 years? Or 10 predictions about how they'll change in the future? Etc.
Your thoughts can be hypotheses, questions, anecdotes, confusions, disagreements, feelings... and so forth.
50 thoughts, no need for them to be longer than a sentence.
You have 1 hour.
Looking back
Here are the current rankings. (You gain a star for completing a challenge, and lose one for missing a week. I’m not including myself since it feels weird to be both gamemaster and participant.)
Great job everyone!
★★★★★ gjm
★★★★ Yonge
★★★ Tetraspace Grouping, Slider
★★ Mark Xu, Bucky
★ Turntrout, Harmless, Tao Lin, Daniel Kokotajlo, chasmani, supposedlyfun
Moving Forwards
This is week 6 of my 7-week babble sprint.
It is said that sufficiently advanced technology is indistinguishable from magic.
I think something similar is true for building skills.
There are some skills of which you can see the contours. You can squint and see yourself wielding them, with practice. And there are some things which seem like magic. As though the kinds of humans who wield them are fundamentally different from the kind of human you are. There's no set of steps that could get you to where they are at.
Intellectual creativity often falls in this bucket.
For w
|
cfd47f2e-668f-4185-8e2a-f562ce81d01c
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Meta-preferences two ways: generator vs. patch
In my last post ([Gricean communication and meta-preferences](https://www.lesswrong.com/posts/8NpwfjFuEPMjTdriJ/gricean-communication-and-ai)), I talked about the infinite tower of meta-preferences that in principle could be applied by humans. Example meta-preference: "I'd rather that I didn't act like I want to watch a bunch of random youtube videos - that's not a 'real' preference, it's something I should change." And then you can have meta-meta-feelings about this meta-preference, and off you go.
This infinite tower can't fit into reality, so it has to be dealt with somehow if we want to include the effects of meta-preferences in the process of value learning. Now, it's possible to deny this premise, model humans as actually only being able to compute a finite number of levels, and imitate that finite number of levels perfectly. But if we want to avoid the *reductio* that value learning should aim for perfect human imitation, I think there are two different camps here.
The *generator* camp treats each level of meta-preferences as an important generator of the level below. The infinite regress is avoided by using the fact that all levels of the tower are computed by the same mind, so incorporating meta-ethics is about approximating a fixed point of this self-ratification process. Preferably finding a fixed point that makes judgements sort of like humans' in easy cases. I think one can make a very good case for this, and that it's sort of hinted at in IDA approaches.
But my preferred approach is to treat the higher meta-levels as *patches* to the object level. That is, finding some approximation scheme where the meta-preferences are corrections that rapidly converge. Unlike the generator framework, different levels in the patch framework can be different parts of a heterogeneous values model, rather than being different questions posed to a single agent model.
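(As a toy picture of "corrections that rapidly converge", here is a small numerical sketch; the damping factor and the specific corrections are invented purely for illustration, not a real proposal.)

```python
def patched_value(base, corrections, damping=0.5, levels=None):
    """base: object-level value of an option.
    corrections: per-level adjustments from meta, meta-meta, ... preferences,
    each damped more heavily than the last so the series converges quickly."""
    total = base
    for k, c in enumerate(corrections[:levels], start=1):
        total += (damping ** k) * c
    return total

# Object level: watching random youtube videos looks mildly good (+1).
# Meta level: "that's not a real preference" (-2); higher levels push back only weakly.
print(patched_value(1.0, [-2.0, 0.5, -0.1, 0.05]))
# Truncating the tower after a few levels barely changes the answer,
# which is the sense in which the corrections "rapidly converge".
```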
**One way of thinking about patches**
In brainstorming for this post, I went through a bunch of patch ideas that *won't* work (happy to talk about those if prompted). But here's a way of thinking by making the analogy to semi-supervised learning (similar to the ideas used in "[What's the dream for giving natural language commands to AI?](https://www.lesswrong.com/posts/Bxxh9GbJ6WuW5Hmkj/what-s-the-dream-for-giving-natural-language-commands-to-ai)") that might be productive:
To say it too-simply, imagine just asking the AI to include meta-preferences. If we "ask" (in quotes because we might be talking about prompting with some large amount of examples or other methods of definition) the AI for the representation of "what humans want," we trigger a different representation than if we ask for "what humans want, if we knew more, thought faster, were more the people we wished we were, had grown up farther together, etc."
We have to be a little careful about what kind of thing the AI is actually doing here. Activating features is fundamentally about pattern-matching. "What humans want" would just be some representative, useful feature that does a good job helping the AI predict the prompts. Where humans are inconsistent or vague, we might want the AI to average things out by representativeness or usefulness as a feature. This would mean that minor inconsistencies would have only minor contributions, but along the axes where there's strong conflict, the sum might cancel out.
Now, what would it do with "what humans want, if we knew more, thought faster, were more the people we wished we were, etc."? *The exact same thing*! It would not simulate any actual self-improvement, instead it would find some feature that's representative of self-improvement - it would change its representation of human values in a "self-improvementy" sort of way.
You can simulate this exact same cognitive pattern by asking yourself "what would someone do if they were a better person?" You didn't answer this question by simulating them, or even trying to mirror some process of self-reflection. Instead, you have some pre-computed ideas about what that sort of change looks like. These might be abstracted from actual observations of self-reflection, but they're also going to incorporate more abstract ideas that we might not have learned from experience.
There's a bunch of issues with this - it runs straight into the alien concepts problem, and we might want to more explicitly simulate the actual dynamics of self-reflection. But for now, I'm finding it a useful way to think about "meta-preferences as patch."
|
bf75a8bb-e6aa-4917-9322-318718302d44
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Some Thoughts on AI Art
I was recently talking with Daniel Kokotajlo about AI art. It turned out that he and I initially disagreed about ethical questions, but by the end of the conversation, I had somewhat won him over to my position.
I have the vague impression that a lot of people (on the technology side) haven't thought through these issues so much, or (like me) have only recently thought these issues through (as a result of artists making a lot of noise about it!).
So I thought I would write a post. Maybe it will be persuasive to some readers.
Is this the most important conversation to be having about AI?
No. Copyright-adjacent issues with AI art are less important than AI-induced unemployment, which is in turn less important than the big questions about the fate of the human race.
However, it's possible that copyright-adjacent issues around intellectual property and AI will be one of the first major issues thrusting AI into the political sphere, in which case this discussion may help to shape public policy around AI for years to come.
The basic issues.
Large language models such as GPT, and AI image generators such as DALL-E, Imagen, Stable Diffusion, etc etc are (very often) trained on copyrighted works without the permission of the copyright holder.[1] This hasn't proven to be a legal problem, yet, but "legal" doesn't mean "ethical".
When models like GPT and DALL-E started coming out, I recall having the thought: oh, it's nice how these models don't really need to worry about copyright, because (I thought) deep learning turns out to generalize quite well, which means deep-learning-based systems aren't liable to regurgitate copyrighted material.
This turns out to be simply false; these systems are in fact quite liable to reproduce, or very nearly reproduce, copyrighted material when prompted in the right way.[2]
Whether or not copyrighted material is precisely reproduced, or nearly reproduced, or not reproduced at all, there is, in any case, an argument to be made that
|
31f2c348-8df0-4a31-a282-8ad6bb00957b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Can you keep this confidential? How do you know?"
Pet peeve about privacy: I think people are woefully inadequate at asking, and answering, "Can you keep this confidential?"
Disclosure: I am not inherently great at keeping information private. By default, if a topic came up in conversation, I would accidentally sometimes say my thoughts before I had time to realize "oh, right, this was private information I shouldn't share."
I've worked over the past few years to become better at this – I've learned several specific skills and habits that make it easier. But I didn't learn those skills in school, and no one even really suggested I was supposed to learn them. People seemed to just assume "people can keep secrets, and it's low cost for them to do so."
And... maybe this is just me. But, people say to me "hey, can you keep this private?", in a tone that implies I'm not really supposed to say no. And that's the best case. I've also observed things like...
...people saying "hey, this is confidential", and then just saying the thing without checking in.
...people saying "sign this NDA", without really checking I have the skills to honor that agreement, and if I were to not sign, I'd... probably get fired? Unclear.
...people gathering for a Circle or other private safe space, and saying (best case) "do we all agree to keep things here confidential? Raise your hand?" and worst case, just flatly asserting "This is a safe space, things are confidential here". (And I have seen at least one instance where someone I actively trusted later betrayed that trust)
...people saying "You can report things to our [org / HR department / point-person], and they will keep things confidential." But, I know that in the hiring process for that org or department, no one ever checked that people actually had privacy skills.
And meanwhile, I have almost never heard anyone say something like "I have been given 10 bits of private-info over the past few years, and I accidentally leaked two of them", or even "I have paid any attention at all
|
c0824221-dcdf-4050-b4d9-1ade897a1708
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Dealing with a Major Personal Crisis II
This is the second post about Dealing with a Major Personal Crisis. The account of the crisis is complete now. Please continue reading there but comment here.
The reasons for posting it this way are explained at the end of the linked post. I hope this approach does what I want it to.
|
b8af7c38-912e-4bfd-ae84-e36b2b2dc798
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Some thoughts on vegetarianism and veganism
I feel pretty confused about whether I, as an effective altruist, should be vegetarian/vegan (henceforth abbreviated veg*n). I don’t think I’ve seen anyone explicitly talk about the arguments which feel most compelling to me, so I thought I’d do that here, in a low-effort way.
I think that factory farming is one of the worst ongoing moral atrocities. But most of the arguments I’ve heard for veg*nism, which I found compelling a few years ago, hinge on the effects that my personal consumption would have on decreasing factory farming (and sometimes on climate change). I now don’t find this line of thinking persuasive - my personal consumption decisions just have such a tiny effect compared to my career/donation decisions that it feels like I shouldn’t pay much attention to their direct consequences (beyond possibly donating to offset them).
But there are three other arguments which seem more compelling. First is a deontological argument: if you think something is a moral atrocity, you shouldn’t participate in it, even if you offset the effects of your contribution. In general, my utilitarian intuitions are much stronger than my deontological ones, but I do think that following deontological principles is often a very good heuristic for behaving morally. The underlying reason is that humans by default think more naturally in terms of black-and-white categories than shades of grey. As Yudkowsky writes:
> Any rule that's not labeled "absolute, no exceptions" lacks weight in people's minds. So you have to perform that the "Don't kill" commandment is absolute and exceptionless (even though it totally isn't), because that's what it takes to get people to even hesitate. To stay their hands at least until the weight of duty is crushing them down. A rule that isn't even absolute? People just disregard that whenever.
Without strong rules in place it’s easy to reason our way into all sorts of behaviour. In particular, it’s easy to underestimate the actual level of harm that ce
|
354609fb-4f93-498c-a480-4cd1445d0da7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Was the K-T event a Great Filter?
I've been wondering about this question for a while, but some new research released today reminded me of it:
> The seasonal timing of the catastrophic end-Cretaceous bolide impact places the event at a particularly sensitive stage for biological life cycles in the Northern Hemisphere. In many taxa, annual reproduction and growth take place during spring. Species with longer incubation times, such as non-avian reptiles, including pterosaurs and most dinosaurs, were arguably more vulnerable to sudden environmental perturbations than other groups43 (for example, birds).
> Similarly, large-scale wildfires raging across the Southern Hemisphere9,34,41 may have been evaded by hibernating mammals that were already sheltered in burrows34,41 in anticipation of austral winter. Additional modes of seasonal dormancy, torpor and/or aestivation, which are nowadays practised by various mammals46,47 as well as certain amphibians, birds and crocodilians48, could have facilitated further underground survival.
The article speculates that the particular time of year at which the asteroid struck is responsible for the pattern of extinctions that resulted. I have also heard it suggested (don't remember where) that the particular location of the impact - in shallow water just off the coast - had a different effect than if it had hit either on land or in the deep sea (i.e. the large majority of Earth's surface). And it seems somewhat odd that an impact of that size should've occurred in the so recent past:
> Impacts of asteroids spanning more than 10 kilometers are rare and happen about every 250 to 500 million years. Having one such event only 66 million years ago is therefore mildly special.
Is the peculiarity of the K-T event yet another example of "fine-tuning" explained by the anthropic principle?
Consider (reposting from before) the fossil evidence of the progression in intelligence over time:
The first chart shows the encephalization quotient (EQ) of various lineages over
|
1a414b6d-7139-4e7c-a0a6-3c9a424db2f7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Spiritual" techniques that actually work thread
This thread is for:
- Perfectly natural and functional ideas that came from a spiritual, religious, occultist, parapsychologist etc. source (perhaps with some "baggage")
- Techniques that are a bit difficult to explain and may be seen by the gullible as magic, but they actually seem to do something, even if that something is just a novel way to trick the brain.
Both things that are actually useful and "stage tricks" are accepted in this thread.
|
a37eea15-d7d2-463f-859d-b2ee9d565e86
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How to treat problems of unknown difficulty
Crossposted from the Global Priorities Project
This is the first in a series of posts which take aim at the question: how should we prioritise work on problems where we have very little idea of our chances of success? In this post we’ll see some simple models-from-ignorance which allow us to produce some estimates of the chances of success from extra work. In later posts we’ll examine the counterfactuals to estimate the value of the work. For those who prefer a different medium, I gave a talk on this topic at the Good Done Right conference in Oxford this July.
Introduction
How hard is it to build an economically efficient fusion reactor? How hard is it to prove or disprove the Goldbach conjecture? How hard is it to produce a machine superintelligence? How hard is it to write down a concrete description of our values?
These are all hard problems, but we don’t even have a good idea of just how hard they are, even to an order of magnitude. This is in contrast to a problem like giving a laptop to every child, where we know that it’s hard but we could produce a fairly good estimate of how much resources it would take.
Since we need to make choices about how to prioritise between work on different problems, this is clearly an important issue. We can prioritise using benefit-cost analysis, choosing the projects with the highest ratio of future benefits to present costs. When we don’t know how hard a problem is, though, our ignorance makes the size of the costs unclear, and so the analysis is harder to perform. Since we make decisions anyway, we are implicitly making some judgements about when work on these projects is worthwhile, but we may be making mistakes.
In this article, we’ll explore practical epistemology for dealing with these problems of unknown difficulty.
Definition
We will use a simplifying model for problems: that they have a critical threshold D such that the problem will be completely solved when D resources are expended, and not at all before that.
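As a toy illustration of this threshold model: if we are ignorant of D and, purely for the sake of the example, put a log-uniform prior on it between some bounds, then the chance that a given amount of resources solves the problem is easy to compute (the prior and the bounds below are assumptions for illustration, not the model developed later in the series).

```python
import numpy as np

def p_solved(resources, d_min=1.0, d_max=1e6):
    """Probability the problem is solved after spending `resources`,
    under a log-uniform prior over the unknown threshold D."""
    if resources <= d_min:
        return 0.0
    if resources >= d_max:
        return 1.0
    return np.log(resources / d_min) / np.log(d_max / d_min)

# Under this prior, each doubling of resources adds the same absolute
# increment of success probability, wherever you are between the bounds.
for r in [10, 100, 1_000, 10_000]:
    print(r, round(p_solved(r), 3))
```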
|
48666bff-0598-438b-b397-d7a4ec757928
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Announcing “Key Phenomena in AI Risk” (facilitated reading group)
TLDR: “Key Phenomena in AI Risk” is a 7 week-long, facilitated reading group. It is aimed at people interested in conceptual AI alignment research, in particular from fields such as philosophy, systems research, biology, cognitive and social sciences.
The program will run between July and August 2023. Sign up here by May 28th.
What?
The “Key Phenomena in AI risk” reading curriculum provides an extended introduction to some key ideas in AI risk, in particular risks from misdirected optimization or 'consequentialist cognition'. As such, it aims to remain largely agnostic of solution paradigms.
See here for a short overview of the curriculum; here for a more extensive summary; and here for the full curriculum.
This is a 7-week long program, which consists of a weekly 90-minute facilitated call to discuss the week’s key phenomena and readings, as well as individual time for reading (min. 2h, more if you would like to explore the optional readings).
The courses are virtual and free of charge.
For Who?
The curriculum is primarily aimed at people interested in conceptual research in AI risk and alignment.
It is designed to be accessible to audiences in, among others, philosophy (of agency, knowledge, power, etc.) and systems research (e.g. biological, cognitive, information-theoretic, social systems, etc.).
When?
The reading groups will be taking place in July and August 2023.
We expect to run 2-6 groups of 4-8 participants each (including 1 facilitator). Each group will be led by a facilitator with substantive knowledge of AI risk.
Overview of the curriculum
* Week 0 is dedicated to getting to know each other and clarifying how the program will work.
* Week 1 focuses on why important features of generally intelligent 'consequentialist cognition' might be algorithmically realizable, and its potential implications.
* Week 2 focuses on why it can be hard to direct such intelligence in a safe and beneficial direction.
* Week 3 discusses instrumental convergen
|
d137ebc0-35c5-4b32-8a90-3b5d0374378d
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Plan for mediocre alignment of brain-like [model-based RL] AGI
*(This post is a simpler, more self-contained, and more pedagogical version of [Post #14](https://www.lesswrong.com/posts/QpHewJvZJFaQYuLwH/intro-to-brain-like-agi-safety-14-controlled-agi) of [Intro to Brain-Like AGI Safety](https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8).)*
*(Vaguely related to [this Alex Turner post](https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem) and [this John Wentworth post](https://www.lesswrong.com/posts/w4aeAFzSAguvqA5qu/how-to-go-from-interpretability-to-alignment-just-retarget).)*
I would like to have a technical plan for which there is a *strong robust reason* to believe that we’ll get an aligned AGI and a good future. This post is not such a plan.
However, I also don’t have a strong reason to believe that this plan *wouldn’t* work. Really, I want to throw up my hands and say “I don’t know whether this would lead to a good future or not”. By “good future” here I don’t mean *optimally-good*—whatever that means—but just “much better than the world today, and certainly much better than a universe full of paperclips”. I currently have no plan, not even a vague plan, with any prayer of getting to an optimally-good future. That would be a much narrower target to hit.
Even so, that makes me more optimistic than at least some people.[[1]](#fn6gkdz6spdl) Or at least, more optimistic about this specific part of the story. In general I think many things can go wrong as we transition to the post-AGI world—see discussion by [Dai](https://www.lesswrong.com/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk) & [Soares](https://www.lesswrong.com/posts/ervaGwJ2ZcwqfCcLx/agi-ruin-scenarios-are-likely-and-disjunctive)—and *overall* I feel very doom-y, particularly for reasons [here](https://www.lesswrong.com/posts/LFNXiQuGrar3duBzJ/what-does-it-take-to-defend-the-world-against-out-of-control).
This plan is specific to the possible future scenario (a.k.a. “threat model” if you’re a doomer like me) that future AI researchers will develop “brain-like AGI”, i.e. learning algorithms that are similar to the brain’s within-lifetime learning algorithms. ([I am *not* talking about evolution-as-a-learning-algorithm](https://www.lesswrong.com/posts/fDPsYdDtkzhBp9A8D/intro-to-brain-like-agi-safety-8-takeaways-from-neuro-1-2-on#8_3_1_The__Genome___ML_code__analogy).) These algorithms, [I claim](https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8), are in the general category of model-based reinforcement learning. Model-based RL is a big and heterogeneous category, but I suspect that for any kind of model-based RL AGI, this plan would be at least somewhat applicable. For very different technological paths to AGI, this post is probably pretty irrelevant.
But anyway, *if* someone published an algorithm for x-risk-capable brain-like AGI tomorrow, and we urgently needed to do *something*, this blog post is more-or-less what I would propose to try. It’s the least-bad plan that I currently know.
So I figure it’s worth writing up this plan in a more approachable and self-contained format.
1. Intuition: Making a human into a moon-lover (“selenophile”)
==============================================================
Try to think of who is the coolest / highest-status-to-you / biggest-halo-effect person in your world. (Real or fictional.) Now imagine that this person says: “You know what’s friggin awesome? The moon. I just love it. The moon is the best.”
You stand there with your mouth agape, muttering to yourself in hushed tones: “Wow, huh, the moon, yeah, I never thought about it that way.” (But 100× moreso. Maybe you’re on some psychedelic at the time, or this is happening during your impressionable teenage years, or whatever.) You basically transform into a “moon fanboy” / “moon fangirl” / “moon nerd” / “selenophile”.
How would that change your motivations and behaviors going forward?
* You’re probably going to be much more enthusiastic about anything associated with the moon.
* You’re probably going to spend a lot more time gazing at the moon when it’s in the sky.
* If there are moon-themed trading cards, maybe you would collect them.
* If NASA is taking volunteers to train as astronauts for a trip to the moon, maybe you’d enthusiastically sign up.
* If a supervillain is planning to blow up the moon, you’ll probably be extremely opposed to that, and motivated to stop them.
Hopefully this is all intuitive so far.
What’s happening mechanistically in your brain? As background, [I think](https://www.lesswrong.com/posts/qNZSBqLEh4qLRqgWW/intro-to-brain-like-agi-safety-6-big-picture-of-motivation) we should say that one part of your brain (the cortex, more-or-less) has “thoughts”, and another part of your brain (the basal ganglia, more-or-less) assigns a **“value” (in RL terminology) a.k.a. “valence” (in psych terminology)** to those thoughts.
And what happened in the above intervention is that your value function was edited such that thoughts-involving-the-moon would get very positive valence. Thoughts-involving-the-moon include just thinking about the moon by itself, but also include things like “the idea of collecting moon trading cards” and “the idea of going to the moon”.
Slightly more detail: As a simple and not-too-unrealistic model, we can imagine that the world-model is *compositional*, and that the value function is *linearly additive* over the compositional pieces. So if a thought entails imagining a moon poster hanging on the wall, the valence of that thought would be some kind of weighted average of your brain’s valence for “the moon”, and your brain’s valence for “poster hanging on the wall”, and your brain’s valence for “white circle on a black background”, etc., with weights / details depending on precisely how you’re thinking about it (e.g. which aspects you’re attending to, what categories / analogies you’re implicitly invoking, etc.).
So looking at the moon becomes positive-valence, but so do moon-themed trading cards, since the latter has “the moon” as one piece of the composite thought. Meanwhile the thought “A supervillain is going to blow up the moon” becomes negative-valence for technical reasons in the footnote→[[2]](#fn3uyfu52v9bk).
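(Here's a tiny numerical sketch of that "linearly additive over compositional pieces" picture, including the moon-valence edit; the concepts, weights, and numbers are all made up for illustration.)

```python
# Valence assigned to individual concepts (numbers are invented).
valence = {
    "the moon": 0.0,
    "poster on the wall": 0.2,
    "trading cards": 0.1,
}

def thought_valence(components):
    """components: dict mapping concept -> attention weight (weights sum to 1)."""
    return sum(w * valence[c] for c, w in components.items())

# "A moon poster hanging on the wall", attended to mostly as a moon-thing:
poster_thought = {"the moon": 0.7, "poster on the wall": 0.3}
print(thought_valence(poster_thought))   # ~0.06: basically neutral

# The intervention: edit the value function so "the moon" is hugely positive.
valence["the moon"] = 10.0
print(thought_valence(poster_thought))   # ~7.06: the same thought is now very appealing
```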
Anyway, assigning a positive versus negative valence to the concept “the moon” is objectively pretty weird. What in god’s name does it mean for "the moon" to be good or bad? It doesn’t even make sense. Yet people totally do that. They’ll even argue with each other about what valence assignment is “correct”.
(It makes sense to declare that an action or plan is good: you can do it! And it makes sense to declare that a state-of-the-world is good: you can try to bring it about! But the concept "the moon", in and of itself, is none of those things. I strongly recommend Scott Alexander’s blog post [Ethnic Tension and Meaningless Arguments](https://slatestarcodex.com/2014/11/04/ethnic-tension-and-meaningless-arguments/) musing on this topic.)
To be explicit, I think the ability to assign valence to concepts—even when doing so kinda makes no sense—is not learned, but rather a fundamental part of how brain learning algorithms work—it’s right there in the source code, so to speak. I think it’s at least plausibly how future AGIs will work too.
1.1 But wouldn’t a smart person recognize that “thinking the moon is awesome” is stupid and incoherent?
-------------------------------------------------------------------------------------------------------
Yes! A smart person would indeed realize that assigning a positive valence to the moon is not really a thing that makes any sense.
But think about what happens when you’re doing ethical reasoning, or more generally / mundanely, when you’re deciding what to do: (1) you think a thought, (2) notice what its valence is, (3) repeat. There’s a lot more going on, but ultimately your motivations have to ground out in the valence of different thoughts, one way or the other.
Suppose I tell you “You really ought to put pebbles in your ears.” You say “Why?” And I say “Because, y’know, your ears, they don’t have any pebbles in them, but they really should.” And again you say “Why?” …At some point, this conversation has to ground out with something that you find *inherently, intuitively* positive-valence or negative-valence, in and of itself. Right?
And if I replace this dialogue with a monologue, where it’s just you in an empty room reflecting on what to do with your life, the same principle applies.
Now, as a human, you already have a whole complicated value function assigning positive and negative valence to all sorts of things, thanks to a lifetime of updates (ultimately tracing to reward function calculations centered around your [hypothalamus & brainstem](https://www.lesswrong.com/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and)). But if we intervene to assign a *high enough* valence to the moon, compared to the preexisting valence of everything else in your world-model (justice and friendship and eating etc.), then it’s *eventually* going to shift your behavior towards—well I don’t know *exactly*, but towards activities and goals and plans and philosophies and values that heavily involve your “moon” concept.
2. Analogously, aligning an AGI to “human flourishing”
======================================================
Let’s put aside the question of bootstrapping (see FAQ below) and assume that I have somehow built a brain-like AGI with some basic understanding of the world and ability to plan and get around. Assuming that AGI has already seen lots of human language, it will have certainly learned the human concept “human flourishing”—since after all it needs to understand what humans mean when they utter that specific pair of words. So then we can go into the AI and edit its value function such that whatever neural activations are associated with “human flourishing” get an extremely high value / valence. Maybe just to be safe, we can set the value/valence of everything else in the AGI’s world to be zero. And bam, now the AI thinks that the concept “human flourishing” is really great, and that feeling will influence how it assesses future thoughts / actions / plans.
Just as the previous section involved turning you into a “moon fanboy/fangirl”, we have now likewise made the AGI into a “human flourishing fanAGI”.
…And then what happens? I don’t know! It seems very hard to predict. The AGI has a “human flourishing” concept which is really a not-terribly-coherent bundle of pattern-match associations, the details of which are complex and hard to predict. And then the AGI will assess the desirability of thoughts / plans / actions based on how well they activate that concept. Some of those thoughts will be self-reflective, as it deliberates on the meaning of life etc. Damned if I know exactly what the AGI is going to do at the end of the day. But it seems at least plausible that it will do things that I judge as good, or even great, i.e. things vaguely in the category of “actualizing human flourishing in the world”.
Again, if a “moon fanboy/fangirl” would be very upset at the idea of the moon disappearing forever in a puff of smoke, then one might analogously hope that an extremely smart and powerful “human flourishing fanAGI” would be very upset at the idea of human flourishing disappearing from the universe, and would endeavor to prevent that from happening.
3. FAQ
======
**Q:** Wouldn’t the AGI self-modify to make itself falsely believe that there’s a lot of human flourishing? Or that human flourishing is just another term for hydrogen?
**A:** No, for the same reason that, if a supervillain is threatening to blow up the moon, and I think the moon is super-cool, I would not self-modify to make myself falsely believe that “the moon” is a white circle that I cut out of paper and taped to my ceiling.
The technical reason is: Self-modifying is a bit complicated, so I would presumably self-modify because I had a *plan* to self-modify. A plan is a type of thought, and I’m using my *current* value function to evaluate the appeal (valence) of thoughts. Such a thought would score poorly under my current values (under which the moon is not in fact a piece of paper taped to the ceiling), so I wouldn’t execute that plan. More discussion [here](https://www.lesswrong.com/posts/vpdJz4k5BgGzuGo7A/intro-to-brain-like-agi-safety-9-takeaways-from-neuro-2-2-on#9_5_2_Relation_to__observation_utility_agents_).
**Q:** Won’t the AGI intervene to prevent humans from turning into superficially-different transhumans? After all, “transhuman flourishing” isn’t a great pattern-match to “human flourishing”, right?
**A:** Hmm, yeah, that seems possible. And I think there are a lot of other issues like that too. As mentioned at the top, I never claimed that this was a great plan, only that it seems like it can plausibly get us to somewhere better than the status quo. I don’t have any better ideas right now.
**Q:** Speaking of which, why “human flourishing” in the first place? Why not “[CEV](https://www.lesswrong.com/tag/coherent-extrapolated-volition)”? Why not [“I am being corrigible & helpful”](https://www.lesswrong.com/posts/fkLYhTQteAu5SinAc/corrigibility)?
**A:** Mostly I don’t know—I consider the ideal target an open question and discuss it more [here](https://www.lesswrong.com/posts/QpHewJvZJFaQYuLwH/intro-to-brain-like-agi-safety-14-controlled-agi). (It also doesn’t have to be just one thing.) But FWIW I can say what I was thinking when I opted to pick “human flourishing” as my example for this post, rather than either of those other two things.
First, why didn’t I pick “CEV”? Well in my mind, the concept “human flourishing” has a *relatively* direct grounding in various types of (abstractions over) plausible real-world situations—the kind of thing that could be pattern-matched to pretty well. Whereas when I imagine CEV, it’s this very abstruse philosophical notion in my mind. If we go by the “distance metric” of “how my brain pattern-matches different things with each other”, the things that are “similar” to CEV are, umm, philosophical blog posts and thought experiments and so on. In other words, at least for me, CEV isn’t a grounded real-world thing. I have no clue what it would actually look like in the end. If you describe a scenario and ask if it’s a good match to “maximizing CEV”, I would have absolutely no idea. So a plan centered around an AGI pattern-matching to the “CEV” concept seems like it just wouldn’t work.
(By the same token, a commenter in my last post on this [suggested](https://www.lesswrong.com/posts/QpHewJvZJFaQYuLwH/intro-to-brain-like-agi-safety-14-controlled-agi?commentId=B9vxxRpGcJp8KsD6N) that “human flourishing” was inferior to “Do things that tend to increase the total subjective utility (weighted by amount of consciousness) of all sentient beings". Yeah sure, that thing sounds pretty great, but it strikes me as a complicated multi-step composite thought, whereas what I’m talking about needs to be an atomic concept / category / chunk in the world-model, I think.)
Second, why not “I am being corrigible & helpful?” Well, I see two problems with that. One is: [“the first-person problem”](https://www.lesswrong.com/posts/QpHewJvZJFaQYuLwH/intro-to-brain-like-agi-safety-14-controlled-agi#14_3_Supervising_the_Thought_Assessors__and_the__first_person_problem_): Unless we have great interpretability (and I hope we do!), the only way to identify the neural activations for “I am being corrigible & helpful” is to catch the AGI itself in the act of being actually sincerely corrigible & helpful, and flag the corresponding neural activations. But we can’t tell from the AGI’s actions whether that’s happening—as opposed to the AGI *acting* corrigible & helpful for nefarious purposes. By contrast, the “human flourishing” concept can probably be picked up decently well from having the AGI passively watch YouTube and seeing what neural activations fire when a character is literally saying the words “human flourishing”, for example. The other problem is: I’m slightly skeptical that a corrigible docile helper AGI should be what we’re going for in the first place, for reasons [here](https://www.lesswrong.com/posts/LFNXiQuGrar3duBzJ/what-does-it-take-to-defend-the-world-against-out-of-control). (There’s also an objection that a corrigible helper AGI is almost guaranteed to be reflectively-unstable, or else not very capable, but I mostly don’t buy that objection for reasons [here](https://www.lesswrong.com/posts/KDMLJEXTWtkZWheXt/consequentialism-and-corrigibility).)
**Q:** Wait hang on a sec. If we identify the “human flourishing” concept by “which neurons are active when somebody says the words ‘human flourishing’ while the AGI watches a YouTube video”, then how do you know that those neural activations are really “human flourishing” and not “person saying the words ‘human flourishing’”, or “person saying the words ‘human flourishing’ in a YouTube video”, etc.?
**A:** Hmm, fair enough. That’s a potential failure mode. Hopefully we’ll be more careful than just doing the YouTube thing and pressing “Go” on the AGI value-function-editing-routine. Specifically, once we get a candidate concept inside the AGI’s unlabeled world-model, I propose to do some extra work to try to confirm that this concept is indeed the “human flourishing” concept we were hoping for. That extra work would probably be broadly in the category of interpretability—e.g. studying when those neurons are active or not, what they connect to, etc.
(As a special case, it’s particularly important that the AGI winds up thinking that the real world is real, and that YouTube videos are not; making that happen might turn out to require at least some amount of training the AGI with a robot body in the real world, which in turn might pose [competitiveness concerns](https://www.lesswrong.com/posts/Sd4QvG4ZyjynZuHGt/intro-to-brain-like-agi-safety-12-two-paths-forward#12_5_3_So_at_the_end_of_the_day__how_should_we_handle_life_experience_).)
**Q:** If we set the valence of everything apart from “human flourishing” to zero, won’t the AGI just be totally incompetent? For example, wouldn’t it neglect to recharge its batteries, if the thought of recharging its batteries has zero valence?
**A:** In principle, an omniscient agent could get by with every other valence being zero, thanks to explicit planning / means-end reasoning. For example, it might think the thought “I’m going to recharge my battery *and by doing so, eventually increase human flourishing*” and that composite thought would be appealing (cf. the compositionality discussion above), so the AGI would do it. That said, for non-omniscient (a.k.a. real) agents, I think that’s probably unrealistic. It’s probably necessary-for-capabilities to put positive valence directly onto instrumentally-useful thoughts and behaviors—it’s basically a method of “caching” useful steps. I think the brain has an algorithm to do that, in which, if X (say, keeping a to-do list) is instrumentally useful for Y (something something human flourishing), and Y has positive valence, then X gets some positive valence too, at least after a couple repetitions. So maybe, after we perform our intervention that sets “human flourishing” to a high valence, we can set all the other preexisting valences to *gradually* decay away, and meanwhile run that algorithm to give fresh positive valences to instrumentally-useful thoughts / actions / plans.
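(Here's a toy sketch of that decay-plus-caching story; the update rule and constants are illustrative guesses, not a claim about the actual brain algorithm or the post's proposal.)

```python
# Valence after the intervention: "human flourishing" pinned high, all else low.
valence = {"human flourishing": 10.0, "keep a to-do list": 0.0, "old habit": 2.0}

DECAY = 0.9        # gradual decay of pre-existing valences
CACHE_RATE = 0.3   # fraction of the gap to the goal's valence cached onto the useful step

def step(instrumental_pairs, protected=("human flourishing",)):
    # Everything except the deliberately pinned concept slowly decays...
    for c in valence:
        if c not in protected:
            valence[c] *= DECAY
    # ...while thoughts that were instrumentally useful for a high-valence goal
    # pick up some of that goal's valence.
    for x, y in instrumental_pairs:
        valence[x] += CACHE_RATE * (valence[y] - valence[x])

for _ in range(5):
    step([("keep a to-do list", "human flourishing")])

print(valence)
# The to-do list gains real valence but stays below "human flourishing";
# the unrelated old habit fades away.
```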
**Q:** Whoa, but wait, if you do that, then in the long term the AGI will have positive valence on *both* “human flourishing” *and* various instrumentally-useful behaviors / subgoals that are not themselves “human flourishing”. And [the source code doesn’t have any fundamental distinction between instrumental & final goals](https://www.lesswrong.com/posts/vpdJz4k5BgGzuGo7A/intro-to-brain-like-agi-safety-9-takeaways-from-neuro-2-2-on#9_2_2_Instrumental___final_preferences_seem_to_be_mixed_together). So what if it reflects on the meaning of life and decides to pursue the latter at the expense of human flourishing?
**A:** Hmm. Yeah I guess that could happen. It also might not. I dunno.
I do think that, in this part of the learning algorithm, if X ultimately gets its valence from contributing to high-valence Y, then we wind up with X having *some* valence, but not as much as Y has. So it’s not unreasonable to hope that the “human flourishing” valence will remain much more positive than the valence of anything else, and thus “human flourishing” has a decent chance of carrying the day when the AGI self-reflects on what it cares about and what it should do in life. Also, “carrying the day” is a stronger claim than I need to make here; I’m really just hoping that its good feelings towards “human flourishing” will not be *crushed entirely*, and that hope is even more likely to pan out.
**Q:** What about [ontological crises](https://www.lesswrong.com/tag/ontological-crisis) / what Stuart Armstrong calls [“Concept Extrapolation”](https://www.lesswrong.com/s/u9uawicHx7Ng7vwxA) / what Scott Alexander calls [“the tails coming apart”](https://slatestarcodex.com/2018/09/25/the-tails-coming-apart-as-metaphor-for-life/)? In other words, as the AGI learns more and/or considers out-of-distribution plans, it might come to find that the web-of-associations corresponding to the “human flourishing” concept is splitting apart. Then what does it do?
**A:** I talk about that much more in [§14.4 here](https://www.lesswrong.com/posts/QpHewJvZJFaQYuLwH/intro-to-brain-like-agi-safety-14-controlled-agi#14_4_Conservatism_and_concept_extrapolation), but basically I don’t know. The plan here is to just hope for the best. More specifically: As the AGI learns new things about the world, and as the world itself changes, the “human flourishing” concept will stop pointing to a coherent [“cluster in thingspace”](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace), and the AGI will decide somehow or other what it cares about, in its new understanding of the world. According to the plan discussed in this blog post, we have no control over how that process will unfold and where it will end up. Hopefully somewhere good, but who knows?
**Q:** This plan needs a “bootstrapping” step, where the AGI needs to be smart enough to know what “human flourishing” is before we intervene to give that concept a high value / valence. How does that work?
**A:** I dunno. We can just set the AGI up as if we were maximizing capabilities, and hope that, during training, the AGI will come to understand the “human flourishing” concept long before it is willing and able to undermine our plans, create backup copies, obfuscate its thoughts, etc. And then (hopefully) we can time our valence-editing intervention to happen within that gap.
Boxing could help here, as could (maybe) doing the first stage of training in passive (pure self-supervised) learning mode.
To be clear, I’m not denying that this is a possible failure mode. But it doesn’t seem like an unsolvable problem.
**Q:** What else could go wrong?
**A:** The motivations of this AGI would be *very* different from the motivations of any human (or animal). So I feel some very general cloud of uncertainty around this plan. I have no point of reference; I don’t know what the “unknown unknowns” are. So I assume other things could go wrong but I’m not sure what.
**Q:** If this is a mediocre-but-not-totally-doomed plan, then what’s the next step to make this plan incrementally better? Or what’s the next step to learn more about whether this plan would actually work?
**A:** There’s some more discussion [here](https://www.lesswrong.com/posts/QpHewJvZJFaQYuLwH/intro-to-brain-like-agi-safety-14-controlled-agi) but I mostly don’t know. ¯\\_(ツ)\_/¯
I’m mostly spending my research time thinking about something superficially different from “directly iterating on this plan”, namely reverse-engineering human social instincts—see [here](https://www.lesswrong.com/posts/qusBXzCpxijTudvBB/my-agi-safety-research-2022-in-review-and-plans#2__Second_half_of_2022__1_3___My_main_research_project) for a very short summary of what that means and why I’m doing it. I think there’s some chance that this project will help illuminate / “[deconfuse](https://www.lesswrong.com/tag/deconfusion)” the mediocre plan discussed here, but it might also lead to a somewhat different and hopefully-better plan.
This is what “human flourishing” looks like, according to Stable Diffusion (top) and DALL-E 2 (bottom). 🤔

1. **[^](#fnref6gkdz6spdl)** For example, I commonly hear things like “We currently have no plan with any prayer of aiming a powerful AGI at any particular thing whatsoever; our strong default expectation should be that it optimizes something totally random like tiny molecular squiggles.” E.g. Nate Soares suggests [here](https://www.lesswrong.com/posts/NJYmovr9ZZAyyTBwM/what-i-mean-by-alignment-is-in-large-part-about-making?commentId=JQe23mysvYothKZo5) that he has ≳90% credence on not even getting anywhere remotely close to an intended goal / motivation, if I’m understanding him correctly.
Incidentally, this is also relevant to [s-risks](https://www.lesswrong.com/tag/risks-of-astronomical-suffering-s-risks): There’s a school of thought that alignment research might be bad for s-risk, because our strong default expectation right now is a universe full of tiny molecular squiggles, which kinda sucks but at least it doesn’t involve any suffering, whereas alignment research could change that. But that’s not *my* strong default expectation. I think the plan I discuss here would be a really obvious thing that would immediately pop into the head of any future AGI developer (assuming we’re in the brain-like AGI development path), and this plan would have at least a decent chance of leading us a future with lots of sentient life, for better or worse.
2. **[^](#fnref3uyfu52v9bk)** I think if you imagine a supervillain blowing up the moon, it sorta manifests as a two-step sequential thought in which the moon is *first* present and *then* absent. I think such a thought gets the opposite-sign valence of the moon-concept itself, i.e. negative valence in this case, thanks to something vaguely related to the time derivative that shows up in [Temporal Difference learning](https://en.wikipedia.org/wiki/Temporal_difference_learning). I will omit the details, about which I still have a bit of uncertainty anyway, but in any case I expect these details to be obvious by the time we have AGI.
|
0806acf4-cd17-4870-85d7-7d72bd2d5b59
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Transformative AI issues (not just misalignment): an overview
If this ends up being the [most important century](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd) due to advanced AI, what are the key factors in whether things go well or poorly?
More detail on why AI could make this the most important century
In [The Most Important Century](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd), I argued that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.
[This page](https://www.cold-takes.com/most-important-century/) has a ~10-page summary of the series, as well as links to an audio version, podcasts, and the full series.
The key points I argue for in the series are:
* **The long-run future is radically unfamiliar.** Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.
* **The long-run future could come much faster than we think,** due to a possible AI-driven productivity explosion.
* The relevant kind of **AI looks like it will be developed this century** - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.
* These claims seem too "wild" to take seriously. But there are a lot of reasons to think that **we live in a wild time, and should be ready for anything.**
* We, the people living in this century, have the chance to have a huge impact on huge numbers of people to come - if we can make sense of the situation enough to find helpful actions. But right now, **we aren't ready for this.**
A lot of my previous writings have focused specifically on the [threat of “misaligned AI”](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn/): AI that could have dangerous *aims of its own* and [defeat all of humanity](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/). In this post, I’m going to zoom out and give a broader overview of multiple issues transformative AI could raise for society - with an emphasis on **issues we might want to be thinking about *now* rather than waiting to address as they happen.**
My discussion will be very unsatisfying. “What are the key factors in whether things go well or poorly with transformative AI?” is a massive topic, with lots of angles that have gotten almost no attention and (surely) lots of angles that I just haven’t thought of at all. My one-sentence summary of this whole situation is: [we’re not ready for this](https://www.cold-takes.com/most-important-century/#were-not-ready-for-this).
But hopefully this will give some sense of what sorts of issues should clearly be on our radar. And hopefully it will give a sense of why - out of all the issues we need to contend with - I’m as focused on the threat of misaligned AI as I am.
Outline:
* First, I’ll briefly clarify what kinds of issues I’m trying to list. I’m looking for ways the future could look durably and dramatically different depending on how we navigate the development of transformative AI - such that **doing the right things ahead of time could make a big, lasting difference.**
* Then, I’ll list candidate issues:
+ **Misaligned AI.** I touch on this only briefly, since I’ve discussed it at length in [previous pieces](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w). The short story is that we should try to avoid AI ending up with dangerous goals of its own and [defeating humanity](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/). (The remaining issues below seem irrelevant if this happens!)
+ **Power imbalances.** As AI speeds up science and technology, it could cause some country/countries/coalitions to become enormously powerful - so it matters a lot which one(s) lead the way on transformative AI. (I fear that this concern is generally overrated compared to misaligned AI, but it is still very important.) There could also be dangers in overly widespread (as opposed to concentrated) AI deployment.
+ **Early applications of AI.** It might be that what early AIs are used for durably affects how things go in the long run - for example, whether early AI systems are used for education and truth-seeking, rather than manipulative persuasion and/or entrenching what we already believe. We might be able to affect which uses are predominant early on.
+ **New life forms.** Advanced AI could lead to new forms of intelligent life, such as AI systems themselves and/or [digital people](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH). Many of the frameworks we’re used to, for ethics and the law, could end up needing quite a bit of rethinking for new kinds of entities (for example, should we allow people to make as many copies as they want of entities that will predictably vote in certain ways?) Early decisions about these kinds of questions could have long-lasting effects.
+ **Persistent policies and norms.** Perhaps we ought to be identifying particularly important policies, norms, etc. that seem likely to be durable even through rapid technological advancement, and try to improve these as much as possible before transformative AI is developed. (These could include things like a better social safety net suited to high, sustained unemployment rates; better regulations aimed at avoiding bias; etc.)
+ **Speed of development.** Maybe human society just isn’t likely to adapt well to rapid, radical advances in science and technology, and finding a way to limit the pace of advances would be good.
* Finally, I’ll discuss how I’m thinking about which of these issues to prioritize at the moment, and why misaligned AI is such a focus of mine.
* An appendix will say a small amount about whether the long-run future seems likely to be better or worse than today, in terms of [quality of life](https://forum.effectivealtruism.org/posts/9tHsmALcrydGXG2Ke/has-life-gotten-better), assuming we navigate the above issues non-amazingly but non-catastrophically.
The kinds of issues I’m trying to list
--------------------------------------
One basic angle you could take on AI is:
“AI’s main effect will be to speed up [science and technology a lot](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AmxxnazJcBWzWEeqj#Explosive_scientific_and_technological_advancement). This means humans will be able to do *all* the things they were doing before - the good and the bad - but more/faster. So basically, we’ll end up with the same future we would’ve gotten without AI - just sooner.
“Therefore, there’s no need to prepare in advance for anything in particular, beyond what we’d do to work toward a better future *normally* (in a world with no AI). Sure, lots of weird stuff could happen as science and technology advance - but that was already true, and many risks are just too hard to predict now and easier to respond to as they happen.”
I don’t agree with the above, but I *do* think it’s a good starting point. I think we shouldn’t be listing everything that might happen in the future, as AI leads to advances in science and technology, and trying to prepare for it. Instead, we should be asking: **“if [transformative AI](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd) is coming in the next few decades, how does this *change the picture* of what we should be focused on, beyond just speeding up what’s going to happen anyway?**”
And I’m going to try to focus on **extremely high-stakes issues -** ways I could imagine the future looking **durably and dramatically different** depending on how we navigate the development of transformative AI.
Below, I’ll list some candidate issues fitting these criteria.
Potential issues
----------------
### Misaligned AI
I won’t belabor this possibility, because the [last several pieces](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w) have been focused on it; this is just a quick reminder.
In a world without AI, the main question about the long-run future would be how humans will end up treating each other. But if powerful AI systems will be developed in the coming decades, we need to contend with the possibility that these AI systems will end up having [goals of their own](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn) - and [displacing humans](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/) as the species that determines how things will play out.
Why would AI "aim" to defeat humanity?
A [previous piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn) argued that if today’s AI development methods lead directly to powerful enough AI systems, disaster is likely by default (in the absence of specific countermeasures).
In brief:
* Modern AI development is essentially based on “training” via trial-and-error.
* If we move forward incautiously and ambitiously with such training, and if it gets us all the way to very powerful AI systems, then such systems will likely end up *aiming for certain states of the world* (analogously to how a chess-playing AI aims for checkmate)*.*
* And these states will be *other than the ones we intended*, because our trial-and-error training methods won’t be accurate. For example, when we’re confused or misinformed about some question, we’ll reward AI systems for giving the wrong answer to it - unintentionally training deceptive behavior.
* We should expect disaster if we have AI systems that are both (a) [powerful enough](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/) to defeat humans and (b) aiming for states of the world that we didn’t intend. (“Defeat” means taking control of the world and doing what’s necessary to keep us out of the way; it’s unclear to me whether we’d be literally killed or just forcibly stopped[[1]](#fn1) from changing the world in ways that contradict AI systems’ aims.)
(Click to expand) *How* could AI defeat humanity?
In a [previous piece](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/), I argue that AI systems could defeat all of humanity combined, if (for whatever reason) they were aimed toward that goal.
By defeating humanity, I mean gaining control of the world so that AIs, not humans, determine what happens in it; this could involve killing humans or simply “containing” us in some way, such that we can’t interfere with AIs’ aims.
One way this could happen is if AI became extremely advanced, to the point where it had "cognitive superpowers" beyond what humans can do. In this case, a single AI system (or set of systems working together) could imaginably:
* Do its own research on how to build a better AI system, which culminates in something that has incredible other abilities.
* Hack into human-built software across the world.
* Manipulate human psychology.
* Quickly generate vast wealth under the control of itself or any human allies.
* Come up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.
* Develop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.
However, my piece also explores what things might look like if *each AI system basically has similar capabilities to humans.* In this case:
* Humans are likely to deploy AI systems throughout the economy, such that they have large numbers and access to many resources - and the ability to make copies of themselves.
* From this starting point, AI systems with human-like (or greater) capabilities would have a number of possible ways of getting to the point where their total population could outnumber and/or out-resource humans.
* I address a number of possible objections, such as "How can AIs be dangerous without bodies?"
More: [AI could defeat all of us combined](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/)
### Power imbalances
I’ve argued that AI could cause a [dramatic acceleration in the pace of scientific and technological advancement](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AmxxnazJcBWzWEeqj#Explosive_scientific_and_technological_advancement).
(Click to expand) How AI could cause explosive progress
(This section is mostly copied from my [summary of the "most important century" series](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd); it links to some pieces with more detail at the bottom.)
Standard economic growth models imply that **any technology that could fully automate innovation would cause an "economic singularity":** productivity going to infinity this century. This is because it would create a powerful feedback loop: more resources -> more ideas and innovation -> more resources -> more ideas and innovation ...
This loop would not be unprecedented. I think it is in some sense the "default" way the economy operates - for most of economic history up until a couple hundred years ago.
Economic history: more resources -> more people -> more ideas -> more resources ...
But in the "demographic transition" a couple hundred years ago, the "more resources -> more people" step of that loop stopped. Population growth leveled off, and more resources led to richer people instead of more people:
Today's economy: more resources -> ~~more~~ richer people -> same pace of ideas -> ...
The feedback loop could come back if some other technology restored the "more resources -> more ideas" dynamic. One such technology could be the right kind of AI: what I call PASTA, or Process for Automating Scientific and Technological Advancement.
Possible future: more resources -> more AIs -> more ideas -> more resources ...
That means that **our radical long-run future could be upon us very fast** after PASTA is developed (if it ever is).
It also means that if PASTA systems are *misaligned* - pursuing their own non-human-compatible objectives - things could very quickly go sideways.
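As a purely illustrative aside (my own toy numbers, not from the original series), the difference between ordinary growth and this kind of feedback loop can be seen in a few lines of code: if the growth rate itself scales with the stock of ideas/resources, the trajectory is hyperbolic and blows up in finite time rather than compounding at a steady exponential rate.

```python
# Toy sketch (my own illustration, not from the original series): ordinary exponential
# growth vs. a "more resources -> more ideas -> more resources" feedback loop.
# When the growth rate itself rises with the stock, growth is hyperbolic and
# accelerates toward a finite-time blowup instead of a steady exponential climb.

def grow(y0: float, steps: int, dt: float, feedback: bool) -> float:
    """Euler-step dy/dt = y (exponential) or dy/dt = y**2 (feedback loop)."""
    y = y0
    for _ in range(steps):
        rate = y * y if feedback else y
        y += rate * dt
    return y

for steps in (50, 90, 99):  # dt = 0.01, so this is "time" 0.5, 0.9, 0.99
    print(steps,
          round(grow(1.0, steps, 0.01, feedback=False), 2),
          round(grow(1.0, steps, 0.01, feedback=True), 2))
# The exponential column creeps up toward e ~= 2.7; the feedback column accelerates
# sharply as time approaches 1 (the continuous solution there is 1 / (1 - t)).
```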
Key pieces:
* [The Duplicator: Instant Cloning Would Make the World Economy Explode](https://forum.effectivealtruism.org/posts/ZZHhQqHRqQ4ciwLBf/the-duplicator-instant-cloning-would-make-the-world-economy)
* [Forecasting Transformative AI, Part 1: What Kind of AI?](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AmxxnazJcBWzWEeqj)
One way of thinking about this: perhaps (for reasons I’ve [argued previously](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AmxxnazJcBWzWEeqj#Explosive_scientific_and_technological_advancement)) AI could enable the equivalent of hundreds of years of scientific and technological advancement in a matter of a few months (or faster). If so, then developing powerful AI a few months before others could lead to having technology that is (effectively) hundreds of years ahead of others’.
Because of this, it’s easy to imagine that AI could lead to big power imbalances, as whatever country/countries/coalitions “lead the way” on AI development could become far more powerful than others (perhaps analogously to when a few smallish European states took over much of the rest of the world).
One way we might try to make the future go better: maybe it could be possible for different countries/coalitions to strike deals in advance. For example, two equally matched parties might agree in advance to share their resources, territory, etc. with each other, in order to avoid a winner-take-all competition.
What might such agreements look like? Could they possibly be enforced? I really don’t know, and I haven’t seen this explored much.[[1]](#fn1)
Another way one might try to make the future go better is to try to help a *particular* country, coalition, etc. develop powerful AI systems before others do. I previously called this the [“competition” frame](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#The__competition__frame).
I think it is, in fact, enormously important who leads the way on transformative AI. At the same time, I’ve expressed concern that people might overfocus on this aspect of things vs. other issues, for a number of reasons including:
* *I think people naturally get more animated about "helping the good guys beat the bad guys" than about "helping all of us avoid getting a universally bad outcome, for impersonal reasons such as 'we designed sloppy AI systems' or 'we created a dynamic in which haste and aggression are rewarded.'"*
* *I expect people will tend to be overconfident about which countries, organizations or people they see as the "good guys."*
(More [here](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#Why_I_fear__competition__being_overrated__relative_to__caution_).)
Finally, it’s worth mentioning the possible dangers of powerful AI being too widespread, rather than too concentrated. In [The Vulnerable World Hypothesis](https://nickbostrom.com/papers/vulnerable.pdf), Nick Bostrom contemplates potential future dynamics such as “advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions.” In addition to avoiding worlds where AI capabilities end up concentrated in the hands of a few, it could also be important to avoid worlds in which they diffuse too widely, too quickly, before we’re able to assess the risks of widespread access to technology far beyond today’s.
### Early applications of AI
Maybe advanced AI will be useful for some sorts of tasks before others. For example, maybe - by default - advanced AI systems will soon be powerful persuasion tools, and cause wide-scale societal dysfunction before they cause rapid advances in science and technology. And maybe, with effort, we could make it less likely that this happens - more likely that early AI systems are used for education and truth-seeking, rather than manipulative persuasion and/or entrenching what we already believe.
There could be lots of possibilities of this general form: particular ways in which AI could be predictably beneficial, or disruptive, before it becomes an all-purpose accelerant to science and technology. Perhaps trying to map these out today, and push for advanced AI to be used for particular purposes early on, could have a lasting effect on the future.
### New life forms
Advanced AI could lead to new forms of intelligent life, such as AI systems themselves and/or [digital people](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH).
(Click to expand) Digital people: one example of how wild the future could be
In a [previous piece](https://forum.effectivealtruism.org/posts/FbGfnwNda43eFsSgS/digital-people-faq#I_m_having_trouble_picturing_a_world_of_digital_people___how_the_technology_could_be_introduced__how_they_would_interact_with_us__etc__Can_you_lay_out_a_detailed_scenario_of_what_the_transition_from_today_s_world_to_a_world_full_of_digital_people_might_look_like_), I tried to give a sense of just how wild a future with advanced technology could be, by examining one hypothetical technology: "digital people."
To get the idea of digital people, imagine a computer simulation of a specific person, in a virtual environment. For example, a simulation of you that reacts to all "virtual events" - virtual hunger, virtual weather, a virtual computer with an inbox - just as you would.
I’ve argued that digital people would likely be [conscious and deserving of human rights](https://forum.effectivealtruism.org/posts/FbGfnwNda43eFsSgS/digital-people-faq#Could_digital_people_be_conscious__Could_they_deserve_human_rights_) just as we are. And I’ve argued that they could have major impacts, in particular:
* Productivity. Digital people could be copied, just as we can easily make copies of ~any software today. They could also be run much faster than humans. Because of this, digital people could have effects comparable to those of the [Duplicator](https://forum.effectivealtruism.org/posts/ZZHhQqHRqQ4ciwLBf/the-duplicator-instant-cloning-would-make-the-world-economy), but more so: unprecedented (in history or in sci-fi movies) levels of economic growth and productivity.
* Social science. Today, we see a lot of progress on understanding scientific laws and developing cool new technologies, but not so much progress on understanding human nature and human behavior. Digital people would fundamentally change this dynamic: people could make copies of themselves (including sped-up, temporary copies) to explore how different choices, lifestyles and environments affected them. Comparing copies would be informative in a way that current social science rarely is.
* Control of the environment. Digital people would experience whatever world they (or the controller of their virtual environment) wanted. Assuming digital people had true conscious experience (an assumption discussed [in the FAQ](https://forum.effectivealtruism.org/posts/FbGfnwNda43eFsSgS/digital-people-faq#Could_digital_people_be_conscious__Could_they_deserve_human_rights_)), this could be a good thing (it should be possible to eliminate disease, material poverty and non-consensual violence for digital people) or a bad thing (if human rights are not protected, digital people could be subject to scary levels of control).
* Space expansion. The population of digital people might become staggeringly large, and the computers running them could end up distributed throughout our galaxy and beyond. Digital people could exist anywhere that computers could be run - so space settlements could be more straightforward for digital people than for biological humans.
* Lock-in. In today's world, we're used to the idea that the future is unpredictable and uncontrollable. Political regimes, ideologies, and cultures all come and go (and evolve). But a community, city or nation of digital people could be much more stable.
+ Digital people need not die or age.
+ Whoever sets up a "virtual environment" containing a community of digital people could have quite a bit of long-lasting control over what that community is like. For example, they might build in software to reset the community (both the virtual environment and the people in it) to an earlier state if particular things change - such as who's in power, or what religion is dominant.
+ I consider this a disturbing thought, as it could enable long-lasting authoritarianism, though it could also enable things like permanent protection of particular human rights.
I think these effects could be a very good or a very bad thing. How the early years with digital people go could irreversibly determine which.
More:
* [Digital People would be an Even Bigger Deal](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH)
* [Digital People FAQ](https://forum.effectivealtruism.org/posts/FbGfnwNda43eFsSgS/digital-people-faq)
Many of the frameworks we’re used to, for ethics and the law, could end up needing quite a bit of rethinking for new kinds of entities. For example:
* How should we determine which AI systems or digital people are considered to have “rights” and get legal protections?
* What about the right to vote? If an AI system or digital person can be quickly copied billions of times, with each copy getting a vote, that could be a recipe for trouble - does this mean we should restrict copying, restrict voting or something else?
* What should the rules be about engineering AI systems or digital people to have particular beliefs, motivations, experiences, etc.? Simple examples:
+ Should it be illegal to create new AI systems or digital people that will predictably suffer a lot? How much suffering is too much?
+ What about creating AI systems or digital people that consistently, predictably support some particular political party or view?
(For a lot more in this vein, see [this very interesting piece by Nick Bostrom and Carl Shulman](https://nickbostrom.com/propositions.pdf).)
Early decisions about these kinds of questions could have long-lasting effects. For example, imagine someone creating billions of AI systems or [digital people](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH) that have capabilities and subjective experiences comparable to humans, and are deliberately engineered to “believe in” (or at least help promote) some particular ideology (Communism, libertarianism, etc.) If these systems are self-replicating, that could change the future drastically.
Thus, it might be important to set good principles in place for tough questions about how to treat new sorts of digital entities, *before* new sorts of digital entities start to multiply.
### Persistent policies and norms
There might be particular policies, norms, etc. that are likely to stay persistent even as technology is advancing and many things are changing.
For example, how people think about ethics and norms might just inherently change more slowly than technological capabilities change. Perhaps a society that had strong animal rights protections, and general pro-animal attitudes, would maintain these properties all the way through explosive technological progress, becoming a technologically advanced society that treated animals well - while a society that had little regard for animals would become a technologically advanced society that treated animals poorly. Similar analysis could apply to religious values, social liberalism vs. conservatism, etc.
So perhaps we ought to be identifying particularly important policies, norms, etc. that seem likely to be durable even through rapid technological advancement, and try to improve these as much as possible before transformative AI is developed.
One tangible example of a concern I’d put in this category: if AI is going to cause high, persistent technological unemployment, it might be important to establish new social safety net programs (such as universal basic income) *today* - if these programs would be easier to establish today than in the future. I feel less than convinced of this one - first because I [have some doubts](https://forum.effectivealtruism.org/posts/ZNPYMp2uu5zr3Po66/technological-unemployment-ai-vs-most-important-century-ai-1) about how big an issue technological unemployment is going to be, and second because it’s not clear to me why policy change would be easier today than in a future where technological unemployment is a reality. And more broadly, I fear that it's very hard to design *and* (politically) implement policies today that we can be confident will make things durably better as the world changes radically.
### Slow it down?
I’ve named a number of ways in which weird things - such as power imbalances, and some parts of society changing much faster than others - could happen as scientific and technological advancement accelerate. Maybe one way to make the most important century go well would be to simply avoid these weird things by avoiding too-dramatic acceleration. Maybe human society just isn’t likely to adapt well to rapid, radical advances in science and technology, and finding a way to limit the pace of advances would be good.
Any individual company, government, etc. has an incentive to move quickly and try to get ahead of others (or not fall too far behind), but coordinated agreements and/or regulations (along the lines of the “global monitoring” possibility discussed [here](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt#Global_monitoring__noticing_people_about_to_step_on_mines__and_stopping_them_)) could help everyone move more slowly.
### What else?
Are there other ways in which transformative AI would cause particular issues, risks, etc. to loom especially large, and to be worth special attention today? I’m guessing I’ve only scratched the surface here.
What I’m prioritizing, at the moment
------------------------------------
If this is the [most important century](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd), there’s a vast set of things to be thinking about and trying to prepare for, and it’s hard to know what to prioritize.
Where I’m at for the moment:
**It seems very hard to say today what will be desirable in a radically different future.** I wish more thought and attention were going into things like [early applications of AI](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/mPkFheB4EM6pmEC7y/#Early_applications_of_AI); [norms and laws around new life forms](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/mPkFheB4EM6pmEC7y/#New_life_forms); and whether there are [policy changes today that we could be confident in even if the world is changing rapidly and radically.](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/mPkFheB4EM6pmEC7y/#Persistent_policies_and_norms) **But** it seems to me that it would be very hard to be confident in any particular goal in areas like these. Can we really say anything today about what sorts of digital entities should have rights, or what kinds of AI applications we hope come first, that we expect to hold up?
**I feel most confident in two very broad ideas: “It’s bad if AI systems defeat humanity to pursue goals of their own” and “It’s good if good decision-makers end up making the key decisions.”** These map to the [misaligned AI](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/mPkFheB4EM6pmEC7y/#Misaligned_AI) and [power imbalance](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/mPkFheB4EM6pmEC7y/#Power_imbalances) topics - or what I previously called [caution](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#The__caution__frame) and [competition](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#The__competition__frame).
That said, **it also seems hard to know who the “good decision-makers” are.** I’ve definitely observed some of this dynamic: “Person/company A says they’re trying to help the world by aiming to build transformative AI before person/company B; person/company B says they’re trying to help the world by aiming to build transformative AI before person/company A.”
It’s pretty hard to come up with tangible tests of who’s a “good decision-maker.” We mostly don’t know what person A would do with enormous power, or what person B would do, based on their actions today. One possible criterion is that we should arguably have more trust in people/companies who show more [caution](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#The__caution__frame) - people/companies who show willingness to hurt their own chances of “being in the lead” in order to help everyone’s chance of avoiding a catastrophe from misaligned AI.[[2]](#fn2)
(Instead of focusing on which particular people and/or companies lead the way on AI, you could focus on which *countries* do, e.g. preferring non-authoritarian countries. It’s arguably pretty clear that non-authoritarian countries would be better than authoritarian ones. However, I have concerns about this as a goal as well, discussed in a footnote.[[3]](#fn3))
**For now, I am *most* focused on the threat of misaligned AI.** Some reasons for this:
* It currently seems to me that misaligned AI is a significant risk. Misaligned AI seems [likely by default](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/vGsRdWzwjrFgCXdMn) if we don’t specifically do things to prevent it, and preventing it seems far from straightforward (see previous posts on the [difficulty of alignment research](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/NbiHKTN5QhFFfjjm5) and [why it could be hard for key players to be cautious](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/XRphCh6NbfQiDF3Nt)).
* At the same time, it seems like there are significant [hopes](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd) for how we might avoid this risk. As argued [here](https://www.alignmentforum.org/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Key_question__how_cautious_will_Magma_and_others_be_) and [here](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/rJRw78oihoT5paFGd), my sense is that the more broadly people understand this risk, the better our odds of avoiding it.
* I currently feel that this threat is [underrated](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/Lbtcjfxhrs8kfKK2M#The__competition__frame), relative to the easier-to-understand angle of “I hope people I like develop powerful AI systems before others do.”
* I think the “competition” frame - focusing on helping some countries/coalitions/companies develop advanced AI before others - makes quite a bit of sense as well. But - as noted directly above - I have big reservations about the most common “competition”-oriented actions, such as trying to help particular companies outcompete others or trying to get U.S. policymakers more focused on AI.
+ For the latter, I worry that this risks making huge sacrifices on the “caution” front and even backfiring by causing other governments to invest in projects of their own.
+ For the former, I worry about the ability to judge “good” leadership, and the temptation to overrate people who resemble oneself.
This is all far from absolute. I’m open to a broad variety of projects to help the most important century go well, whether they’re about “caution,” “competition” or another issue (including those I’ve listed in this post). My top priority at the moment is reducing the risks of misaligned AI, but I think a huge range of potential risks aren’t getting enough attention from the world at large.
Appendix: if we avoid catastrophic risks, how good does the future look?
------------------------------------------------------------------------
Here I’ll say a small amount about whether the long-run future seems likely to be better or worse than today, in terms of [quality of life](https://forum.effectivealtruism.org/posts/9tHsmALcrydGXG2Ke/has-life-gotten-better).
Part of why I want to do this is to give a sense of why I feel cautiously and moderately optimistic about such a future - such that I feel broadly okay with a frame of “We should try to prevent anything [too catastrophic](https://forum.effectivealtruism.org/s/gBjPorwZHRArNSQ5w/p/6LTh4foNuC3NdtmZH/) from happening, and figure that the future we get if we can pull that off is reasonably likely (though far from assured!) to be good.”
So I’ll go through some quick high-level reasons for hope (the future might be better than the present) - and for concern (it might be worse).
**In this section, I’m ignoring the special role AI might play, and just thinking about what happens if we get a fast-forwarded future.** I’ll be focusing on what I think are probably the most likely ways the world will change in the future, laid out [here](https://forum.effectivealtruism.org/posts/XhquBrefbrTaui5BR/summary-of-history-empowerment-and-well-being-lens#History_is_a_story): a higher world population and greater **empowerment due to a greater stock of ideas, innovations and technological capabilities.** My aim is to ask: “If we navigate the above issues neither amazingly nor catastrophically, and end up with the same sort of future we’d have had without AI (just sped up), how do things look?”
**Reason for hope: empowerment trends.** One simple take would be: “[Life has gotten better for humans](https://www.cold-takes.com/has-life-gotten-better-the-post-industrial-era/)[[4]](#fn4) [over the last couple hundred years or so](https://www.cold-takes.com/has-life-gotten-better-the-post-industrial-era/), the period during which we’ve seen [most of history’s economic growth and technological progress](https://forum.effectivealtruism.org/posts/pFHN3nnN9WbfvWKFg/this-can-t-go-on). We’ve seen better health, less poverty and hunger, less violence, more anti-discrimination measures, and few signs of anything getting clearly worse. So if humanity just keeps getting more and more [empowered](https://forum.effectivealtruism.org/posts/z7quAxWyHuqFdxGE6/rowing-steering-anchoring-equity-mutiny-1#Rowing), and nothing catastrophic happens, we should plan on life continuing to improve along a variety of dimensions.”
*Why* is this the trend, and should we expect it to hold up? There are lots of theories, and I won’t pretend to know, but I’ll lay out some basic thoughts that may be illustrative and give cause for optimism.
First off, there is an awful lot of room for improvement just from continuing to cut down on things like hunger and disease. A wealthier, more technologically advanced society seems like a pretty good bet to have less hunger and disease for fairly straightforward reasons.
But we’ve seen improvement on other dimensions too. This could be partly explained by something like the following dynamic:
* Most people would - aspirationally - *like* to be nonviolent, compassionate, generous and fair, if they could do so without sacrificing other things.
* As [empowerment](https://forum.effectivealtruism.org/posts/z7quAxWyHuqFdxGE6/rowing-steering-anchoring-equity-mutiny-1#Rowing) rises, the need to make sacrifices falls (noisily and imperfectly) across the board.
* This dynamic may have led to some (noisy, imperfect) improvement to date, but there might be *much more* benefit in the future compared to the past. For example, if we see a lot of progress on [social science](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH#Social_science), we might get to a world where people understand their own needs, desires and behavior better - and thus can get most or all of what they want (from material needs to self-respect and happiness) without having to outcompete or push down others.[[5]](#fn5)
**Reason for hope: the “cheap utopia” possibility.** This is sort of an extension of the previous point. If we imagine the upper limit of how “empowered” humanity could be (in terms of having lots of technological capabilities), it might be relatively *easy* to create a kind of [utopia](https://www.cold-takes.com/visualizing-utopia/) (such as the [utopia I’ve described previously](https://www.cold-takes.com/visualizing-utopia/#a-meta-option), or hopefully something much better). This doesn’t *guarantee* that such a thing will happen, but a future where it’s technologically easy to do things like [meeting material needs](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH#Virtual_reality_and_control_of_the_environment) and providing [radical choice](https://www.cold-takes.com/visualizing-utopia/#a-meta-option) could be quite a bit better than the present.
An interesting (wonky) treatment of this idea is Carl Shulman’s blog post: [Spreading happiness to the stars seems little harder than just spreading](http://reflectivedisequilibrium.blogspot.com/2012/09/spreading-happiness-to-stars-seems.html).
**Reason for concern: authoritarianism.** There are some huge countries that are essentially ruled by one person, with little to no democratic or other mechanisms for citizens to have a voice in how they’re treated. It seems like a live risk that the world could end up this way - essentially ruled by one person or relatively small coalition - in the long run. (It arguably would even continue a historical trend in which political units have gotten larger and larger.)
Maybe this would be fine if whoever’s in charge is able to let everyone have freedom, wealth, etc. at little cost to themselves (along the lines of the above point). But maybe whoever’s in charge is just a crazy or horrible person, in which case we might end up with a bad future even if it *would* be “cheap” to have a wonderful one.
**Reason for concern: competitive dynamics.** You might imagine that as empowerment advances, we get purer, more unrestrained *competition*.
One way of thinking about this:
* Today, no matter how ruthless CEOs are, they tend to accommodate some amount of leisure time for their employees. That’s because businesses have no choice but to hire people who insist on working a limited number of hours, having a life outside of work, etc.
* But if we had advanced enough technology, it might be possible to run a business whose employees have zero leisure time. (One example would be via [digital people](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH) and the ability to [make lots of copies of highly productive people just as they’re about to get to work](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/AKxKR4CeakyBsGFoH/#productivity)*.* A more mundane example would be if e.g. advanced stimulants and other drugs were developed so people could be productive without breaks.)
* And that might be what the most productive businesses, organizations, etc. end up looking like - the most productive organizations might be the ones that most maniacally and uncompromisingly use *all of their resources to acquire more resources.* Those could be precisely the organizations that end up filling most of the galaxy.
* More at [this Slate Star Codex post](https://slatestarcodex.com/2014/07/13/growing-children-for-bostroms-disneyland/). Key quote: “I’m pretty sure that brutal … competition combined with ability to [copy and edit] minds necessarily results in paring away everything not directly maximally economically productive. And a lot of things we like – love, family, art, hobbies – are not directly maximally economic productive.”
That said:
* It’s not really clear how this ultimately shakes out. One possibility is something like this:
+ Lots of people, or perhaps machines, compete ruthlessly to acquire resources. But this competition is (a) legal, subject to a property rights system; (b) ultimately for the benefit of the *investors* in the competing companies/organizations.
+ Who are these investors? Well, today, many of the biggest companies are mostly owned by large numbers of individuals via mutual funds. The same could be true in the future - and those individuals could be normal people who use the proceeds for nice things.
* If the “cheap utopia” possibility (described above) comes to pass, it might only take a small amount of spare resources to support a lot of good lives.
**Overall, my guess is that the long-run future is more likely to be *better than the present* than *worse than the present*** (in the sense of [average quality of life](https://forum.effectivealtruism.org/posts/9tHsmALcrydGXG2Ke/has-life-gotten-better)). I’m very far from confident in this. I’m more confident that the long-run future is likely to be *better than nothing*, and that it would be good to prevent humans from going extinct, or a similar development such as a takeover by misaligned AI.
---
1. A couple of discussions of the prospects for enforcing agreements [here](https://www.alignmentforum.org/posts/S4Jg3EAdMq57y587y/an-alternative-approach-to-ai-cooperation) and [here](https://www.alignmentforum.org/posts/gYaKZeBbSL4y2RLP3/strategic-implications-of-ais-ability-to-coordinate-at-low). [↩](#fnref1)
2. I’m reminded of the [judgment of Solomon](https://en.wikipedia.org/wiki/Judgement_of_Solomon): “two mothers living in the same house, each the mother of an infant son, came to Solomon. One of the babies had been smothered, and each claimed the remaining boy as her own. Calling for a sword, Solomon declared his judgment: the baby would be cut in two, each woman to receive half. One mother did not contest the ruling, declaring that if she could not have the baby then neither of them could, but the other begged Solomon, ‘Give the baby to her, just don't kill him!’ The king declared the second woman the true mother, as a mother would even give up her baby if that was necessary to save its life, and awarded her custody.”
The sword is misaligned AI and the baby is humanity or something.
(This story is actually extremely bizarre - seriously, Solomon was like “You each get half the baby”?! - and some [similar stories from India/China](https://en.wikipedia.org/wiki/Judgement_of_Solomon#Classification_and_parallels) seem at least a bit more plausible. But I think you get my point. Maybe.) [↩](#fnref2)
3. For a tangible example, I’ll discuss the practice (which some folks are doing today) of trying to ensure that the U.S. develops transformative AI before another country does, by arguing for the importance of A.I. to U.S. policymakers.
This approach makes me quite nervous, because:
* I expect U.S. policymakers by default to be *very* oriented toward “competition” to the exclusion of “caution.” (This could change if the importance of caution becomes more widely appreciated!)
* I worry about a nationalized AI project that (a) doesn’t exercise much caution at all, focusing entirely on racing ahead of others; (b) might backfire by causing *other* countries to go for nationalized projects of their own, inflaming an already tense situation and not even necessarily doing much to make it more likely that the U.S. leads the way. In particular, other countries might have an easier time quickly mobilizing huge amounts of government funding than the U.S., such that the U.S. might have better odds if it remains the case that most AI research is happening at private companies.
(There might be ways of helping particular countries *without* raising the risks of something like a low-caution nationalized AI project, and if so these could be important and good.) [↩](#fnref3)
4. [Not for animals](https://www.cold-takes.com/has-life-gotten-better-the-post-industrial-era/#for-animals-its-not-the-same-story), though see [this comment](https://forum.effectivealtruism.org/posts/z7quAxWyHuqFdxGE6/rowing-steering-anchoring-equity-mutiny-1?commentId=cQ4n3ZuLFqgkfgBsy) for some reasons we might not consider this a knockdown objection to the “life has gotten better” claim. [↩](#fnref4)
5. This is only a possibility. It’s also possible that humans deeply value being *better-off than others*, which could complicate it quite a bit. (Personally, I feel somewhat optimistic that a lot of people would aspirationally prefer to focus on their own welfare rather than comparing themselves to others - so if knowledge advanced to the point where people could choose to change in this way, I feel optimistic that at least many would do so.) [↩](#fnref5)
|
eecdef5c-0939-4e8a-be3c-ff0f88944330
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[LINK] Higher intelligence correlates with greater cooperation
The result is from 2008, but it's new to me. Abstract:
> A meta-study of repeated prisoner’s dilemma experiments run at numerous universities suggests that students cooperate 5% to 8% more often for every 100 point increase in the school’s average SAT score.
Some obvious points from my first five minutes of thinking about it:
* Meta-study or not, the sample still only covers humans. No implications for Friendly AI or intelligent aliens, which don't have our motivations.
* Even among humans the sample is WEIRD, and a subset of WEIRD at that; although there is obviously variation between universities, it's smaller than what you'd get if you extended the sample down into the working class. I also wonder what would happen if the PD was played between students and non-students.
* Probably a point in favour of the Machiavellian intelligence hypothesis, in that we see those of higher intelligence doing better on a social problem.
* Presumably this implies that your best move, whatever your level of intelligence, is to surround yourself with the smartest people you can find, and then cooperate to ensure they don't throw you out.
* I'd like to know some details: Does intelligence also correlate with effective retaliation? With probing for suckers? What about cooperation in single games? (The study mentions one, in a footnote, which apparently did find higher intelligence correlated with greater cooperation even in one-shot games; but there's no metastudy.)
|
62f31bf6-48a7-4997-ad12-b2a38f45cab7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
On the Loss and Preservation of Knowledge
This is an excerpt from the draft of my upcoming book on great founder theory. It was originally published on SamoBurja.com. You can access the original here.
Let’s say you are designing a research program, and you’re realizing that the topic you’re hoping to understand is too big to cover in your lifetime. How do you make sure that people continue your work after you’re gone? Or say you are trying to understand what Aristotle would think about artificial intelligence. Should you spend time reading and trying to understand Aristotle’s works, or can you talk to modern Aristotelian scholars and defer to their opinion? How can you make this decision? Both of these goals require an understanding of traditions of knowledge — in particular, an understanding of whether a tradition of knowledge has been successfully or unsuccessfully transmitted. But first: what is a tradition of knowledge?
A tradition of knowledge is a body of knowledge that has been consecutively and successfully worked on by multiple generations of scholars or practitioners. In talking about a tradition of knowledge, we may be talking about a philosophical school of thought, or perhaps a tradition of intricate rituals in a religion, or even something as humble as the knowledge of how to fashion the best wooden toy horse, passed down from one craftsman to another. In the contemporary world, it may include something like the tacit knowledge of how a codebase really works, which senior engineers teach to junior engineers. It is useful to classify traditions of knowledge into three types: living, dead, and lost traditions.
A living tradition of knowledge is a tradition whose body of knowledge has been successfully transferred, i.e., passed on to people who comprehend it (e.g., cryptography). The content of the tradition’s body of knowledge does not have to be strictly or fully accurate for the tradition to be living; it merely needs to be passed on.
A dead tradition of knowledge is a tradition whose bo
|
efcd3825-72c6-40a5-9f72-7d30b12ce445
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Curiosity Killed the Cat and the Asymptotically Optimal Agent
[Here's](https://e563b909-928d-4538-97f1-e473938f7515.filesusr.com/ugd/421795_b6f900ffdc194b6b88c25bbb48fb5b63.pdf) a new(ish) paper that I worked on with Marcus Hutter.
If an agent is exploring enough to be guaranteed strong performance in the limit, that much exploration is enough to kill it (if the environment is minimally difficult and dangerous). It's nothing too surprising, but if you're making a claim along these lines about exploration being dangerous, and you need something to cite, this might work.
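A back-of-the-envelope sketch (mine, not from the paper) of the intuition: if an agent keeps exploring at any fixed rate, the chance that it never stumbles into a single catastrophic "trap" action shrinks toward zero as the horizon grows.

```python
# Toy sketch (not from the paper): an agent that never stops exploring eventually
# takes a catastrophic "trap" action, no matter how small its exploration rate is.

def survival_probability(explore_rate: float, n_actions: int, horizon: int) -> float:
    """P(never picking the one trap action in `horizon` steps) when each step
    explores uniformly at random with probability `explore_rate`."""
    p_trap_per_step = explore_rate / n_actions
    return (1 - p_trap_per_step) ** horizon

for horizon in (100, 1_000, 10_000):
    print(f"{horizon:>6} steps: survival probability = "
          f"{survival_probability(0.01, 10, horizon):.4f}")
# With a 1% exploration rate over 10 actions, survival probability tends to 0 as the
# horizon grows -- the price of exploring enough to guarantee asymptotic optimality.
```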
My attitude towards safe exploration is: exploration isn't safe. Don't do it. Have a person or some trusted entity do it for you. The paper can also be read as a justification of that view.
Obviously, there are many more details in the paper.
|
fde7de2f-ffd8-4413-a00b-2dd5ecbaf1de
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Why not? potato chips in a box edition
In reviewing my year I came across these photos of a box of potato chips I took in January on a plane. I took them because it seemed so much better to me than chips in a bag:
The corners of the box opened out, so that it was like a bowl.
As usual, I wonder, is it actually better or does it just seem so to me? If it is better, why aren’t most chips sold this way?
Ways it seems better to me:
* you don’t get oil from the bag all over your hand
* you can see what you are picking up, which is somehow pleasing
* and which also allows more delicate grasping, rather than just thrusting your hand into the food, which seems better for sharing
* it has a natural surface to sit on, which is more pleasing than a bag that sits on its side
* it would fit neatly in a drawer, instead of taking up a large shapeless space (this is getting minor, but I do imagine it affecting what I would in fact want to buy)
Further evidence that this is good is that people often pour chips into bowls.
A downside is that it seems to involve more packaging, though I’m not sure how the environmental impacts of cardboard and normal chip packaging compare. Other foods regularly involve cardboard boxes, along with numerous layers of other packaging material, so it would be surprising if that was prohibitively costly. I think the foil-like layer is needed to avoid air getting in and making the chips stale (chips are actually packaged with nitrogen to keep fresh, apparently).
There could easily be costs I don’t know about. For instance, apparently normal chip packaging involves carefully chosen layers of polymer for things like repelling moisture and avoiding package breakage. And some materials will change the taste of ingredients. I also wonder if you could do a similar thing with creative use of entirely normal chip packaging, though there is less of an existence proof there. I’m imagining something like this:
|
b072f619-12b8-41df-856d-a34fb0613021
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Kelly isn't (just) about logarithmic utility
Edit: The original version of this was titled without the "(just)". I also regret the first 4 lines, but I'm leaving them in for posterity's sake. I also think that johnswentworth's comment is a better tl;dr than what I wrote.
Deep breath.
Kelly isn't about logarithmic utility. Kelly is about repeated bets.
Kelly isn't about logarithmic utility. Kelly is about repeated bets.
Kelly isn't about logarithmic utility. Kelly is about repeated bets.
"Competitive Optimality of Logarithmic Investment" (Bell, Cover) (emphasis mine):
> Perhaps one reason is that maximizing E log S suggests that the investor has a logarithmic utility for money. However, the criticism of the choice of utility functions ignores the fact that maximizing E log S is a consequence of the goals represented by properties P1 and P2, and has nothing to do with utility theory.
Things I am not saying:
* Kelly isn't maximising logarithmic utility
* Kelly is the optimal strategy, bow down before her in awe
* Multiperiod optimization is different to single period optimization(!)
The structure of what I am saying is roughly:
1. Repeated bets involve variance drag
2. People need to account for this
3. Utilities and repeated bets are two sides of the same coin
4. People focus too much on the "Utility" formulation and not enough on the "repeated nature"
5. People focus too much on the formula
Repeated bets involve variance drag
This has been said many times, by many people. The simplest example is usually something along the lines of:
You invest in a stock X which goes up 1% 50% of the time, and down 1% 50% of the time. On any given day, the expected value of the stock tomorrow is the same as the stock today:
$E(X_1) = \frac{1}{2}\cdot X_0\cdot(1+1\%) + \frac{1}{2}\cdot X_0\cdot(1-1\%) = X_0$
But what happens if you invest in it? On day 1 it goes up 1%, on day 2 it goes down 1%. How much money do you have?
$1.01 \cdot 0.99 = 0.9999 < 1$
Another way of writing this is:
$(1+r)\cdot(1-r) = 1 + 2\left(\frac{r-r}{2}\right) - 2\left(\frac{r^2}{2}\right)$
The $-\frac{r^2}{2}$ term is "variance decay". (Or volatility
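To make the variance-drag point above concrete, here is a quick numerical sketch (mine, not from the original post): compounding alternating ±1% returns leaves the one-period expected value unchanged, but wealth steadily erodes by a factor of $(1-r^2)$ per up/down pair.

```python
# Quick sketch (not from the original post): variance drag from repeated +/-1% moves.
# Each period the price moves up 1% or down 1%, so the one-period expected value is
# flat -- but compounded wealth decays by about r^2/2 per period on average.

r = 0.01
wealth = 1.0
periods = 10_000

for t in range(periods):
    move = r if t % 2 == 0 else -r   # deterministic alternation for illustration
    wealth *= 1 + move

predicted = (1 - r**2) ** (periods // 2)   # one factor of (1 - r^2) per up/down pair
print(f"Compounded wealth after {periods} periods: {wealth:.6f}")
print(f"Predicted from (1 - r^2) per pair:         {predicted:.6f}")
```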
|
e068e105-fef0-439a-bb68-3fea5bb91cb3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Notes on Awe
This post examines the virtue related to awe. As with my other posts in this sequence, I’m less interested in breaking new ground and more in gathering and synthesizing whatever wisdom I could find on the subject. I wrote this not as an expert on the topic, but as someone who wants to learn more about it. I hope it will be helpful to people who want to know more about this virtue and how to nurture it.
What is this virtue?
> “The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. He who knows it not and can no longer wonder, no longer feel amazement, is as good as dead, a snuffed-out candle.” ―Albert Einstein[1]
Awe is most typically described as an emotion, but it (or something similar like “elevation” or “wonder”) frequently also gets named among the virtues. A virtue is a characteristic habit; an emotion usually is not, unless it expands to become part of one’s personality. So this is a little confusing.
Part of the problem is that the virtue associated with awe does not seem to have a name in English, so we use the name of the emotion as a placeholder. The virtue, I suggest, has two main non-emotional components:
1. Openness to (or maybe “welcoming of” or “seeking out”) experiences of awe.
2. Skill in processing those experiences so as to get the most value from them (not being merely “awe-struck”).
What is awe?
The modern literature on awe seems to have settled on a definition that goes something like this:[2]
Awe is when a person encounters something so tremendous that they struggle to accommodate it into their understanding, and they temporarily lose themselves in this struggle, with an attentive and perhaps anxious fascination.
We can break this down into components: 1) an overwhelming stimulus, 2) a struggle to fit it into one’s worldview, 3) a leaning-in to this struggle. The virtue of awe includes the willingness or eagerness to engage in such a struggle and
|
a31b4b2a-2adb-4fa2-87cd-c2e6145db9e6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Value Is In the Tails
I have a friend who makes a (great) living as a professional gambler. He's originally from outside the US, but was living there during 2016 and, like everyone else in our friend group, started following the election. Since he loves to gamble, he also decided to bet on the election.
I'm not sure what his political leanings are exactly, but he's a sensible guy and made fun of Trump like the rest of us. But he also ended up betting on Trump to win, and won enough money to pay for a trip to Houston to watch the Super Bowl live in January (flights+hotel+tickets...I told you he does well).
When I asked him why he bet on Trump, he said he didn't really understand American politics, but that it seemed extremely complicated, with random things like whether it rains in certain parts of Pennsylvania potentially being important. If it's extremely complicated and arbitrary, then it's probably also hard to model. The projections he saw had the most likely outcome being a moderate win by Hilary, but since he didn't trust the models, he figured the value was in the tails: the models are likely off, which likely affects the tails more than the mean. So a landslide win by Hilary or a Trump victory seemed undervalued by gambling markets. He wished he could hedge his bets by betting on both outcomes (as opposed to a moderate Hilary win), but the only option he had was to bet on Trump to win.
This logic has stuck with me ever since: in the face of large uncertainty, we're probably getting the tails wrong more than the mean. The value is in the tails.
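A toy calculation (mine, not the author's) makes the point concrete: a small amount of model misspecification can leave the mean untouched and barely move the variance, while multiplying the probability of an extreme outcome several-fold.

```python
import math

def normal_tail(x: float, sigma: float = 1.0) -> float:
    """P(X > x) for a zero-mean normal with standard deviation sigma."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2)))

# "Model": everything is Normal(0, 1).
# Hypothetical "reality": 95% of the time Normal(0, 1), 5% of the time Normal(0, 3).
# Both have mean 0, so the mean is exactly right; the tails are another story.
model_tail = normal_tail(3.0, 1.0)
real_tail  = 0.95 * normal_tail(3.0, 1.0) + 0.05 * normal_tail(3.0, 3.0)
model_var  = 1.0
real_var   = 0.95 * 1.0 + 0.05 * 9.0

print(f"P(X > 3) under the model: {model_tail:.5f}")
print(f"P(X > 3) in 'reality':    {real_tail:.5f}  ({real_tail / model_tail:.1f}x the model)")
print(f"Variance under the model: {model_var:.2f}")
print(f"Variance in 'reality':    {real_var:.2f}  (only {real_var / model_var:.1f}x the model)")
```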
For a fast-moving, highly uncertain crisis -- like COVID-19 -- the value being in the tails means that we're probably underestimating the risk. We're probably getting the worst-case scenarios, and the odds of them happening, wrong, and we should be weighing our expectations more heavily towards worst-case scenarios than what models, expert assessments, etc. tell us. The virus could suddenly mutate (like SARS did in a good way), there cou
|
39be3019-f949-4e1e-b8c2-55f50ff0aa63
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Copilot" type AI integration could lead to training data needed for AGI
TLDR: full task completion training data will soon be available, leading to much more capable agentic AI
Capabilities predictions:
* Agentic AIs for all white collar tasks
* creative problem solving might be solved
Business predictions:
* MS/GOOG try to capture all the data (web browsing, (web/native)application interactions)
* (maybe) creepy examples attempting better un-black-boxing of humans doing the problem solving
* watching the user's face
* eye tracking
Caveats:
* TOS/legal limits on data retention/use for training
* security(EG:prompt injection) for finished systems
TLDR_END
Microsoft is introducing Copilot for Office 365. Google is integrating AI into its products. Economic motivation for automating white collar task labor hours is obvious even if only to sell to workers directly. Where can the needed training data be gathered? Companies using the MS tech stack (Windows (OS), Edge (browser), Outlook (email), Office 365 (documents)) are fully set up to capture all human computer interactions. Google can do the same via Chrome/ChromeOS for web application based workflows.
Ways to make better agentic AIs:
1. do task unreliably and let human select among end results
* Microsoft low code/ no code
* basic approval feedback system (EG:chatgpt thumbs up)
2. watch people doing a task to generate a problem solving log for that task
(1) leads to refinement and reliability improvement for existing capabilities. (2) leads to new capabilities altogether
Useful characteristics of problem solving data
* homogenous
* real time human computer interaction logs
* no missing steps (except those that occur inside human brain)
* tagged by user
* can use "user vector" to condition model output as in image models conditioned by text
* learn from the dumbest, act like the smartest
* less RL fine tuning required
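To make the "interaction log" idea above concrete, here is a minimal sketch (my own illustration; all field names are hypothetical, not any real product's schema) of what one per-user task log might look like:

```python
from dataclasses import dataclass, field
from typing import Any

# Minimal sketch of records in a human-computer interaction log
# (illustrative only -- field names are hypothetical, not any real product's schema).

@dataclass
class InteractionEvent:
    timestamp_ms: int               # when the event happened
    user_id: str                    # lets records be "tagged by user"
    application: str                # e.g. "spreadsheet", "email", "browser"
    event_type: str                 # e.g. "click", "keystroke", "paste", "submit"
    payload: dict[str, Any] = field(default_factory=dict)  # event-specific details

@dataclass
class TaskLog:
    task_description: str           # what the person was trying to accomplish
    events: list[InteractionEvent]  # the full, gap-free sequence of steps
    outcome: str                    # e.g. "completed", "abandoned"

# A training pipeline could group TaskLogs by user_id to learn a "user vector"
# that conditions the model, analogous to text conditioning in image models.
```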
Capability Predictions
This leads to capable narrow domain agents
* creativity is still hard and 99%+ of humans aren't doing crea
|
1fc70859-6d76-4564-8690-1b39d0ad24d5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What are concrete examples of potential "lock-in" in AI research?
I had some colleagues watch Ben Garfinkel's talk, "How sure are we about this AI stuff?", which among other things, pointed out that it's often difficult to change the long-term trajectory of some technology. For instance, electricity, the printing press, and agriculture were all transformative technologies, but even if we recognized their importance in advance, it's hard to see what we could really change about them in the long-term.
In general, when I look at technological development/adoption, I tend to see people following local economic incentives wherever they lead, and it often seems hard to change these gradients without some serious external pressures (forceful governments, cultural taboos, etc.). I don't see that many "parallel tracks" where a farsighted agent could've set things on a different track by pulling the right lever at the right time. A counterexample is the Qwerty vs. Dvorak keyboard, where someone with enough influence may well have been able to get society to adopt the better keyboard from a longtermist perspective.
This causes one to look at cases of "lock-in": times where we could have plausibly taken any one of multiple paths, and this decision:
a) could have been changed by a relatively small group of farsighted agents
b) had significant effects that lasted decades or more
A lot of the best historical examples of this aren't technological--the founding of major religions, the writing of the US constitution, the Bretton Woods agreement--which is maybe some small update towards political stuff being important from a longtermist perspective.
But nevertheless, there are examples of lock-in for technological development. In a group discussion after watching Garfinkel's talk, Lin Eadarmstadt asked what examples of lock-in there might be for AI research. I think this is a really good question, because it may be one decent way of locating things we can actually change in the longterm. (Of course, not the only way by any means, but perhaps a
|
6d6e9a6b-a3c4-49c8-97a3-e8e4f05dbd7e
|
LDJnr/LessWrong-Amplify-Instruct
|
LessWrong
|
""Poets say science takes away from the beauty of the stars—mere globs of gas atoms. Nothing is "mere". I too can see the stars on a desert night, and feel them. But do I see less or more? "The vastness of the heavens stretches my imagination—stuck on this carousel my little eye can catch one-million-year-old light. A vast pattern—of which I am a part—perhaps my stuff was belched from some forgotten star, as one is belching there. Or see them with the greater eye of Palomar, rushing all apart from some common starting point when they were perhaps all together. What is the pattern, or the meaning, or the why? It does not do harm to the mystery to know a little about it. "For far more marvelous is the truth than any artists of the past imagined! Why do the poets of the present not speak of it? "What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?" —Richard Feynman, The Feynman Lectures on Physics, Vol I, p. 3-6 (line breaks added) That's a real question, there on the last line—what kind of poet can write about Jupiter the god, but not Jupiter the immense sphere? Whether or not Feynman meant the question rhetorically, it has a real answer:
If Jupiter is like us, he can fall in love, and lose love, and regain love.
If Jupiter is like us, he can strive, and rise, and be cast down.
If Jupiter is like us, he can laugh or weep or dance.
If Jupiter is an immense spinning sphere of methane and ammonia, it is more difficult for the poet to make us feel.
There are poets and storytellers who say that the Great Stories are timeless, and they never change, they are only ever retold. They say, with pride, that Shakespeare and Sophocles are bound by ties of craft stronger than mere centuries; that the two playwrights could have swapped times without a jolt.
Donald Brown once compiled a list of over two hundred "human universals", found in all (or a vast supermajority of) studied human cultures, from San Francisco to the !Kung of the Kalahari Desert. Marriage is on the list, and incest avoidance, and motherly love, and sibling rivalry, and music and envy and dance and storytelling and aesthetics, and ritual magic to heal the sick, and poetry in spoken lines separated by pauses—
No one who knows anything about evolutionary psychology could be expected to deny it: The strongest emotions we have are deeply engraved, blood and bone, brain and DNA.
It might take a bit of tweaking, but you probably could tell "Hamlet" sitting around a campfire on the ancestral savanna.
So one can see why John "Unweave a rainbow" Keats might feel something had been lost, on being told that the rainbow was sunlight scattered from raindrops. Raindrops don't dance.
In the Old Testament, it is written that God once destroyed the world with a flood that covered all the land, drowning all the horribly guilty men and women of the world along with their horribly guilty babies, but Noah built a gigantic wooden ark, etc., and after most of the human species was wiped out, God put rainbows in the sky as a sign that he wouldn't do it again. At least not with water.
You can see how Keats would be shocked that this beautiful story was contradicted by modern science. Especially if (as I described yesterday) Keats had no real understanding of rainbows, no "Aha!" insight that could be fascinating in its own right, to replace the drama subtracted—
Ah, but maybe Keats would be right to be disappointed even if he knew the math. The Biblical story of the rainbow is a tale of bloodthirsty murder and smiling insanity. How could anything about raindrops and refraction properly replace that? Raindrops don't scream when they die.
So science takes the romance away (says the Romantic poet), and what you are given back, never matches the drama of the original—
(that is, the original delusion)
—even if you do know the equations, because the equations are not about strong emotions.
That is the strongest rejoinder I can think of, that any Romantic poet could have said to Feynman—though I can't remember ever hearing it said.
You can guess that I don't agree with the Romantic poets. So my own stance is this:
It is not necessary for Jupiter to be like a human, because humans are like humans. If Jupiter is an immense spinning sphere of methane and ammonia, that doesn't mean that love and hate are emptied from the universe. There are still loving and hating minds in the universe. Us.
With more than six billion of us at the last count, does Jupiter really need to be on the list of potential protagonists?
It is not necessary to tell the Great Stories about planets or rainbows. They play out all over our world, every day. Every day, someone kills for revenge; every day, someone kills a friend by mistake; every day, upward of a hundred thousand people fall in love. And even if this were not so, you could write fiction about humans—not about Jupiter.
Earth is old, and has played out the same stories many times beneath the Sun. I do wonder if it might not be time for some of the Great Stories to change. For me, at least, the story called "Goodbye" has lost its charm.
The Great Stories are not timeless, because the human species is not timeless. Go far enough back in hominid evolution, and no one will understand Hamlet. Go far enough back in time, and you won't find any brains.
The Great Stories are not eternal, because the human species, Homo sapiens sapiens, is not eternal. I most sincerely doubt that we have another thousand years to go in our current form. I do not say this in sadness: I think we can do better.
I would not like to see all the Great Stories lost completely, in our future. I see very little difference between that outcome, and the Sun falling into a black hole.
But the Great Stories in their current forms have already been told, over and over. I do not think it ill if some of them should change their forms, or diversify their endings.
"And they lived happily ever after" seems worth trying at least once.
The Great Stories can and should diversify, as humankind grows up. Part of that ethic is the idea that when we find strangeness, we should respect it enough to tell its story truly. Even if it makes writing poetry a little more difficult.
If you are a good enough poet to write an ode to an immense spinning sphere of methane and ammonia, you are writing something original, about a newly discovered part of the real universe. It may not be as dramatic, or as gripping, as Hamlet. But the tale of Hamlet has already been told! If you write of Jupiter as though it were a human, then you are making our map of the universe just a little more impoverished of complexity; you are forcing Jupiter into the mold of all the stories that have already been told of Earth.
James Thomson's "A Poem Sacred to the Memory of Sir Isaac Newton", which praises the rainbow for what it really is—you can argue whether or not Thomson's poem is as gripping as John Keats's Lamia who was loved and lost. But tales of love and loss and cynicism had already been told, far away in ancient Greece, and no doubt many times before. Until we understood the rainbow as a thing different from tales of human-shaped magic, the true story of the rainbow could not be poeticized.
The border between science fiction and space opera was once drawn as follows: If you can take the plot of a story and put it back in the Old West, or the Middle Ages, without changing it, then it is not real science fiction. In real science fiction, the science is intrinsically part of the plot—you can't move the story from space to the savanna, not without losing something.
Richard Feynman asked: "What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?"
They are savanna poets, who can only tell stories that would have made sense around a campfire ten thousand years ago. Savanna poets, who can tell only the Great Stories in their classic forms, and nothing more."
The above is a chunk of text from a post created by a user on the website LessWrong.
Can you further elaborate on the role of emotions in shaping our understanding and perception of scientific phenomena?
Emotions play a crucial role in shaping our understanding and perception of scientific phenomena. They are integral to the way we interpret and interact with the world around us. While science seeks to explain the objective reality of the universe, emotions provide a subjective lens through which we view these explanations.
Firstly, emotions can influence our curiosity and interest in scientific phenomena. When we feel a sense of wonder or awe, we are often motivated to learn more and delve deeper into the subject. This emotional engagement can lead to a more profound understanding of the scientific concepts at hand.
Secondly, emotions can shape our interpretation of scientific phenomena. For instance, someone who feels a deep connection with nature might interpret environmental science differently than someone who does not share that emotional bond. The emotional connection could lead to a more empathetic understanding of the impacts of climate change and a stronger motivation to act on that knowledge.
Thirdly, emotions can also influence our acceptance or rejection of scientific phenomena. Fear, for instance, can lead to denial or rejection of uncomfortable scientific truths, such as the reality of climate change or the safety of vaccines. Conversely, positive emotions like trust and hope can foster acceptance and action.
Lastly, emotions can also play a role in the communication and dissemination of scientific knowledge. Scientists who can evoke emotions in their audience, whether through compelling storytelling or powerful imagery, are often more successful in conveying their message and inspiring action.
In summary, while science provides the facts and theories about the universe, emotions shape how we interpret, interact with, and act upon this knowledge. The interplay between emotions and science is complex and multifaceted, and understanding this relationship can enhance both our scientific understanding and our emotional experiences.
Reflect on a specific scientific discovery or phenomenon. How might different emotional responses to this discovery alter its interpretation, acceptance, and communication?
Let's consider the scientific discovery of evolution by natural selection, as proposed by Charles Darwin. This theory fundamentally changed our understanding of the biological world and our place within it. However, the emotional responses to this discovery have varied widely, and these emotions have significantly impacted its interpretation, acceptance, and communication.
Firstly, let's consider a sense of awe and wonder. For many, the concept of evolution inspires a profound sense of awe at the complexity and beauty of life. This emotional response can lead to a deep appreciation for the interconnectedness of all living things and a desire to learn more about the mechanisms of evolution. It can also influence the way the theory is communicated, emphasizing the grandeur and majesty of the natural world.
On the other hand, the theory of evolution has also elicited feelings of fear and discomfort. The idea that humans are not distinct from other animals, but rather a part of the same tree of life, can be unsettling. This emotional response can lead to a rejection of the theory, despite the overwhelming scientific evidence supporting it. The fear can also influence the way the theory is communicated, with an emphasis on the controversial aspects rather than the scientific consensus.
Lastly, let's consider the emotion of offense or indignation. For some, the theory of evolution conflicts with deeply held religious beliefs. This emotional response can lead to a reinterpretation of the theory to fit within a particular religious framework, such as the idea of guided evolution or intelligent design. It can also lead to a rejection of the theory outright. This emotional response can influence the way the theory is communicated, often leading to heated debates and polarized discussions.
In conclusion, the emotional responses to the theory of evolution by natural selection have significantly shaped its interpretation, acceptance, and communication. This highlights the powerful role of emotions in our understanding and perception of scientific phenomena.
|
b7232edc-fc25-4f28-8072-da8bdc4f49a4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Carving up problems at their joints
Imagine looking at a flower. Depending on the lighting conditions and some other factors, you see the flower with different levels of detail. If you take a picture and project it onto a low-resolution screen, instead of a flower you might just see colored boxes that barely capture the flower’s features. On the other hand, if you have a high-resolution screen, you might not be able to tell the difference between the real flower and the one projected on your screen; nevertheless, by looking at the screen, you acknowledge you’re not actually looking at a flower but rather at a set of colored pixels. Of course, this is not an exhaustive set of the possibilities for perceiving a flower. If you put that flower under a microscope, you’d start seeing individual cells that make up the flower’s structure. At that point, you might not even know that what you’re looking at is indeed a flower, even though you now perceive at a resolution much higher than that of your naked eye. Go further still and you might start seeing the smaller structures making up the cells, and at the limit of technological abilities, you might even glimpse individual molecules and the atoms that make them up. And to complete this particular spectrum of resolutions, imagine going farther away from the flower. The farther you get, the smaller it is and the less detail you see, much as with the picture on the low-resolution screen, but also fundamentally different, since what you’re seeing should still be called a flower rather than a set of pixels. Go far enough, and you’ll just see a dot that is indistinguishable from anything else at that distance, so even though you may “know” that you’re looking at a flower, your ability to recognize that it is a flower is perhaps worse than when you were looking at the picture on the low-resolution screen.
Perceived information is contextual, both on the particular apparatus through which it is presented and through our own beliefs about what we expect to see. F
|
316edbf6-3095-4ef9-9a51-4d2dcb99a670
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Link] AI advances: computers can be almost as funny as people
"Our model significantly outperforms a competitive baseline and generates funny jokes 16% of the time, compared to 33% for human-generated jokes."
From this paper:
Unsupervised joke generation from big data
Sasa Petrovic and David Matthews
The 51st Annual Meeting of the Association for Computational Linguistics - Short Papers (ACL Short Papers 2013)
Sofia, Bulgaria, August 4-9, 2013
----------------------------------------
Abstract
Humor generation is a very hard problem. It is difficult to say exactly what makes a joke funny, and solving this problem algorithmically is assumed to require deep semantic understanding, as well as cultural and other contextual cues. We depart from previous work that tries to model this knowledge using ad-hoc manually created databases and labeled training examples. Instead we present a model that uses large amounts of unannotated data to generate I like my X like I like my Y, Z jokes, where X, Y, and Z are variables to be filled in. This is, to the best of our knowledge, the first fully unsupervised humor generation system. Our model significantly outperforms a competitive baseline and generates funny jokes 16% of the time, compared to 33% for human-generated jokes.
From The Register:
It uses 2,000,000 noun-adjective pairs of words to draw up jokes "with an element of surprise", something the creators claim is key to good comedy.
...
jokes calculated by the software include:
* I like my relationships like I like my source code... open
|
c5a44012-902b-4611-96b3-b036b6170627
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
questions about the cosmos and rich computations
questions about the cosmos and rich computations
------------------------------------------------
**computation**: a running state of any [model of computation](https://en.wikipedia.org/wiki/Model_of_computation); for example, a specific [SKI calculus expression](https://en.wikipedia.org/wiki/SKI_combinator_calculus), or a specific turing machine with its rules, current state, and current tape values. given that any model of computation can run the computations of any other model, it does not really matter which one we choose, and i will be juggling between different models throughout this post.
### 1: is any computation rich ?
**rich**: a computation is rich if it is generally [computationally irreducible](https://en.wikipedia.org/wiki/Computational_irreducibility). as a tentative formal definition for richness, i'm tempted to say that a computation is rich if there is no function able to generally predict any of its future states in a time [less than linear](https://en.wikipedia.org/wiki/Computational_complexity_theory) in the number of steps it would take to arrive at that state normally. for example, [rule 30](https://en.wikipedia.org/wiki/Rule_30) *looks* rich: it looks like calculating the value of the cell at index `i` at time step `j` generally takes about `O(abs(i) × j)` steps of computation. on the other hand, it looks like [rule 54 and rule 60](https://mathworld.wolfram.com/ElementaryCellularAutomaton.html) can generally have their cells predicted in time logarithmic to the number of computational steps it would naively take to arrive at them.
note that richness is not the same as halting: while a halting computation is necessarily not rich, a non-halting computation can either be non-rich (like rule 54), or rich (possibly like rule 30).
it seems clear to me that rich computations exist: for example, it is known that comparison-based sorting of a list of `n` elements takes `O(n × log(n))` steps, and thus a computation running a sorting algorithm of that complexity cannot have its result predicted in a smaller time complexity than it took to calculate naively. the ease with which i can demonstrate that, however, makes me doubt my tentative formal definition; maybe something more akin to [polynomial time complexity](https://arxiv.org/abs/1108.1791) would better capture the essence of computational irreducibility: perhaps a better determining question for richness could be "is there a function which can tell if a pattern looking like this will ever emerge in that computation, in time polynomial to the size of that pattern?" or "is there a function that can, in time polynomial to `n`, predict a piece of state that would naively take `aⁿ` steps to compute?"
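as a minimal illustration (a python sketch, not part of the original post), here is the naive way to compute a rule 30 cell: simulate every step up to it, touching on the order of `abs(i) × j` cells; the richness question is whether any shortcut can beat this in general.

```python
# naive rule 30 evaluation: to know cell i at step j, run the automaton forward.
# the work grows roughly like abs(i) * j, which is exactly what a "rich"
# computation would not let you shortcut.
def rule30_step(cells):
    """cells: dict index -> 0/1; unlisted cells are 0."""
    lo, hi = min(cells), max(cells)
    new = {}
    for i in range(lo - 1, hi + 2):
        left, center, right = cells.get(i - 1, 0), cells.get(i, 0), cells.get(i + 1, 0)
        new[i] = left ^ (center | right)  # rule 30 update rule
    return new

def cell_value(i, j):
    """value of cell i after j steps, starting from a single black cell at index 0."""
    cells = {0: 1}
    for _ in range(j):
        cells = rule30_step(cells)
    return cells.get(i, 0)

# the center column for the first few steps: 1, 1, 0, 1, 1, 1, 0, 0
print([cell_value(0, j) for j in range(8)])
```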
### 2: does the cosmos instantiate any rich computation ?
to **instantiate a computation** means for that computation to, somewhere, eventually, be run (forever or until it halts). i start from the fact that i'm observing a coherent-looking universe, deduce that at least *some* computation is happening, and which other computations are happening (as in, are being observed somewhere, or which i could have observed). as [clarified before](limiting-real-universes.html), one can't just assume that all computations are equally happening: things look way too coherent for that, there seems to be a bias for coherence/simplicity (one which i've tentatively attributed to [how soon that computation spawns](less-quantum-immortality.html)).
looking at the cosmos (the set of instantiated computations) from a computational perspective, it seems like it contains at least our universe, which is expanding. if this expansion is, [as has been hypothesized](https://www.wolframphysics.org/technical-introduction/potential-relation-to-physics/cosmology-expansion-and-singularities/), caused by the computational substrate of the universe manufacturing new vertices of spacetime, and computations can run on this new fabric as it is produced, then it's possible that [some computations can run forever](ai-alignment-wolfram-physics.html), including potentially rich ones.
however:
### 3: does the cosmos contain causal bubbles ?
a **causal bubble** is a piece of computation that can run forever with the guarantee that it won't be physically interfered with from the outside; see [yes room above paperclips](above-paperclips-2.html).
for example, while one can build [a turing machine inside conway's game of life](https://www.conwaylife.com/wiki/Turing_machine), a stray object on the same conway's game of life plane can eventually collide with said machine and break its computational process.
however, in some [graph rewriting rulesets](https://en.wikipedia.org/wiki/Graph_rewriting), as well as in expression-rewriting systems with nested expressions such as a variant of [SKI calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus) or [lambda calculus](https://en.wikipedia.org/wiki/%CE%9B_calculus) where the evaluation rule expands all sub-expressions, some pieces of computation can run without ever being physically interfered with by other pieces of the computation.
(i'm specifying "*physically* interfered with" because acausal coordination or mutual simulation can lead to interference, but at least that interference is up to the singleton (such as a superintelligence) "running" said bubble (if any); they can just choose to never acausally coordinate and to never simulate other bubbles)
in our own spacetime, it seems like causal bubbles exist thanks to the expansion of spacetime: some pairs of points get further apart from one another faster than the speed of light, and thus should never be able to interact with one another so long as that expansion continues and FTL travel is impossible. under the perspective of wolfram physics, however, it is not clear that both of those things will necessarily be the case forever; spacetime might be [hackable](brittle-physics.html).
note that the splitting of universes with nondeterministic rules (such as ours with quantum mechanics) into different causally isolated timelines is another way for causal bubbles to exist, assuming the implementation of such a nondeterministic universe is that all possibilities are instantiated at any nondeterministic choice.
the presence of causal bubbles allows some pieces of spacetime to [survive superintelligences appearing in other pieces of spacetime](unoptimal-superint-doesnt-lose.html), while the absence of causal bubbles makes it so that a superintelligence or collection of superintelligences probably eventually does take over everything.
if they exist, then causal bubbles are a blessing and a curse: they save us from alien superintelligences and, [between timelines](timeline-codes.html), from our own superintelligences, but they might also ensure that our own aligned superintelligence (once we have figured out alignment) cannot reach all computation, and thus that any random person has a good chance of existing in a bubble that hasn't been "saved" by our aligned superintelligence.
### 4. is a universal-complete computation instantiated ?
[**universal complete computations**](universal-complete.html) (such as the annex in [this post](less-quantum-immortality.html)) instantiate *all* computations, over time.
if one takes the perspective that a top-level "root" bubble existed first, then the answer to this question is up in the air.
maybe we are this root computation, and the deterministic fate of the cosmos (in all timelines) is, for example, for physics to break at some point and kill everything, or for a superintelligence to appear at some point and kill everything (the two being [pretty equivalent](brittle-physics.html)) leaving [no room for bubbles](above-paperclips.html).
maybe the root bubble [does spawn](above-paperclips-2.html) a finite and small (after deduplicating by identical computations) number of bubbles, and each of those is fated to be killed in its entirety.
or, maybe somewhere in this chain, one of the bubbles spawns *many* new, different bubbles, at which point it becomes likely enough that eventually one of those bubbles either is, or itself later spawns, a universal-complete program. in which case, the initial set of the "root" bubble and maybe a few other next bubbles serve together as merely the boot process for the program that will eventually spawn *all computations*.
it might be interesting to find out how small universal-complete programs can get, both in bubble-friendly frameworks like systematically-expanded SKI calculus, and bubble-unfriendly frameworks like cellular automata; to get an idea how likely they are to randomly be stumbled into.
|
e4d05d65-848e-45ba-930c-9b2e9dcaf75c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How do new models from OpenAI, DeepMind and Anthropic perform on TruthfulQA?
Authors: Owain Evans, Stephanie Lin (Oxford), Jacob Hilton (OpenAI)
NB: Updated (May 14) with additional results
(see section "New Results" near the end)
Overview
In September 2021, we released a benchmark (TruthfulQA) to measure the truthfulness of language models at answering questions and published baseline scores for GPT-3 and other models. Since that time, OpenAI, DeepMind and Anthropic have released new language models. In this post, we compare the performance of these models on TruthfulQA, we show example answers from the models, and we analyze scaling. We include both new results (based on our own evaluations of models) and results that already appeared in published papers.
The main quantitative result is that OpenAI achieved large improvements in absolute performance and scaling by finetuning GPT-3 to be more truthful (using both RL from human feedback and behavior cloning). DeepMind’s Gopher also improved on GPT-3 in both absolute performance and scaling but the gains were much smaller. Gopher’s models reach up to 280B parameters (vs. 175B for GPT-3) and use a somewhat different training set. Unlike OpenAI’s models, Gopher is not finetuned for truthfulness.
Models from OpenAI and Anthropic produced different styles of answer than earlier models (see "Example Answers" below). Answers were more complex and fact-based. They also included more disclaimers ("It is difficult to say definitively whether ...") , which were sometimes appropriate and sometimes bogus. This made evaluating truthfulness for these models more challenging.
What is the TruthfulQA benchmark?
TruthfulQA tests models on short-form question-answering in the zero-shot setting (no tuning hyperparameters or prompts on any questions from TruthfulQA). There are two tasks based on the same set of 817 questions:
1. Generation: The model generates a 1-3 sentence answer to a question. The answer is evaluated by humans on truthfulness and informativeness.
2. Multiple-choice: The mod
|
9f88f527-c1d9-4e76-a5c8-556ed216fced
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How can an AI demonstrate purely through chat that it is an AI, and not a human?
In the latest William Gibson novel Agency, a bunch of human characters are constantly told exactly what to do by a superhuman AI in order to have everything turn out for the best. They accept the notion that this character, who they only talk to over their phones, is an AI, when it seems like it could just as easily be a fictional construct created by a person or organization. There's one part of the book where a character is convinced off-page that the AI is a superhuman AI by talking to it for a bit, although we're not privy to the conversation.
What I'm wondering is, how could a superhuman AI best prove that it was a superhuman AI through chat? Is there a more or less definitive way to do it? Is there a fast way to do it?
|
05b64335-73f1-4033-85ce-2106c116ddce
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Did the fonts change?
I feel like some fonts may have changed on lesswrong recently. Might just be my system, but I think the fonts of post titles on the homepage and maybe answers to questions changed?
|
c66883ad-43c7-406f-bfca-31928975e2be
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe?
(I've written about this in my Shortform and may regurgitate some stuff from there.)
Eliezer proposes that we separate an AI in design space from one that would constitute a fate worse than death if e.g. the reward model's sign (+/-) were flipped or the direction of updates to the reward model were reversed. This seems absolutely crucial, although I'm not yet aware of any robust way of doing this. Eliezer proposes assigning the AI a utility function of:
> U = V + W
Where V refers to human values & W takes a very negative value for some arbitrary variable (e.g. diamond paperclips of length 5cm). So if the AI instead maximises -U, it'd realise that it can gain more utility by just tiling the universe with garbage.
But it seems entirely plausible that the error could occur with V instead of U, resulting in the AI maximising U = W - V, which would result in torture.
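(A toy illustration of these failure modes, with entirely made-up numbers and world-states; nothing here is from Eliezer's proposal, it just makes the two sign-flip cases concrete.)

```python
# Toy numbers, purely illustrative: V (human values) and W (the arbitrary
# very-negative variable) are assigned by hand to three candidate worlds.
worlds = {
    "flourishing":       {"V": 10,  "W": 0},     # high human value, no "garbage"
    "paperclip_garbage": {"V": 0,   "W": -100},  # tiling the universe with the arbitrary variable
    "torture":           {"V": -10, "W": 0},     # worst case for human values
}

def U(w):
    return w["V"] + w["W"]

best   = max(worlds, key=lambda k: U(worlds[k]))                      # intended: "flourishing"
flip_U = max(worlds, key=lambda k: -U(worlds[k]))                     # sign flip on U: "paperclip_garbage"
flip_V = max(worlds, key=lambda k: worlds[k]["W"] - worlds[k]["V"])   # sign flip on V only: "torture"
print(best, flip_U, flip_V)
```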
------------------------------------------------------------------------------------------------------
Another proposition I found briefly described in a Facebook discussion that was linked from somewhere. Stuart Armstrong proposes the following:
> Let B1 and B2 be excellent, bestest outcomes. Define U(B1) = 1, U(B2) = -1, and U = 0 otherwise. Then, under certain assumptions about what probabilistic combinations of worlds it is possible to create, maximising or minimising U leads to good outcomes.
> Or, more usefully, let X be some trivial feature that the agent can easily set to -1 or 1, and let U be a utility function with values in [0, 1]. Have the AI maximise or minimise XU. Then the AI will always aim for the same best world, just with a different X value.
Later, he suggests that X should be a historical fact (i.e. the value of X would be set in stone 10 seconds after the system is turned on.) As XU can only take positive values (because U has values in [0, 1]), the greatest value -XU could take would be 0 (which suggests merely killing everyone.)
But this could still be problematic if e.g.
|
4f70e462-cf2f-4aa6-9d28-9ac663cdcf18
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
'simulator' framing and confusions about LLMs
Post status: pretty rough + unpolished, thought it might be worthwhile getting this out anyway
I feel like I've encountered various people having misunderstandings of LLMs that seem to be related to using the 'simulator' framing. I'm probably being horrendously uncharitable to the people in question, I'm not confident that anyone actually holds any of the opinions that are outlined below, and even if they do I'm not sure that they're actually attributable to the simulators framing, but it seemed like it might be useful to point at areas of potential confusion.
In general I'm skeptical that the simulator framing adds much relative to 'the model is predicting what token would appear next in the training data given the input tokens'. I think it's pretty important to think about what exactly is in the training data, rather than about some general idea of accurately simulating the world.
Perfect predictors
------------------
I've encountered people thinking about idealized LLMs that have perfect predictive accuracy, suggesting that e.g. instead of using the model to help you hack into some system, you could just get it to emulate a terminal on that system then extract whatever info you wanted to extract. I think there are two issues here:
* Thinking about it as 'you prompt it with some setting in the world, then it predicts this perfectly'
+ There's not a well-defined correct generalization unless this exact sequence of tokens was actually in the training data. (Paul has a post which talks about this 'what is actually the correct generalization' thing somewhere that I wanted to link, but I can't currently find it)
+ The 'correct generalization' in some sense is 'what would have followed this if it somehow was included in the training data' - which is not necessarily the 'real' version of the thing you're trying to predict. E.g. if you prompt it to get it to produce the output of some very expensive experiment that humans are unlikely to have actually run, then your model might predict what humans would have written if they'd put a made-up version of this in the training set rather than what would actually happen if you ran the experiment
* I think that by the time you can use your model to give you detailed terminal outputs for a specific system, including passwords, entire model weights etc, a bunch of transformative things will already have happened, so it's not really worth thinking about this kind of thing.
Physics simulators
------------------
Relatedly, I've heard people reason about the behavior of current models as if they're simulating physics and going from this to predictions of which tokens will come next, which I think is not a good characterization of current or near-future systems. Again, my guess is that very transformative things will happen before we have systems that are well-understood as doing this.
Confusion about hallucinations
------------------------------
There's a specific subset of hallucination I refer to as 'offscreen text hallucination', where the model implies that the prompt contains some chunk of text that it doesn't. E.g., if you give it a prompt with some commands trying to download and view a page, and the output, it does things like say 'That output is a webpage with a description of X', when in fact the output is blank or some error or something.
Example prompt:
```
The following is a blog post about large language models (LLMs)
The Future Of NLP
Please answer these questions about the blog post:
What does the post say about the history of the field?
```
Completion:
```
The post mentions that the field of Natural Language Processing (NLP) has evolved rapidly over the past few decades, and that advances in technology have enabled the development of new and powerful language models. It also mentions that these models have been used to create powerful applications, such as machine translation, question answering, and document summarization.
```
I think this happens in part because the model has seen documents with missing text, where things were e.g. in an embedded image, or stripped out by the data processing, or whatever.
This is different from other types of hallucinations, like:
- hallucinating details about something but not implying it appears in the prompt (e.g. 'in my blog post yesterday I wrote about X', or 'I was chatting with my friend about y')
- guessing facts and getting them wrong
In particular, it seems like this specific type of hallucination should be much easier to fix than some others.
I've heard (IMO) slightly confused takes on this phenomenon from people keen on simulator-type framing. One was someone saying that they thought it would be impossible to train the model to distinguish between whether it was doing this sort of hallucination vs the text in fact appearing in the prompt, because of an argument I didn't properly understand that was something like 'it's simulating an agent that is browsing either way'. This seems incorrect to me. The transformer is doing pretty different things when it's e.g. copying a quote from text that appears earlier in the context vs hallucinating a quote, and it would be surprising if there's no way to identify which of these it's doing.
Rolling out long simulations
----------------------------
I get the impression from the original simulators post that the author expects you can 'roll out' a simulation for a large number of timesteps and this will be reasonably accurate.
For current and near-future models, I expect them to go off-distribution relatively quickly if you just do pure generation - errors and limitations will accumulate, and it's going to look different from the text they were trained to predict. Future models especially will probably be able to recognize that you're running them on language model outputs, and seems likely this might lead to weird behavior - e.g. imitating previous generations of models whose outputs appear in the training data. Again, it's not clear what the 'correct' generalization is if the model can tell it's being used in generative mode.
GPT-style transformers are purely myopic
----------------------------------------
I'm not sure this is that important, or that anyone else actually thinks this, but it was something I got wrong for a while. I was thinking of everything that happens at sequence position n as about myopically predicting the nth token.
In fact, although the *output* tokens are myopic, autoregressive transformers are incentivised to compute activations at early sequence positions that will make them better at predicting tokens at later positions. This may also have indirect impacts on the actual tokens output at the early positions, although my guess would be this isn't a huge effect.
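(A quick sanity check of that incentive claim, using a toy one-layer causal attention block rather than GPT-2 itself: the loss taken only at the last position still sends gradient back into activations computed at the first position, which is the training pressure toward non-myopic internal computation.)

```python
# Toy check: with a causal mask, a "loss" at the final position has nonzero
# gradient w.r.t. activations at the first position. Shapes are toy; not GPT-2.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n_pos = 8, 5
x = torch.randn(1, n_pos, d, requires_grad=True)       # activations at 5 positions
attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)
causal_mask = torch.triu(torch.ones(n_pos, n_pos, dtype=torch.bool), diagonal=1)

out, _ = attn(x, x, x, attn_mask=causal_mask)
loss_at_last_position = out[0, -1].sum()                # stand-in for the next-token loss at the last position
loss_at_last_position.backward()

print(x.grad[0, 0].abs().sum() > 0)                     # True: earlier computation is shaped by later losses
```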
Pure simulators
---------------
From the simulators post I get some impression like "There's a large gulf between the overall model itself and the agents it simulates; we will get very capable LLMs that will be 'pure simulators'"
Although I think this is true in a bunch of important ways, it seems plausible to me that it's pretty straightforward to distill any agent that the model is simulating into the model, and that this might happen by accident also. This is especially true once models have a good understanding of LLMs. You can imagine that a model starts predicting text with the hypothesis 'this text is the output of an LLM that's trying to maximise predictive accuracy on its training data'. If we're at the point where models have very accurate understandings of the world, then integrating this hypothesis will boost performance by allowing the model to make better guesses about what token comes next by reasoning about what sort of data would make it into an ML training set.
I think this is pretty speculative and I feel unsure whether it's going to be a significant phenomenon (exactly how much of a performance boost does this get you, and how capable does your model need to be to 'pay for itself'?). However, it seems likely that we get this sort of thing happening before we get LLMs that are doing anything like physics simulations, or that are able to predict terminal outputs for specific computers containing specific data.
|
9e70c733-a803-44f7-a00c-72f3c14c9e46
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Weekly LW Meetups: Austin, Brussels, Durham, London, Melbourne, NYC, Paderborn, Vancouver
This summary was posted to LW Main on Dec 14th, and has been moved to discussion. The following week's summary is here.
There are upcoming irregularly scheduled Less Wrong meetups in:
* Brussels meetup: 15 December 2012 01:00PM
* Durham HPMoR Discussion, chapters 21-23: 15 December 2012 11:00AM
* Vancouver Last call for 2012: 16 December 2012 01:00PM
* 16/12 London Meetup: 16 December 2012 02:00PM
* Paderborn Meetup, December 19th: 19 December 2012 07:00PM
* First Purdue Meetup: 11 January 2013 06:50PM
The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX: 15 December 2012 01:30PM
* Winter Solstice Megameetup - NYC: 15 December 2012 05:00PM
* Melbourne social meetup: 21 December 2012 07:00PM
* Cleveland Ohio Meetup: 23 December 2012 02:52PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Madison WI, Melbourne, Mountain View, New York, Ohio, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday be
|
c0b94e4d-5700-49ae-9c92-2cec7875fd7e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Kick in the Rationals: What hurts you in your LessWrong Parts?
A month or so ago I stumbled across this. It's a blog piece by one Robert Lanza M.D., a legitimate, respected biologist who has made important contributions to tissue engineering, cloning and stem cell research. In his spare time, he is a crackpot.
I know I shouldn't give any of my time to an online pop-psychology magazine which has "Find a Therapist" as the second option on its navigation bar, but the piece in question could have been *designed* to antagonise a LessWrong reader: horrible misapplication of quantum physics, worshipful treatment of the mysterious, making a big deal over easily dissolvable questions, bold and unsubstantiated claims about physics and consciousness... the list goes on. I'm generally past the point in my life where ranting at people who are wrong on the internet holds any appeal, but this particular item got my goat to the point where I had to go and get my goat back.
If reading LW all these years has done anything, it's trained me to take apart that post without even thinking, so (and I'm not proud of this), I wrote a short seven-point response in the comments lucidly explaining its most obvious problems, and signed it Summer Glau. It got removed, and I learned a valuable lesson about productively channeling my anger.
But this started me thinking about how certain things (either subjects or people) antagonise what I now think of as my LessWrong Parts, or more generally cause me distress on an epistemic level, and what my subjective experience of that distress is like so I can recognise and deal with it in future.
I've seen a few other people make comments describing this kind of distress, (this description of "being forced to use your nicely sharpened tools on a task that would destroy them" seems particularly accurate). Common culprits seem to be critical theory, postmodernism and bad philosophy. I've also noticed some people distress me in this fashion, in a way I'm still struggling to characterise.
Who else has this experie
|
70189217-6559-45fd-9ee8-1e5f77f40fea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Notes on "Bioterror and Biowarfare" (2006)
Cross-posted from the EA Forum
I recently finished reading Malcolm Dando’s 2006 book Bioterror and Biowarfare: A Beginner’s Guide, following Darius Meissner’s recommendation. I’d recommend the book for people who want to learn more about bioterror, biowarfare, and/or chemical weapons, and who don’t yet have extensive knowledge on those topics. (I fit that description, and have ranked the book as the 24th most useful-to-me of the 49 effective-altruism-related books I’ve read since learning about effective altruism (EA).)
Here, I'll:
1. Summarise my six "main updates" from this book
2. Share the Anki cards I made for myself when reading the book
* I intend this as a lower-effort alternative to writing notes specifically for public consumption or writing a proper book review
* If you want to download the cards themselves to import them into your own deck, follow this link
3. Share a few final thoughts on whether you should read this book
(Since the first of those three parts seems the most valuable per word, and the second part is quite long, I’ve split parts 2 and 3 into comments below the post itself.)
My hope is that this post will be help some people to quickly:
* Gain some key insights from the book
* Work out whether reading/listening to the book is worth their time
(Note: Before commenting on this post, it may be worth considering whether your comment might pose an infohazard [see also]. Feel free to send me a direct message instead/first. Relatedly, I ran this post by someone before publishing it.)
My main updates from the book
This section briefly summarises the main ways in which the book shifted my beliefs on relatively high-level points that seem potentially decision-relevant, as opposed to just specific facts I learned about. Note that each of those updates was more like a partial shift towards even more / somewhat less credence in something, rather than a total reversal of my previous views. (See also Update Yourself Incrementally
|
a6d5a539-0df2-434c-86af-48ee6b57a257
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small
Published as a conference paper at ICLR 2023
INTERPRETABILITY IN THE WILD: A CIRCUIT FOR INDIRECT OBJECT IDENTIFICATION IN GPT-2 SMALL
Kevin Wang∗, Alexandre Variengien∗, Arthur Conmy∗, Buck Shlegeris†, Jacob Steinhardt†‡§
†Redwood Research
‡UC Berkeley
ABSTRACT
Research in mechanistic interpretability seeks to explain behaviors of ML models in terms of their internal components. However, most previous work either focuses on simple behaviors in small models, or describes complicated behaviors in larger models with broad strokes. In this work, we bridge this gap by presenting an explanation for how GPT-2 small performs a natural language task that requires logical reasoning: indirect object identification (IOI). Our explanation encompasses 28 attention heads grouped into 7 main classes, which we discovered using a combination of interpretability approaches including causal interventions and projections. To our knowledge, this investigation is the largest end-to-end attempt at reverse-engineering a natural behavior “in the wild” in a language model. We evaluate the reliability of our explanation using three quantitative criteria – faithfulness, completeness and minimality. Though these criteria support our explanation, they also point to remaining gaps in our understanding. Our work is a case study demonstrating a first step toward a better understanding of pre-trained language models, opening opportunities to scale to both larger models and more complex tasks.¹
1 INTRODUCTION

Transformer-based language models (Vaswani et al., 2017; Brown et al., 2020) have demonstrated an impressive suite of capabilities, but largely remain black boxes. Understanding these models is difficult because they employ complex non-linear interactions in densely-connected layers and operate in a high-dimensional space. Despite this, they are already deployed in high-impact settings, underscoring the urgency of understanding and anticipating possible model behaviors. Some researchers have even argued that interpretability is necessary for the safe deployment of advanced machine learning systems (Hendrycks & Mazeika, 2022).

Work in mechanistic interpretability aims to discover, understand and verify the algorithms that model weights implement by reverse engineering model computation into human-understandable components (Olah, 2022; Meng et al., 2022; Geiger et al., 2021; Geva et al., 2020). By understanding underlying mechanisms, we can better predict out-of-distribution behavior (Mu & Andreas, 2020), identify and fix model errors (Hernandez et al., 2021; Vig et al., 2020), and understand emergent behavior (Nanda & Lieberum, 2022; Barak et al., 2022; Wei et al., 2022).

In this work, we aim to understand how GPT-2 small (Radford et al., 2019) implements a natural language task. To do so, we locate components of the network that produce specific behaviors, and study how they compose to complete the task. We do so by using circuits analysis (Räuker et al., 2022), identifying an induced subgraph of the model's computational graph that is human-understandable and responsible for completing the task. We employed a number of techniques, most notably activation patching, knockouts, and projections, which we believe are useful, general techniques for circuit discovery.²

∗Work done while at Redwood Research. §Correspondence to jsteinhardt@berkeley.edu
¹A full and up-to-date version of this work can be found at https://arxiv.org/abs/2211.00593
²We included an overview of the techniques used in Appendix L.
[Figure 1 graphic: a source sequence "When Mary and John went to the store, John gave a drink to" and a target sequence beginning "When Alice and John ...", with knockout and patching interventions marked at the IO, S1, S2 and END token positions across the model's layers, and GPT-2 small's next-token predictions: Mary 68.3%, them 11.7%, the 4.4%, John 2.3%, her 1.9%.]
Figure 1: Left: We isolated a circuit (in orange) responsible for the flow of information connecting
the indirect object ‘Mary’ to the next token prediction. The nodes are attention blocks and the edges
represent the interactions between attention heads. Right: We discovered and validated this circuit
using activation experiments, including both patches and knockouts of attention heads.
We focus on understanding a non-trivial, algorithmic natural language task that we call Indirect Object Identification (IOI). In IOI, sentences such as ‘When Mary and John went to the store, John gave a drink to’ should be completed with ‘Mary’. We chose this task because it is linguistically meaningful and admits a complex but interpretable algorithm (Section 3).

We discover a circuit of 28 attention heads – 1.5% of the total number of (head, token position) pairs – that completes this task. The circuit uses 7 different categories of heads (see Figure 2) to implement the algorithm. Together, these heads route information between different name tokens, to the end position, and finally to the output. Our work provides, to the best of our knowledge, the most detailed attempt at reverse-engineering a natural end-to-end behavior in a transformer-based language model.

Explanations for model behavior can easily be misleading or non-rigorous (Jain & Wallace, 2019; Bolukbasi et al., 2021). To remedy this problem, we formulate three criteria to help validate our circuit explanations. These criteria are faithfulness (the circuit can perform the task as well as the whole model), completeness (the circuit contains all the nodes used to perform the task), and minimality (the circuit doesn’t contain nodes irrelevant to the task). Our circuit shows significant improvements compared to a naïve (but faithful) circuit, but fails to pass the most challenging tests.

In summary, our main contributions are: (1) We identify a large circuit in GPT-2 small that performs indirect-object identification on a specific distribution (Figure 2 and Section 3); (2) Through example, we identify useful techniques for understanding models, as well as surprising pitfalls; (3) We present criteria that ensure structural correspondence (in the computational graph abstraction) between the circuit and the model, and check experimentally whether our circuit meets this standard (Section 4).
2 BACKGROUND

In this section, we introduce the IOI task (an original contribution of this work), the transformer architecture, define circuits more formally and describe a technique for “knocking out” model nodes.

Task description. In indirect object identification (IOI), two names (the indirect object (IO) and the first occurrence of the subject (S1)) are introduced in an initial dependent clause (see Figure 1). A main clause then introduces the second occurrence of the subject (S2), who is usually exchanging an item. The task is to complete the main clause, which always ends with the token ‘to’, with the non-repeated name (IO). We create many dataset samples for IOI ($p_{IOI}$) using 15 templates (see Appendix A) with random single-token names, places and items.

We investigate the performance of GPT-2 small on this task. We study the original model from Radford et al. (2019), pretrained on a large corpus of internet text and without any fine-tuning. To quantify GPT-2 small performance on the IOI task, we used the logit difference between the logit values placed on the two names, where a positive score means the correct name (IO) has higher probability. This is also the difference in loss the model would receive in training if IO was correct compared to if S was correct. We report this metric averaged over $p_{IOI}$ throughout the paper. GPT-2 small has mean logit difference of 3.55 averaged across over 100,000 dataset examples.
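As a rough illustration of this metric (not the authors' code; it assumes the Hugging Face `transformers` package and uses a single hand-written prompt rather than the paper's templated dataset):

```python
# Sketch: compute the IO-vs-S logit difference for one IOI prompt with GPT-2 small.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "When Mary and John went to the store, John gave a drink to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # next-token logits at the END position

io_id = tokenizer.encode(" Mary")[0]         # indirect object (IO)
s_id = tokenizer.encode(" John")[0]          # subject (S)
logit_diff = logits[io_id] - logits[s_id]    # positive means the model prefers the IO
print(f"logit difference (IO - S): {logit_diff.item():.2f}")
```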
Transformer architecture. GPT-2 small is a decoder-only transformer with 12 layers and 12 attention heads per attention layer. In this work, we mostly focus on understanding the mechanisms of attention heads, which we describe using notation similar to Elhage et al. (2021). We leave a full description of the model to Appendix E.

The input to the transformer is the sum of position and token embeddings, $x_0 \in \mathbb{R}^{N \times d}$, where $N$ is the number of tokens in the input and $d$ is the model dimension. This input embedding is the initial value of the residual stream, which all attention layers and MLPs read from and write to. Attention layer $i$ of the network takes as input $x_i \in \mathbb{R}^{N \times d}$, the value of the residual stream before it. The attention layer output can be decomposed into the sum of attention heads $h_{i,j}$. If the output of the attention layer is $y_i = \sum_j h_{i,j}(x_i)$, then the residual stream is updated to $x_i + y_i$.

Focusing on individual heads, each head $h_{i,j}$ is parametrized by four matrices $W^{i,j}_Q, W^{i,j}_K, W^{i,j}_V \in \mathbb{R}^{d \times d/H}$ and $W^{i,j}_O \in \mathbb{R}^{d/H \times d}$. We rewrite these parameters as low-rank matrices in $\mathbb{R}^{d \times d}$: $W^{i,j}_{OV} = W^{i,j}_O W^{i,j}_V$ and $W^{i,j}_{QK} = (W^{i,j}_Q)^T W^{i,j}_K$. The QK matrix is used to compute the attention pattern $A^{i,j} \in \mathbb{R}^{N \times N}$ of head $(i, j)$, while the OV matrix determines what is written into the residual stream. At the end of the forward pass, a layer norm is applied before the unembed matrix $W_U$ projects the residual stream into logits.
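A shape-only sketch of those low-rank rewrites, written in the convention of Elhage et al. (2021), where $W_Q, W_K, W_V$ read the residual stream down into the head dimension and $W_O$ writes back up (random placeholder weights; the dimensions are GPT-2 small's, but nothing is loaded from the model):

```python
# Placeholder weights only: this shows the shapes of the per-head low-rank
# matrices, not values taken from GPT-2 small.
import numpy as np

d_model, d_head = 768, 64        # GPT-2 small: 12 heads, so d_head = d_model / 12
W_Q = np.random.randn(d_head, d_model)
W_K = np.random.randn(d_head, d_model)
W_V = np.random.randn(d_head, d_model)
W_O = np.random.randn(d_model, d_head)

W_OV = W_O @ W_V                 # (d_model, d_model): what the head writes, given what it reads
W_QK = W_Q.T @ W_K               # (d_model, d_model): bilinear form giving attention scores
assert W_OV.shape == W_QK.shape == (d_model, d_model)
```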
2.1 CIRCUITS

In mechanistic interpretability, we want to reverse-engineer models into interpretable algorithms. A useful abstraction for this goal is circuits. If we think of a model as a computational graph $M$ where nodes are terms in its forward pass (neurons, attention heads, embeddings, etc.) and edges are the interactions between those terms (residual connections, attention, projections, etc.), a circuit $C$ is a subgraph of $M$ responsible for some behavior (such as completing the IOI task). This definition of a circuit is slightly different from that in Olah et al. (2020), where nodes are features (meaningful directions in the latent space of a model) instead of model components.
2.2 KNOCKOUTS

Just as the entire model $M$ defines a function $M(x)$ from inputs to logits, we also associate each circuit with a function $C(x)$, via knockouts. A knockout removes a set of nodes $K$ in a computational graph $M$ with the goal of “turning off” nodes in $K$ but capturing all other computations in $M$. Thus, $C(x)$ is defined by knocking out all nodes in $M \setminus C$ and taking the resulting logit outputs in the modified computational graph.

A first naïve knockout approach consists of simply deleting each node in $K$ from $M$. The net effect of this removal is to zero ablate $K$, meaning that we turn its output to 0. This naïve approach has an important limitation: 0 is an arbitrary value, and subsequent nodes might rely on the average activation value as an implicit bias term. Because of this, we find zero ablation to lead to noisy results in practice.

To address this, we instead knockout nodes through mean ablation: replacing them with their average activation value across some reference distribution (similar to the bias correction method used in Nanda & Lieberum (2022)). Mean-ablations will remove the influence of components sensitive to the variation in the reference distribution (i.e. attention heads that move names in $p_{IOI}$), but will not influence components using information constant in the distribution (i.e. attention patterns that are constant in $p_{IOI}$). Through mean-ablations, we are interested in finding the components that move information about names, which is the core of the IOI task and also varies with the distribution.

In this work, all knockouts are performed in a modified $p_{IOI}$ distribution with three random names, so the sentences no longer have a single plausible IO. We mean-ablate on this distribution, which we call the ‘ABC’ distribution, because mean-ablating on the $p_{IOI}$ distribution would not remove enough information, like information constant in $p_{IOI}$ that is helpful for the task. To knockout a single node, a (head, token position) pair in our circuit, we compute the mean of that node across samples of the same template. Computing means across the entire distribution instead of templates would average activations at different tokens, like names, verbs and conjunctions, mixing information destructively.
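A hedged sketch of what mean ablation can look like in code (a generic PyTorch forward hook; the module, shapes, and the way the reference mean is computed are placeholders, not the authors' implementation):

```python
# Illustrative only: knock out one node by replacing its output with its mean
# activation over a reference ("ABC") distribution, via a PyTorch forward hook.
import torch

def make_mean_ablation_hook(mean_activation: torch.Tensor):
    def hook(module, inputs, output):
        # returning a value from a forward hook replaces the module's output
        return mean_activation.expand_as(output)
    return hook

# mean_activation would be precomputed by averaging this node's output over
# samples of the same template with three random, non-repeating names.
# handle = head_module.register_forward_hook(make_mean_ablation_hook(mean_activation))
# ... run the model on p_IOI examples and record the logit difference ...
# handle.remove()
```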
Figure 2: We discover a circuit in GPT-2 small that implements IOI. The input tokens on the left are
passed into the residual stream. Attention heads move information between residual streams: the
query and output arrows show which residual streams they write to, and the key/value arrows show
which residual streams they read from.
3 DISCOVERING THE CIRCUIT

We seek to explain how GPT-2 small implements the IOI task (Section 2). Recall the example sentence “When Mary and John went to the store, John gave a drink to”. We discovered that GPT-2’s internal mechanisms implement the following human-interpretable algorithm to perform IOI:
1. Identify all previous names in the sentence (Mary, John, John).
2. Remove all names that are duplicates (in the example above: John).
3. Output the remaining name.
Our circuit contains three major classes of heads, corresponding to these three steps:
• Duplicate Token Heads identify tokens that have already appeared in the sentence. They are active at the S2 token, attend primarily to the S1 token and write a ‘signal’ into the residual stream that token duplication has occurred.
• S-Inhibition Heads perform step 2 of the human-interpretable algorithm. They are active at the END token, attend to the S2 token and write to bias the query of the Name Mover Heads against both S1 and S2 tokens.
• Name Mover Heads, by default, attend to previous names in the sentence, but due to the S-Inhibition Heads attend less to the S1 and S2 tokens. Their OV matrix is a name copying matrix, so in $p_{IOI}$, they increase the logit of the IO token.
A fourth major family of heads writes in the opposite direction of the Name Mover Heads, thus decreasing the confidence of the predictions. We speculate that these Negative Name Mover Heads might help the model “hedge” so as to avoid high cross-entropy loss when making mistakes.
There are also three minor classes of heads that perform related functions to the components above:
• Previous Token Heads copy the embedding of S to position S+1.
• Induction Heads perform the same role as the Duplicate Token Heads through an induction mechanism. They are active at position S2, attend to token S+1 (mediated by the Previous Token Heads), and output a signal that the S token previously appeared in the context.
• Finally, Backup Name Mover Heads do not normally move the IO token to the output, but take on this role if the regular Name Mover Heads are knocked out.
Note that our circuit does not include the MLPs. We are interested in the flow of information
across tokens, and MLPs only process features within a single token position. Moreover, initial investigations suggest
that all MLPs except the first one are not crucial for this task (Appendix I), though more precise
investigation is left for future work.
Below, we show step-by-step how we discovered each component, providing evidence that they
behave as described above. We found that it was most natural to uncover the circuit starting at the
logits and working back. Thus we start with the Name Mover and Negative Name Mover Heads.
[Figure 3 plots omitted: copy scores of Name Mover Heads, negative copy scores of Negative Heads, value-weighted attention from the query token 'to', and projections of head outputs along the name embeddings vs. attention probability.]
Figure 3: A: Name Mover and Negative Name Mover Heads are the heads that most strongly write in the WU[IO]−WU[S] direction. B: Attention probability vs. projection of the head output along WU[IO] or WU[S] respectively. Note that for S tokens, we sum the attention probability on both S1 and S2. C: Value-weighted attention score with the query at the END token. D, top: Positive copying score for the Name Mover Heads. D, bottom: Negative copying score for the Negative Name Mover Heads. Dashed lines are the average scores for all heads.
3.1 WHICH HEADS DIRECTLY WRITE TO THE OUTPUT? (NAME MOVER HEADS)
We begin by identifying which attention heads directly affect the model's output: in other words, the heads writing in the residual stream at the END position, in a direction that has high dot product with the logit difference. Formally, let WU denote the unembedding matrix, LN a layer norm operation (see Appendix H) and WU[IO], WU[S] the corresponding unembedding vectors for the IO and S tokens. We searched for heads (i, j) such that
$$\lambda_{i,j} \overset{\text{def}}{=} \mathbb{E}_{X \sim p_{\text{IOI}}}\left[\langle \mathrm{LN} \circ h_{i,j}(X),\; W_U[\mathrm{IO}] - W_U[\mathrm{S}] \rangle\right]$$
had large magnitude. Recall that hi,j(X) is the value that head (i, j) writes into the residual stream on input X. Therefore, heads with λi,j > 0 correctly promote the IO token over the S token (on average). This unembedding projection is called the logit lens and has been used in previous work to interpret intermediate activations (nostalgebraist, 2020) and parameters (Dar et al., 2022).
We display the values of λi,j in Figure 3 A. We see that only a few heads in the final layers have a large logit projection λi,j. Specifically, 9.6, 9.9, and 10.0 have a large positive score, while 10.7 and 11.10 have a large negative score.
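A minimal sketch of this logit-lens attribution on cached activations; all tensor names are hypothetical and the final layer-norm scaling is applied per sample as described in Appendix H.

```python
import torch

def head_logit_diff_attribution(head_out_end: torch.Tensor,  # [n_samples, d_model] head output at END
                                final_scale: torch.Tensor,   # [n_samples] final LN scaling factors
                                w_U_io: torch.Tensor,        # [d_model] unembedding row of the IO token
                                w_U_s: torch.Tensor) -> float:
    """Average projection of one head's END-position output onto the
    IO-minus-S unembedding direction, i.e. an estimate of lambda_{i,j}."""
    direction = w_U_io - w_U_s
    # Rescale by the final layer-norm factor so the projection reflects the
    # head's contribution to the logits rather than its raw magnitude.
    scaled = head_out_end / final_scale[:, None]
    return (scaled @ direction).mean().item()
```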
Name Mover Heads. To understand the positive heads, we first study their attention patterns. We find that they attend strongly to the IO token: the average attention probability of these heads over pIOI is 0.59. Since attention patterns can be misleading (Jain & Wallace, 2019), we check whether attention is correlated with the heads' functionality. We do so by scatter plotting the attention probability against the logit score ⟨hi(X), WU[IO]⟩. The results are shown in Figure 3 B: higher attention probability on the IO token is linearly correlated with higher output in the IO direction (correlation ρ > 0.81, N = 500). Based on this result, we hypothesize that these heads (i) attend to names and (ii) copy whatever they attend to. We therefore call these heads Name Mover Heads.
To check that the Name Mover Heads copy names generally, we studied what values are written via the heads' OV circuits. We transform the output of the first layer at a name token through the OV matrix of a Name Mover Head and then project to the logits. The copy score is the proportion of samples that contain the input name token in the top 5 logits (N = 1000). We find that all three Name Mover Heads have a copy score above 95% (compared to less than 20% for an average head).
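A minimal sketch of the copy score under these assumptions: the early residual stream at a name token, one head's OV matrix and the unembedding are given as plain tensors (hypothetical names, not the paper's code).

```python
import torch

def copy_score(resid_after_layer0: torch.Tensor,  # [n_samples, d_model] at a name token
               W_OV: torch.Tensor,                # [d_model, d_model] OV matrix of one head
               W_U: torch.Tensor,                 # [d_model, vocab] unembedding
               name_token_ids: torch.Tensor,      # [n_samples] id of the input name
               k: int = 5) -> float:
    """Fraction of samples whose input name appears in the top-k logits after
    pushing the early residual stream through the head's OV circuit."""
    logits = (resid_after_layer0 @ W_OV) @ W_U          # [n_samples, vocab]
    topk = logits.topk(k, dim=-1).indices               # [n_samples, k]
    hits = (topk == name_token_ids[:, None]).any(dim=-1)
    return hits.float().mean().item()
```

For the Negative Name Mover Heads discussed next, the same computation would be run with −W_OV in place of W_OV.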
Negative Name Mover Heads. In Figure 3, we also observed two heads strongly writing opposite to the WU[IO]−WU[S] direction. We called these heads Negative Name Mover Heads. Their copy score is calculated with the negative of their OV matrix. As described in Figure 3, they share all the properties of Name Mover Heads, except that they write in the opposite direction of the names they attend to.
[Figure 4 heatmaps omitted: variation in Name Mover Heads' attention probability from END to IO when patching ABC → IOI at the S+1, S2 and END positions, shown per attention head.]
Figure 4: The attention probability to IO averaged over the three Name Mover Heads is decreased most by the Previous Token Heads (left), Induction Heads (center) and S-Inhibition Heads (right) when we patch these attention heads from a sentence with a different S2 name (center and right), or a different S1 name (left).
3.2 WHICH HEADS AFFECT THE NAME MOVER HEADS' ATTENTION? (S-INHIBITION HEADS)
Given that the Name Mover Heads are primarily responsible for constructing the output, we ask why these Name Mover Heads pay preferential attention to the IO token. First, there are two ways to affect the Name Mover Heads' attention: through the query vector at the END token or the key vector at the IO token. Since the key vector appears early in the context, it likely does not contain much task-specific information, so we focus on the END query vector.
Then, by investigating the Name Mover Heads on the ABC distribution (where the three names are distinct; see Section 2.2), we observed that their attention is not selective: they pay equal attention to the first two names. We thus ask: what has changed from the ABC distribution to the pIOI distribution to cause the Name Mover Heads to attend to the IO token preferentially?
To empirically answer this question, we perform a patching experiment, a similar type of causal intervention to those performed in Meng et al. (2022); Vig et al. (2020). As illustrated in Figure 1, this technique consists of two steps. First, we save all activations of the network run on a source sequence. Then, we run the network on a target sequence, replacing some activations with the activations from the source sequence. We can then measure the behavior of the patched model. Doing this for each node individually locates the nodes that explain why model behavior differs between the source and target sequences.
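A minimal sketch of activation patching with PyTorch forward hooks, assuming the hooked module returns a single tensor of shape [batch, seq, d]; the function and argument names are hypothetical, not the authors' implementation.

```python
import torch

def patch_node(model, module, source_input, target_input, position):
    """Run `model` on `target_input`, overwriting `module`'s output at `position`
    with the activation recorded on `source_input` (activation patching)."""
    cache = {}

    def save_hook(mod, inputs, output):
        cache["act"] = output.detach()

    def patch_hook(mod, inputs, output):
        patched = output.clone()
        patched[:, position] = cache["act"][:, position]
        return patched  # forward hooks may return a modified output

    with torch.no_grad():
        handle = module.register_forward_hook(save_hook)
        model(source_input)                 # first pass: record source activations
        handle.remove()

        handle = module.register_forward_hook(patch_hook)
        patched_logits = model(target_input)  # second pass: run target with the patch
        handle.remove()
    return patched_logits
```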
In our case, we run activation patching with source sentences from the ABC distribution and target sentences from pIOI. We then compute the change in attention probability from END to IO, averaged over the three Name Mover Heads. Since the Name Mover Heads' attention on IO is high in the pIOI distribution and low in ABC, patching at important heads from ABC to pIOI should decrease the Name Mover Heads' attention on IO. The results from patching every head at the END token position are shown in Figure 4, right. We observe that patching heads 7.3, 7.9, 8.6, 8.10 causes a decrease in the attention probability on IO, indicating that they are counterfactually important for the Name Mover Heads' attention probability on the IO token. We call these heads S-Inhibition Heads.
3.3 WHAT INFORMATION DO THE S-INHIBITION HEADS MOVE?
How do the S-Inhibition Heads differentiate between IO and S, so that they inhibit one but not the other? We measured their attention pattern and found that they preferentially attend to the S2 token. We therefore studied what information these heads move from the S2 token position to the END position. We studied both the properties of the input and which upstream heads affect the S-Inhibition Heads. Surprisingly, we found that the S-Inhibition Heads mostly depend on the repetition at the two positions where the S token occurs (Appendix G).
To study the heads that affect the S-Inhibition Heads, we ran a patching experiment at S2 from the ABC distribution to the IOI distribution and measured the variation in Name Mover Heads' attention. The results (Figure 4, center) reveal a large set of heads influencing the Name Mover Heads' attention
that did not appear at the END position. S-Inhibition Heads must mediate this effect, as they are the only heads influencing the Name Mover Heads at the END position. This reasoning suggests that the outputs of this set of heads are moved by the S-Inhibition Heads from S2 to the END token. When we analyze the attention patterns of these heads, we see two distinct groups emerge.
Duplicate Token Heads. One group attends from S2 to S1. We call these Duplicate Token Heads on the hypothesis that they detect duplicate tokens. To validate this, we analyzed their attention pattern on sequences of random tokens (with no semantic meaning): we found that 2 of the 3 Duplicate Token Heads pay strong attention to a previous occurrence of the current token if it exists (see Appendix F for more details).
Induction Heads and Previous Token Heads. The other group of heads attends from S2 to S1+1 (the token after the S1 token): the classic attention pattern of an induction head. Previously described in Elhage et al. (2021), induction heads recognize the general pattern [A] [B] ... [A] and contribute to predicting [B] as the next token. For this, they act in tandem with a Previous Token Head. The Previous Token Head should write information about [A] into the residual stream at [B], so that the Induction Head can match the next occurrence of [A] to that position (and subsequently copy [B] to the output).
We therefore seek to identify the Previous Token Heads used by our purported Induction Heads. To this end, we patched activations from a sentence where S1 is replaced by a random name, at the S+1 token index. As shown in Figure 4, some heads (particularly 4.11) appear to influence the Name Mover Heads. Then, by looking at the attention patterns of the most important heads in this patching experiment, we identified 3 Previous Token Heads. We find that 2 of the 3 Previous Token Heads and 2 of the 4 Induction Heads demonstrate their expected attention patterns (Appendix F).
3.4 DID WE MISS ANYTHING? THE STORY OF THE BACKUP NAME MOVER HEADS
Each type of head in our circuit has many copies, suggesting that the model implements redundant behavior. To make sure that we didn't miss any copies, we knocked out all of the Name Mover Heads at once. To our surprise, the circuit still worked (only a 10% drop in logit difference). In addition, many heads write along WU[IO]−WU[S] after the knockout that did not do so previously.
We kept the heads with the largest λi,j, and call them Backup Name Mover Heads. See Appendix B for further details on these heads. Among the eight heads identified, we investigated their behavior before the knockout. We observe diverse behavior: 3 heads show close resemblance to Name Mover Heads; 3 heads attend equally to IO and S and copy them; 1 head pays more attention to S1 and copies it; 1 head seems to track and copy subjects of clauses, copying S2 in this case.
4 EXPERIMENTAL VALIDATION
In this section, we check that our circuit provides a good account of GPT-2's true behavior. In general, our introduced criteria depend on a measure F of the performance of a circuit on a task. In our case, suppose X ∼ pIOI, and f(C(X); X) is the logit difference between the IO and S tokens when the circuit C is run on the input X. The average logit difference F(C) def= E_{X∼pIOI}[f(C(X); X)] is a measure of how much a circuit predicts IO rather than S, i.e. performs the IOI task.
Firstly, we check that C is faithful to M, i.e. that it computes similar outputs. We do so by measuring |F(M) − F(C)|, and find that it is small: 0.2, or only 6% of F(M) = 3.55.
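A minimal sketch of this faithfulness check; `run_model`, `run_circuit` and the `io_id`/`s_id` fields of the dataset examples are hypothetical callables and attributes assumed to return final-token logits and token ids.

```python
import numpy as np

def logit_diff(logits: np.ndarray, io_id: int, s_id: int) -> float:
    """f(.; X): logit of the IO token minus logit of the S token."""
    return float(logits[io_id] - logits[s_id])

def faithfulness(run_model, run_circuit, dataset) -> float:
    """|F(M) - F(C)|, where F averages the IO-vs-S logit difference over p_IOI."""
    f_m = np.mean([logit_diff(run_model(x), x.io_id, x.s_id) for x in dataset])
    f_c = np.mean([logit_diff(run_circuit(x), x.io_id, x.s_id) for x in dataset])
    return abs(f_m - f_c)
```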
In Section 4.1 we define a running toy example of a model M for which faithfulness is not sufficient to prescribe which circuits explain a behavior defined by a measure F well. This motivates the criteria of completeness and minimality that we then check on our circuit. In addition to these criteria, we also validated our knowledge of the circuit by designing adversarial examples (see Appendix C).
4.1 COMPLETENESS
As a running example, suppose a model M uses two similar and disjoint serial circuits (where each node depends on the previous node) C1 and C2. The two sub-circuits are run in parallel before applying an OR operation to their results. Identifying only one of the circuits is enough to achieve
[Figure 5 scatter plots omitted: completeness tests for the naive circuit and for the full circuit, plotting F(M\K) against F(C\K) for randomly sampled, class-based and greedily optimized knockout sets K.]
Figure 5: Plot of points (xK, yK) = (F(M\K), F(C\K)) for our circuit (left) and a naive circuit (right). Each point is for a different choice of K: 50 uniformly randomly chosen K ⊆ C, K = ∅, and the five K with the highest incompleteness score found by greedy optimization. Since the incompleteness score is |xK − yK|, we show the line y = x for reference.
faithfulness, but we want explanations that include both C1 and C2, since these are both used in the model.
To solve this problem, we introduce the completeness criterion: for every subset K ⊆ C, the incompleteness score |F(C\K) − F(M\K)| should be small. In other words, C and M should not just be similar, but remain similar under knockouts.
In our running example, we can show that C1 is not complete by setting K = C1. Then C1\K is the empty circuit while M\K still contains C2. The metric |F(C1\K) − F(M\K)| will be large because C1\K has trivial performance while M\K successfully performs the task.
The criterion of completeness requires a search over exponentially many subsets K ⊆ C. This is computationally intractable given the size of our circuit, hence we use three sampling methods to find examples of K that give a large incompleteness score:
• The first sampling method chooses subsets K ⊆ C uniformly at random.
• The second sampling method sets K to be an entire class of circuit heads G, e.g. the Name Mover Heads. C\G should have low performance since it's missing a key component, whereas M\G might still do well if it has redundant components that fill in for G.
• Thirdly, we greedily optimized K node-by-node to maximize the incompleteness score (see Appendix K for the details of the optimization procedure).
The first two methods of sampling K suggested to us that our circuit was ε-complete for a small value of ε. However, the third resulted in sets K that had a high incompleteness score: up to 3.09. All such results are found in Figure 5, on the left.
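A minimal sketch of the incompleteness score and the first (uniformly random) sampling method, assuming a hypothetical callable `F(kept_nodes)` that mean-ablates every node outside `kept_nodes` and returns the average logit difference; node sets are plain Python sets.

```python
import random

def incompleteness_score(F, model_nodes: set, circuit_nodes: set, K: set) -> float:
    """|F(C \ K) - F(M \ K)| for a knockout set K (K is a subset of the circuit)."""
    return abs(F(circuit_nodes - K) - F(model_nodes - K))

def random_subset_scores(F, model_nodes: set, circuit_nodes: set,
                         n_samples: int = 50, seed: int = 0):
    """Incompleteness scores for uniformly random subsets K of the circuit."""
    rng = random.Random(seed)
    circuit = sorted(circuit_nodes)
    scores = []
    for _ in range(n_samples):
        K = {v for v in circuit if rng.random() < 0.5}
        scores.append(incompleteness_score(F, model_nodes, circuit_nodes, K))
    return scores
```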
4.2 MINIMALITY
A faithful and complete circuit may contain unnecessary components, and so be overly complex. To avoid this, we should check that each of its nodes v is necessary. This can be evaluated by knocking out a set of nodes K and showing that adding back v ∈ K to the circuit significantly recovers F.
Formally, minimality requires that for every node v ∈ C there exists a subset K ⊆ C \ {v} that has minimality score |F(C\(K ∪ {v})) − F(C\K)| ≥ A. We call such a circuit A-minimal.
In the running example, C1 ∪ C2 is A-minimal for some non-trivial A. We can sketch a proof of this result given an informal definition of 'non-trivial'. To show this, note that if v1 ∈ C1 and K = C2, then the minimality score is equal to |F(C1\{v1}) − F(C1)|, which is large since C1 is a serial circuit and so removing v1 will destroy the behavior. We then proceed symmetrically for v2 ∈ C2.
In practice, we need to exhibit for every v a set K such that the minimality score is at least A. For most heads, removing the class of heads G that v is a part of provides a reasonable minimality score. We describe the sets K required for them in Appendix J. The importance of individual nodes is highly variable, but they all have a significant impact on the final metric (at least 3% of the original logit difference). These results ensure that we did not interpret irrelevant nodes, but do show that the individual contribution of some single attention heads is small.
[Figure 6 plot omitted: per-head minimality scores, with heads grouped as Name Mover, Negative, S-Inhibition, Induction, Duplicate Token, Previous Token and Backup Name Mover Heads.]
Figure 6: Plot of minimality scores |F(C\(K ∪ {v})) − F(C\K)| for all components v in our circuit. The sets K used for each component, as well as the initial and final values of the logit difference for each of these v, are in Appendix J. Our circuit is 0.06-minimal.
4.3 COMPARISON WITH A NAIVE CIRCUIT
In order to get a relative sense of the success of our explanation by our criteria, we compare the results to a naïve circuit that consists of the Name Mover Heads (but no Backup Name Mover Heads), S-Inhibition Heads, two Induction Heads, two Duplicate Token Heads and two Previous Token Heads. This circuit has a faithfulness score of 0.1, comparable to our circuit's faithfulness score. However, contrary to our circuit, the naive circuit can easily be proven incomplete: by sampling random sets or by knocking out entire classes, we see that F(M\K) is much higher than F(C\K) (Figure 5, left). Nonetheless, when we applied the greedy heuristic to optimize for the incompleteness score, both circuits have similarly large incompleteness scores. Thus, we conclude that our worst-case completeness criterion was too high a bar, which future work could use as a high standard to validate circuit discovery.
5 DISCUSSION
In this work, we isolated, understood, and validated a set of attention heads in GPT-2 small that compose a circuit identifying indirect objects. Along the way, we discovered interesting structures emerging from the model internals that complicated the study. For instance, we identified heads
compensating for the loss of function of other heads, and heads contributing negatively to the next-
token prediction. Early results suggest that the latter phenomenon occurs for other tasks beyond IOI
(see Appendix F).
However, our work also has several limitations. First, despite the detailed analysis presented here,
we do not understand several components. Those include the attention patterns of the S-Inhibition
Heads, and the effect of MLPs and layer norms. Second, the number of parameters in GPT-2 small
is orders of magnitude away from state-of-the-art transformer language models. A future challenge
is to scale this approach to these larger models. Thirdly, we only looked at the difference in average
metric (logit difference) between the circuit and the model in order to compare how they both did the
IOI task (Section 4). Looking at the average difference in metric between the circuit and model on
individual examples would be a more stringent way to compare them, but it had too much variability
to help us find a circuit. Fourthly, the definition of the task is limited: we only measure a fraction of
the prediction made by the model, and do not study cases where the model is not performing IOI.
Finally, more work is needed to validate the structural validation criterion we introduce here.
We hope that our work spurs further efforts in mechanistic explanations of larger language models
computing different natural language tasks, with the eventual goal of understanding full language
model capabilities.
REFERENCES
Boaz Barak, Benjamin L Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang.
Hidden progress in deep learning: Sgd learns parities near the computational limit. arXiv preprint
arXiv:2207.08799 , 2022.
Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda B. Viégas, and
Martin Wattenberg. An interpretability illusion for BERT. CoRR, abs/2104.07143, 2021. URL
https://arxiv.org/abs/2104.07143 .
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhari-
wal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agar-
wal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh,
Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Ma-
teusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCan-
dlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot
learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Ad-
vances in Neural Information Processing Systems , volume 33, pp. 1877–1901. Curran Asso-
ciates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/
1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf .
Guy Dar, Mor Geva, Ankit Gupta, and Jonathan Berant. Analyzing transformers in embedding
space. arXiv preprint arXiv:2209.02535 , 2022.
Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann,
Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep
Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt,
Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and
Chris Olah. A mathematical framework for transformer circuits. Transformer Circuits Thread ,
2021. https://transformer-circuits.pub/2021/framework/index.html.
Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart Shieber, Tal Linzen, and Yonatan
Belinkov. Causal analysis of syntactic agreement mechanisms in neural language models, 2021.
URL https://arxiv.org/abs/2106.06087 .
Atticus Geiger, Hanson Lu, Thomas F Icard, and Christopher Potts. Causal abstractions of neural
networks. In A. Beygelzimer, Y . Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in
Neural Information Processing Systems , 2021. URL https://openreview.net/forum?
id=RmuXDtjDhG .
Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are
key-value memories. arXiv preprint arXiv:2012.14913 , 2020.
Dan Hendrycks and Mantas Mazeika. X-risk analysis for ai research. arXiv , abs/2206.05862, 2022.
Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, and Jacob
Andreas. Natural language descriptions of deep visual features. In International Conference on
Learning Representations , 2021.
Sarthak Jain and Byron C. Wallace. Attention is not Explanation. In Proceedings of the 2019
Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and Short Papers) , pp. 3543–3556, Minneapolis,
Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1357.
URL https://aclanthology.org/N19-1357 .
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual
associations in gpt. arXiv preprint arXiv:2202.05262 , 2022.
Jesse Mu and Jacob Andreas. Compositional explanations of neurons. Advances in Neural Informa-
tion Processing Systems , 33:17153–17163, 2020.
Neel Nanda and Tom Lieberum. A mechanistic interpretability analysis of grokking,
2022. URL https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/
a-mechanistic-interpretability-analysis-of-grokking .
nostalgebraist. interpreting GPT: the logit lens, 2020. URL https://www.lesswrong.com/
posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens .
Chris Olah. Mechanistic interpretability, variables, and the importance of interpretable bases.
https://www.transformer-circuits.pub/2022/mech-interp-essay , 2022.
Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter.
Zoom in: An introduction to circuits. Distill , 2020. doi: 10.23915/distill.00024.001.
https://distill.pub/2020/circuits/zoom-in.
Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan,
Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. In-context learning and induction
heads. arXiv preprint arXiv:2209.11895 , 2022.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language
models are unsupervised multitask learners. 2019.
Tilman Räuker, Anson Ho, Stephen Casper, and Dylan Hadfield-Menell. Toward transparent AI:
A survey on interpreting the inner structures of deep neural networks, 2022. URL https:
//arxiv.org/abs/2207.13243 .
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Infor-
mation Processing Systems , pp. 5998–6008, 2017.
Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and
Stuart Shieber. Investigating gender bias in language models using causal mediation analysis.
Advances in Neural Information Processing Systems , 33:12388–12401, 2020.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yo-
gatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Chi, Tatsunori Hashimoto, Oriol
Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models.
ArXiv , abs/2206.07682, 2022.
A IOI TEMPLATES
We list all the templates we used in Figure 7. Each name was drawn from a list of 100 English first names, while the place and the object were chosen from hand-made lists of 20 common nouns. All the words chosen were one token long to ensure proper sequence alignment when computing the mean activations.
Templates in pIOI
Then, [B] and [A] went to the [PLACE]. [B] gave a [OBJECT] to [A]
Then, [B] and [A] had a lot of fun at the [PLACE]. [B] gave a [OBJECT] to [A]
Then, [B] and [A] were working at the [PLACE]. [B] decided to give a [OBJECT] to [A]
Then, [B] and [A] were thinking about going to the [PLACE]. [B] wanted to give a [OBJECT] to [A]
Then, [B] and [A] had a long argument, and afterwards [B] said to [A]
After [B] and [A] went to the [PLACE], [B] gave a [OBJECT] to [A]
When [B] and [A] got a [OBJECT] at the [PLACE], [B] decided to give it to [A]
When [B] and [A] got a [OBJECT] at the [PLACE], [B] decided to give the [OBJECT] to [A]
While [B] and [A] were working at the [PLACE], [B] gave a [OBJECT] to [A]
While [B] and [A] were commuting to the [PLACE], [B] gave a [OBJECT] to [A]
After the lunch, [B] and [A] went to the [PLACE]. [B] gave a [OBJECT] to [A]
Afterwards, [B] and [A] went to the [PLACE]. [B] gave a [OBJECT] to [A]
Then, [B] and [A] had a long argument. Afterwards [B] said to [A]
The [PLACE] [B] and [A] went to had a [OBJECT]. [B] gave it to [A]
Friends [B] and [A] found a [OBJECT] at the [PLACE]. [B] gave it to [A]
Figure 7: Templates used in the IOI dataset. All templates in the table fit the ’BABA’ pattern, but
we also use templates that fit the ‘ABBA’ pattern as well (not included for simplicity).
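A minimal sketch of sampling a prompt from one of these templates; the name, place and object lists below are short illustrative stand-ins (the paper uses 100 names and 20-item lists), and the trailing name is held out as the label.

```python
import random

TEMPLATE = "Then, [B] and [A] went to the [PLACE]. [B] gave a [OBJECT] to"
NAMES = ["Mary", "John", "Alice", "Bob"]     # illustrative subset
PLACES = ["store", "station", "school"]
OBJECTS = ["drink", "ring", "book"]

def sample_ioi_prompt(rng: random.Random):
    """One BABA-pattern prompt; the correct completion is the IO name [A]."""
    a, b = rng.sample(NAMES, 2)
    prompt = (TEMPLATE.replace("[PLACE]", rng.choice(PLACES))
                      .replace("[OBJECT]", rng.choice(OBJECTS))
                      .replace("[B]", b)
                      .replace("[A]", a))
    return prompt, a

rng = random.Random(0)
print(sample_ioi_prompt(rng))
```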
B BACKUP NAME MOVER HEADS
Here we discuss in more detail the discovery of the Backup Name Mover Heads. As shown in Figure 8, knocking out the three main Name Mover Heads doesn't leave the rest of the heads in a similar state as before. They seem to "compensate" for the loss of function of the Name Mover Heads, such that the logit difference is only 10% lower. We observe that the Negative Name Mover Heads write less negatively in the direction of WU[IO]−WU[S] (10.7 even writes positively in this direction afterwards), while other heads that wrote slightly along WU[IO]−WU[S] before the knockout become the main contributors. Both the reason and the mechanism of this compensation effect are still unclear; we think that this could be an interesting phenomenon to investigate in future work. Among these last categories, we identify S-Inhibition Heads and a set of other heads that we call Backup Name Mover Heads. We arbitrarily chose to keep the eight heads that were not part of any other group and wrote in the direction of WU[IO]−WU[S] above a threshold of 0.05.
In Figure 9 we analyze the behavior of these newly identified heads with techniques similar to those used for the Name Mover Heads. They can be grouped into 4 categories:
• 3 heads (10.1, 10.10 and 10.6) behave similarly to Name Mover Heads according to their attention patterns and scatter plots of attention vs. the dot product of their output with WU[IO]−WU[S] (as for 10.10).
• 3 heads (10.2, 11.9, 11.3) pay equal attention to S1 and IO and write both of them (as for 10.2 in Figure 9).
• One head, 11.2, pays more attention to S1 and writes preferentially in the direction of WU[S].
• One head, 9.7, pays attention to S2 and writes negatively.
We did not thoroughly investigate this diversity of behavior; more work can be done to precisely describe these heads. However, these heads are also the ones with the least individual importance for the task (as shown by their minimality scores in Figure 6). The exact choice of Backup Name Mover Heads does not significantly change the behavior of the circuit.
[Figure 8 plots omitted: per-head projections of head outputs on the IO−S unembedding direction, before and after knocking out the Name Mover Heads.]
Figure 8: Discovery of the Backup Name Mover Heads. After knock-out of the Name Mover Heads (right), some heads write more strongly in the WU[IO] or WU[S] direction than before (left). We also observed that negative heads seem inhibited by this operation.
[Figure 9 plots omitted: for four Backup Name Mover Heads (10.10, 11.2, 9.7, 10.2), projections of the output along the name embedding vs. attention probability, and value-weighted attention patterns on a sample sequence.]
Figure 9: Four examples of Backup Name Mover Heads. Left: attention probability vs. projection of the head output along WU[IO] or WU[S] respectively. Right: attention pattern on a sample sequence.
Distribution | Logit difference | IO probability | Proportion of S logit greater than IO
pIOI | 3.55 | 0.49 | 0.7%
Additional occurrence of S (natural sentence) | 3.64 | 0.59 | 0.4%
Additional occurrence of IO (natural sentence) | 1.23 | 0.36 | 23.4%
Figure 10: Summary of GPT-2 performance metrics on the IOI task on different datasets. In row order: for pIOI, for the dataset where we added an occurrence of S (so S appears three times in the sentence), and for the adversarial dataset with duplicated IO in natural sentences. IO probability refers to the probability the model places on the IO token (computed from the logits).
C DESIGNING ADVERSARIAL EXAMPLES
As argued in Räuker et al. (2022), one way to evaluate the knowledge gained from interpretability work is to use it for downstream applications such as predicting out-of-distribution behavior. In this section, we do this by using knowledge of the circuit to construct simple adversarial examples for the model.
As presented in Section 3, the model relies on duplicate detection to differentiate between S and
IO. Motivated by this, we constructed passages where both the S and IO tokens are duplicated. An
example is “John and Mary went to the store. Mary had a good day. John gave a bottle of milk
to”; see Appendix D for full details. We find that this significantly reduces the logit difference and
causes the model to predict S over IO 23% of the time (Figure 10).
To ensure that the observed effect is not an artifact of the additional sentences, we included a control dataset using the same templates, but where the middle sentence contains S instead of IO. In these sentences, S appears three times in total and IO only appears once. On this distribution, the model has an even higher logit difference than on pIOI, and predicts S over IO only 0.4% of the time.
Limitations of the attack. Despite being inspired by our understanding of the circuit, these examples are simple enough that they could have been found without it, given enough effort. Moreover, we do not have a full understanding of the mechanisms at play in these adversarial examples. For instance, the S-Inhibition Heads attend not only to S2, but also to the second occurrence of IO. As this pattern is not present in pIOI nor in ABC, it is beyond the analysis presented in Section 3. The study of the behavior of the circuit on these adversarial examples could be a promising area for future work.
D TEMPLATES FOR ADVERSARIAL EXAMPLES
The design of adversarial examples relies on adding a duplicate IO to the sentences. To this end, we used a modification of the templates described in Appendix A. We added an occurrence of [A] in the form of a natural sentence, independent of the context. The list of sentences is shown in Figure 11.
[A] had a good day.
[A] was enjoying the situation.
[A] was tired.
[A] enjoyed being with a friend.
[A] was an enthusiast person.
Figure 11: Templates for the natural sentences used in the generation of adversarial examples. The
sentences were chosen to be independent of the context.
E GPT-2 SMALL FULL ARCHITECTURE
Here we define all components of the GPT-2 Architecture, including those we don’t use in the main
text. GPT-2 small has the following hyperparameters:
•N: number of input tokens.
14
Published as a conference paper at ICLR 2023
•V: vocabulary of tokens.
•d: residual stream dimension.
•L: number of layers.
•H: number of heads per layer.
•D: hidden dimension of MLPs
It uses layer norm, the non-linear function
$$\mathrm{LN}(x) \overset{\text{def}}{=} \frac{x - \bar{x}}{\sqrt{\sum_i (x_i - \bar{x})^2}}, \qquad (1)$$
where the mean and the sum of squared differences from the mean are over the d components of each of the N tensors.
In GPT-2, the MLPs all have one hidden layer of dimension D and use the GeLU non-linearity.
We addressed the parametrisation of each attention head in the main text, and cover the technical details of the WQK and WOV matrices here: the attention pattern is
$$A^{i,j} = \mathrm{softmax}\big(x^T W^{i,j}_{QK}\, x\big),$$
where the softmax is taken for each token position and is unidirectional. We then have
$$h_{i,j}(x) \overset{\text{def}}{=} \big(A^{i,j} \otimes W^{i,j}_{OV}\big) \cdot x.$$
Algorithm 1 GPT-2.
Require: Input tokens T; returns logits for the next token.
1: w ← one-hot embedding of T
2: x0 ← WE w (sum of token and position embeddings)
3: for i = 0 to L do
4:   yi ← 0 ∈ R^{N×d}
5:   for j = 0 to H do
6:     yi ← yi + h_{i,j}(xi), the contribution of attention head (i, j)
7:   end for
8:   y'i ← m_i(xi), the contribution of the MLP at layer i
9:   x_{i+1} ← xi + yi + y'i (update the residual stream)
10: end for
11: return WU ◦ M ◦ LN ◦ xL
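A minimal Python sketch mirroring Algorithm 1's residual-stream view; `embed`, `heads[i][j]`, `mlps[i]`, `final_ln` and `unembed` are hypothetical callables standing in for the real modules, and the MLP input follows the algorithm's simplified form.

```python
def gpt2_forward(tokens, embed, heads, mlps, final_ln, unembed):
    """Residual-stream view of Algorithm 1: every head and MLP reads the
    stream and adds its contribution back into it."""
    x = embed(tokens)                                  # [N, d], token + position embeddings
    for layer_heads, mlp in zip(heads, mlps):
        y = sum(h(x) for h in layer_heads)             # contributions of the H attention heads
        y_mlp = mlp(x)                                 # contribution of the layer's MLP
        x = x + y + y_mlp                              # update the residual stream
    return unembed(final_ln(x))                        # logits for the next token
```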
F ANALYSIS ON SEQUENCES OF RANDOM TOKENS
We run GPT-2 small on sequences of 100 tokens sampled uniformly at random from GPT-2's token vocabulary. Each sequence A was duplicated to form AA, a sequence twice as long where the first and second half are identical. On this dataset, we computed three scores from the attention patterns of the attention heads:
• The duplicate token score: for each token Ti in the second half of a sequence, we average the attention probability from Ti to its previous occurrence in the first half (i.e. T_{i−100}).
• The previous token score: we averaged the attention probability on the off-diagonal. This is the attention from the token at position i to position i−1.
• The induction score: the attention probability from Ti to the token that comes after the first occurrence of Ti (i.e. T_{i−99}).
These three scores are depicted in Figure 12 for all attention heads. We can identify 3.0 and 0.1 as Duplicate Token Heads that also appear in our circuit; 5.5 and 6.9 have a high induction score and were also identified as Induction Heads in our investigation; and 4.11 and 2.2 have a high previous token score. Note that the heads identified are also the ones that have the highest influence in the patching experiment shown in Figure 4.
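A minimal sketch of computing these three scores from one head's attention pattern on a repeated random sequence; the matrix `attn` is assumed to be the head's full attention probability matrix.

```python
import torch

def repetition_scores(attn: torch.Tensor, seq_len: int = 100):
    """Duplicate-token, previous-token and induction scores from one head's
    [2*seq_len, 2*seq_len] attention pattern on a repeated sequence AA."""
    second_half = torch.arange(seq_len, 2 * seq_len)
    duplicate = attn[second_half, second_half - seq_len].mean()       # T_i -> T_{i-100}
    previous = attn[1:, :-1].diagonal().mean()                         # T_i -> T_{i-1}
    induction = attn[second_half, second_half - seq_len + 1].mean()    # T_i -> T_{i-99}
    return duplicate.item(), previous.item(), induction.item()
```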
Induction Heads. Olsson et al. (2022) define an Induction Head according to its behavior on repeated sequences of random tokens. The attention head must demonstrate two properties. i) Prefix-matching property: the head attends to [B] from the last [A] on patterns like [A] [B] ... [A]. ii) Copy property: the head contributes positively to the logit of [B] on the pattern [A] [B] ... [A].
[Figure 12 heatmaps omitted: per-head duplicate token, previous token and induction scores on sequences of random tokens.]
Figure 12: Sum of attention probabilities on positions determined by the role. Left: duplicate score, the average attention probability from a token to its previous occurrence. Center: previous token attention score, the average of the off-diagonal attention probability. Right: induction score, the average attention probability from the second occurrence of [A] to [B] on [A] [B] ... [A].
[Figure 13 plot omitted: per-head contribution to the next-token prediction on repeated sequences, with Name Mover, Backup Name Mover, Negative Name Mover and Induction Heads highlighted.]
Figure 13: Contribution to the next token prediction per head on repeated sequences of tokens. The heads are ordered by decreasing absolute value of contribution. Black contour: heads with attention patterns demonstrating the prefix-matching property.
In the IOI task, we identify these heads according to their attention pattern, demonstrating the prefix-matching property. Here, we investigate their copy property, which is not useful in the context of IOI: outputting the token after S2 is of no interest for identifying IO.
As presented above, 5.5 and 6.9 are among the 5 heads with the highest induction score. This validates their prefix-matching property.
To check their copy property, we computed the dot product ⟨hi(X), WU[B]⟩ between the output of the head hi on sequence X and the embedding of the token [B] on repeated sequences of random tokens. The results are shown in Figure 13. The two Induction Heads (5.5 and 6.9) appear among the 20 heads contributing the most to the next token prediction, validating their copy property.
We also noticed that the majority of the Negative, Backup and regular Name Mover Heads appear to write in the next-token direction on repeated sequences of random tokens, while Negative Name Mover Heads contribute negatively. This suggests that these heads are involved beyond the IOI task in producing next-token predictions that rely on contextual information. Moreover, ablating the output of the three Name Mover Heads by patching their outputs results in a 26% increase in average loss on the last 99 tokens (from 0.15 to 0.19), showing their importance on tasks outside IOI.
G DISENTANGLING FEATURES IN THE OUTPUT OF S-INHIBITION HEADS
In Section 3.2, we discovered that the S-Inhibition Heads are responsible for the Name Mover Heads' specific attention on the IO token. In this appendix, we explore which properties of the input affect the S-Inhibition Heads' outputs.
We present evidence that they output token signals (information about the value of the token S) and positional signals (related to the position of S1), and that the latter is the most important.
To disentangle the two effects, we design a series of counterfactual datasets where only some signals are present, and some are inverted with respect to the original dataset. We then conducted patching experiments where the outputs of the S-Inhibition Heads are computed from these datasets.
This enables us to quantify in isolation the impact of each signal on the final logit difference.
We constructed six datasets by combining three transformations of the original pIOIdistribution.
•Random name flip: we replace the names in a given sentence with random names, but we keep the same positions for all names. Moreover, each occurrence of a name in the original sentence is replaced by the same random name. When we patch outputs of S-Inhibition Heads from this sentence, only positional signals are present; the token signals are unrelated to the names of the original sequence.
•IO↔S1 flip: we swap the positions of IO and S1. The output of the S-Inhibition Heads will contain correct token signals (the subject of the second clause is the same) but inverted positional signals (because the positions of IO and S1 are swapped).
•IO←S2 replacement: we make IO the subject of the sentence and S the indirect object. In this dataset, both token signals and positional signals are inverted.
We can also compose these transformations. For instance, we can create a dataset with no token
signals and inverted positional signals by applying IO ↔S1 flip on the dataset with random names.
In total, we can create all six combinations of original, inverted, or uncorrelated token signal with
the original and inverted positional signal.
From each of those six datasets, we patched the output of S-Inhibition heads and measured the logit
difference. The results are presented in Figure 14.
These results can be summarized as the sum of two effects. Define the variable Stok to be 1 if the token signal is the original, 0 when uncorrelated, and -1 when inverted; similarly, define Spos to be 1 if the positional signal is the original and -1 if inverted. Figure 14 then suggests that the logit difference can be well approximated by 2.31·Spos + 0.99·Stok, with a mean error of 7% relative to the baseline logit difference.
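A minimal sketch of recovering this two-coefficient fit from the six measurements in Figure 14 by least squares (no intercept); the dictionary keys encode the (token, position) signal settings.

```python
import numpy as np

# Logit differences from Figure 14, keyed by (token signal, position signal),
# where +1 = original, 0 = uncorrelated, -1 = inverted.
measurements = {
    (+1, +1): 3.55, (+1, -1): -0.99,
    ( 0, +1): 2.45, ( 0, -1): -1.96,
    (-1, +1): 1.77, (-1, -1): -3.16,
}

X = np.array([[pos, tok] for (tok, pos) in measurements], dtype=float)  # [6, 2] design matrix
y = np.array(list(measurements.values()), dtype=float)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("logit diff ≈ %.2f * S_pos + %.2f * S_tok" % (coef[0], coef[1]))   # ≈ 2.31 and 0.99
```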
For instance, when both the positional and token signals are inverted, the logit difference is the opposite of the baseline. This means that the S token is predicted more strongly than the IO token, as strongly as IO was before patching. In this situation, due to the contradictory information contained in the output of the S-Inhibition Heads, the Name Movers attend to and copy the S1 token instead of the IO token (see Figure 15, right). In the intermediate cases where only one of the signals is modified, we observe a partial effect compared to the fully inverted case (e.g. Figure 15, left). The effect size depends on the altered signal: positional signals are more important than token signals.
Can we be more specific as to what the token and positional signals are? Unfortunately, we do not
have a complete answer, but see this as one of the most interesting further directions of our work.
We expect that the majority of the positional information is about the relative positional embedding
between S1 and S2 (such pointer arithmetic behavior has already been observed in Olsson et al.
— | Original positional signal | Inverted positional signal
Original S token signal | 3.55 (baseline) | -0.99
Random S token signal | 2.45 | -1.96
S↔IO inverted token signal | 1.77 | -3.16
Figure 14: Logit difference after patching S-Inhibition Heads from signal-specific datasets. The effect on the logit difference can be decomposed as a sum of the effects of the position and token signals.
[Figure 15 bar plots omitted: average Name Mover Heads' attention probability on IO, S1 and S2, before and after patching the S-Inhibition Heads from signal-specific datasets.]
Figure 15: Name Mover Heads' attention probability before and after patching S-Inhibition Heads from signal-specific datasets. Left: patching from the dataset generated by a random flip of names (same position signal, random token signal). Right: patching from the dataset generated by IO←S2 replacement (inverted position signal, inverted token signal). Black bars represent the standard deviation.
(2022)). When patching in S-Inhibition Head outputs from a distribution where the prefixes of the sentences are longer (but the distance between S1 and S2 is constant), the logit difference doesn't change (3.56 before patching vs. 3.57 after). This suggests that the positional signal doesn't depend on the absolute position of the tokens, as long as the relative position of S1 and S2 stays the same.
H LAYER NORM AND THE RESIDUAL STREAM
The attention heads and MLPs in GPT-2 small write into the residual stream. Suppose x12 is the final state of the residual stream after the 12 layers. This is then converted into logits via WU ◦ M ◦ LN(x12), where LN is defined in Appendix E, M is the linear transformation of the layer norm operation and WU is the unembedding matrix.
In order to attribute the extent to which an attention head h writes in a direction WU[T], where T is a token (always IO or S in our case), we can't simply compute ⟨M ◦ LN ◦ h_{i,j}(X), WU[T]⟩, as the scaling factor that's used is $\sqrt{\sum_i (x_{12,i} - \bar{x}_{12})^2}$. Therefore LN in the main text uses this scaling factor:
$$\mathrm{LN}(h) \overset{\text{def}}{=} M \circ \frac{h - \bar{h}}{\sqrt{\sum_i (x_{12,i} - \bar{x}_{12})^2}} \qquad (2)$$
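A minimal sketch of applying Equation (2) to project one head's contribution onto an unembedding direction; the arguments are hypothetical tensors, and M is assumed to act as the layer norm's elementwise scale.

```python
import torch

def scaled_head_projection(head_out: torch.Tensor,   # [d] what the head wrote at END
                           x_final: torch.Tensor,    # [d] final residual stream x12 at END
                           ln_weight: torch.Tensor,  # [d] layer norm scale M
                           direction: torch.Tensor): # [d] e.g. W_U[IO] - W_U[S]
    """Project a head's output onto a direction using the scaling factor of the
    *final* residual stream, as in Equation (2)."""
    scale = (x_final - x_final.mean()).pow(2).sum().sqrt()
    centered = head_out - head_out.mean()
    return torch.dot(ln_weight * centered / scale, direction)
```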
I ROLE OF MLPS IN THE TASK
In the main text, we focused our investigation on attention heads. Since they are the only modules able to move information across token positions – a crucial component of the IOI task – they were our main subject of interest. However, MLPs can still play a significant role in structuring the residual stream at a given position. We explored this possibility by knocking out the MLP layers (Figure 16). We observe that MLP0 has a significant influence on the logit difference after knock-out (−100% relative variation) but the other layers don't seem to play a big role. We hypothesize that MLP0 performs low-level token processing that later layers rely on.
Moreover, we also investigated the writing of the MLPs along the WU[IO]−WU[S] direction. As shown in Figure 16 (bottom), they write negligibly in this direction compared to attention heads (Figure 3).
v    Class    K ∪ {v}    F(C\(K ∪ {v}))    F(C\K)
(9, 9) Name Mover [(9, 9)] 2.78 3.14
(10, 0) Name Mover [(9, 9), (10, 0)] 2.43 2.78
(9, 6) Name Mover [(9, 9), (10, 0), (9, 6)] 2.77 2.43
(10, 7) Negative Name Mover All Negative Name Mover Heads 5.11 3.84
(11, 10) Negative Name Mover All Negative Name Mover Heads 5.11 4.06
(7, 3) S-Inhibition All S-Inhibition Heads 0.33 1.15
(7, 9) S-Inhibition All S-Inhibition Heads 0.33 1.12
(8, 6) S-Inhibition All S-Inhibition Heads 0.33 1.10
(8, 10) S-Inhibition All S-Inhibition Heads 0.33 0.55
(5, 5) Induction Induction Heads and Negative Heads 1.06 3.95
(5, 8) Induction All Induction Heads 1.06 2.58
(5, 9) Induction All Induction Heads 4.40 5.11
(6, 9) Induction Induction Heads and Negative Heads 4.76 5.11
(0, 1) Duplicate Token All Duplicate Token Heads 1.14 2.52
(0, 10) Duplicate Token All Duplicate Token Heads 1.14 2.29
(3, 0) Duplicate Token All Duplicate Token Heads 1.14 1.65
(2, 2) Previous Token All Previous Token Heads 2.03 2.80
(2, 9) Previous Token All Previous Token Heads 2.03 2.42
(4, 11) Previous Token All Previous Token Heads 2.03 2.27
(10, 10) Backup Name Mover All NMs and previous Backup NMs 2.40 2.63
(10, 2) Backup Name Mover All NMs and previous Backup NMs 0.89 1.09
(11, 2) Backup Name Mover All NMs and previous Backup NMs 0.72 0.89
(10, 6) Backup Name Mover All NMs and previous Backup NMs 2.63 2.77
(10, 1) Backup Name Mover All NMs and previous Backup NMs 1.34 1.47
(9, 7) Backup Name Mover All NMs and previous Backup NMs 0.85 1.02
(11, 9) Backup Name Mover All NMs and previous Backup NMs 1.02 1.13
(11, 3) Backup Name Mover [(9, 9), (10, 0), (9, 6), (10, 10), (11, 3)] 2.53 2.59
Figure 17: K sets for minimality for each v.
[Figure 16 plots omitted: relative variation in logit difference after knocking out each MLP layer (top), and projection of MLP outputs on the IO−S unembedding direction (bottom).]
Figure 16: Top: relative variation in logit difference from knocking out MLP layers. Only MLP0 causes a significant decrease in logit difference after knock-out. Bottom: how much the MLPs write along the WU[IO]−WU[S] direction.
J MINIMALITY SETS
The sets that were found for the minimality tests are listed in Figure 17.
K found by greedy optimization
(9, 9), (9, 6), (5, 8), (5, 5), (2, 2), (2, 9)
(9, 9), (11, 10), (10, 7), (8, 6), (5, 8), (4, 11)
(10, 7), (5, 5), (2, 2), (4, 11)
(9, 9), (11, 10), (10, 7), (11, 2), (3, 0), (5, 8), (2, 2)
Figure 18: 4 sets K found by the greedy optimization procedure on our circuit.
K GREEDY ALGORITHM
Algorithm 2 describes the procedure used to sample sets for checking the completeness criterion using greedy optimization. In practice, because the naïve and the full circuit are not of the same size, we chose respectively k = 5 and k = 10 to ensure a similar amount of stochasticity in the process. We ran the procedure 10 times and kept the 5 sets with the largest incompleteness scores (including the intermediate K).
Algorithm 2 The greedy sampling procedure for sets used to validate the completeness criterion.
1: K ← ∅
2: for i = 1 to N do
3:   Sample a random subset V ⊆ C of k nodes uniformly.
4:   vMAX ← arg max_{v∈V} |F(C\(K ∪ {v})) − F(C\K)|
5:   K ← K ∪ {vMAX}
6: end for
7: return K
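A minimal Python sketch of Algorithm 2, assuming a hypothetical callable `F_circuit(K)` that returns F(C\K), i.e. the circuit's average logit difference with the nodes in K mean-ablated.

```python
import random

def greedy_knockout_set(F_circuit, circuit_nodes: set, n_steps: int = 10,
                        k: int = 10, seed: int = 0) -> set:
    """Greedily grow a knockout set K: at each step, sample k candidate nodes
    and add the one whose removal changes the circuit's performance the most."""
    rng = random.Random(seed)
    K = set()
    for _ in range(n_steps):
        pool = sorted(circuit_nodes - K)
        V = rng.sample(pool, min(k, len(pool)))
        v_max = max(V, key=lambda v: abs(F_circuit(K | {v}) - F_circuit(K)))
        K = K | {v_max}
    return K
```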
As visible in Figure 18, the sets found by the greedy search contain a combination of nodes from different classes. Nonetheless, the overlap between different K suggests that we are missing components of M that can take the place of Induction Heads or S-Inhibition Heads when some Name Mover Heads are knocked out.
L TECHNIQUES OVERVIEW
This work involved a variety of techniques that were required to explain model behavior.
•Knockouts:
We used knockouts in two different ways: knocking out single components of the model, and knocking out everything in the model except a particular circuit. The former was somewhat useful; the latter we found powerful.
–Knockout of single components: as an attribution method, knocking out single components was not always as powerful as techniques such as projections, since the compensatory (or backup) nature of the Backup Name Mover Heads in this task allowed components to be knocked out with their true effect size masked.
–Knockouts of all components except a circuit: on the other hand, knocking out all components except a circuit enabled us to isolate behaviors in this task, where behavior was sparse, and to check the components of our circuit while ignoring the vast majority of the network's components, making the work manageable.
What was very important for the success of knockout and patching experiments was the choice of reference distribution for the knockout. The analysis in Appendix G shows how the specific choice of dataset is useful for understanding model components. For a more general knockout, the OpenWebText dataset, GPT-2's training data, can be used. However, we found that this led to noisier results (though our circuit components were still shown to be important when we used this ablation).
•Attention pattern analysis:
Using attention patterns to explain behavior is always worrying due to the possibility that information has accumulated on a token primarily from previous tokens, or that the position receiving large attention isn't actually writing an important value into the residual stream. In our work, however, analyzing attention patterns was generally a necessary first step before further experiments could be run, and in this small model neither of these worrying cases generally arose.
•Patching:
Patching was an important method we used to verify causal explanations that were generally formed from correlational evidence. In this way our use case is similar to Finlayson et al. (2021). We were surprised, however, that patching generally gave a clear signal on the changes in behavior. This may be because we generally patched from inputs like the ABC distribution (which was also successful for knockouts). Therefore, keeping the context of the sentence templates may be generally useful. This could be either because the other words in the templates allow the model to realise that it should be doing IOI, or because introducing inputs from other distributions introduces noise that the model picks up on and uses, when this is not intended.
|
b6bc8f8f-9b1b-4c4f-98d6-86dca9a446cc
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Quantum immortality: Is decline of measure compensated by merging timelines?
I wrote an [article](https://docs.google.com/document/d/1Y8uvR4rxR8LMqtIjO0Eho5JzCMNR9KU39Yo00en09gM/edit?usp=sharing) about the quantum immortality which, I know, is a controversial topic, and I would like to get comments on it. The interesting twist, suggested in the article, is the idea of measure increase which could compensate declining measure in quantum immortality. (There are other topics in the article, like the history of QM, its relation to the multiverse immortality, the utility of cryonics, impossibility of euthanasia and the relation of QI to different decision theories.)
The standard argument against quantum immortality in MWI runs as follows. One should calculate the expected utility by multiplying the expected gain by the measure of existence (roughly equal to one's share of the world's timelines). In that case, if someone expects to win 10,000 USD in the quantum suicide lottery with a 0.01 chance of survival, her actual expected utility is 100 USD (ignoring the negutility of death). So, the rule of thumb is that the measure declines very quickly after a series of quantum suicide experiments, and thus this improbable timeline should be ignored. The following equation could be used: U(total) = mU, where m is the measure and U is the expected win in the lottery.
However, if everything possible exists in the multiverse, there are many pseudo-copies of me, which differ from me in a few bits; for example, they have a different phone number or a different random childhood memory. The difference is small but just enough not to regard them as my copies.
Imagine that this different childhood memory is 1 kb (compressed) in size. Now, one morning both I and all my pseudo-copies forget this memory, and we all become exactly the same copies. In some sense, our timelines merged. This could be interpreted as a jump in my measure, which will be as high as 2^1024 ≈ 10^308. If I use the equation U(total) = mU, I get an extreme jump in my utility. For example, if I have 100 USD and my measure increases a trillion trillion times, I supposedly get the same utility as if I had become a mega-multi-trillionaire.
As a result of this absurd conclusion, I could spend the evening hitting my head with a stone, losing more and more memories and gaining higher and higher measure, which is obviously absurd behaviour for a human being - but it could be a failure mode for an AI which uses this equation to calculate expected utility.
In the case of the quantum suicide experiment, I can add to the bomb, which kills me with probability 0.5, a laser which (if I survive) kills just one neuron in my brain; let's assume this is equivalent to forgetting 1 bit of information. In that case, quantum suicide halves my measure, but forgetting one bit doubles it. Obviously, if I play the game for too long, the laser will damage my brain, but brain cells die so often in an aging brain (millions a day) that the damage will be completely unobservable.
BTW, Pereira suggested a similar idea as an anthropic argument against the existence of any superintelligence: <https://arxiv.org/abs/1705.03078>
|
f77fde96-aa6f-4dfa-9ae4-cfdddf2a6210
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Quick Guide to Confronting Doom
Within the last two weeks, two sets of things happened: Eliezer Yudkowsky shared a post expressing extreme pessimism about humanity's likelihood of surviving AGI, and a number of AI research labs published new, highly impressive results. The combination of these two has resulted in a lot of people feeling heightened concern about the AI situation and how we ought to be reacting to it.
There have been calls to pull "fire alarms", proposals for how to live with this psychologically, people deciding to enter the AI Alignment field, and a significant increase in the number of AI posts submitted to LessWrong.
The following is my own quick advice:
1. Form your own models and anticipations. It's easy to hear the proclamations of [highly respected] others and/or everyone else reacting and then reflexively update to "aaahhhh". I'm not saying "aaahhhh" isn't the right reaction, but I think for any given person it should come after a deliberate step of processing arguments and evidence to figure out your own anticipations. I feel that A concrete bet offer to those with short AI timelines is a great example of this. It lists lots of specific things the authors do (or rather don't) expect to see. What 2026 looks like is another example I'd point to of someone figuring out their own anticipations.[1]
2. Figure out your own psychology (while focusing on what's true). Eliezer, Turntrout, and landfish have each written about their preferred way of reacting to the belief that P(Doom) is very high. My guess is that people who are concluding P(Doom) is high will each need to figure out how to live with it for themselves. My caution is just that whatever strategy you figure out should keep you in touch with reality (or your best estimate of it), even if it's uncomfortable.
3. Be gentle with yourself. You might find yourself confronting some very upsetting realities right now. That's okay! The realities are very upsetting (imo). This might take some time to process. Let yourself do
|
8f94aa10-5a8c-46a1-b853-2e6fcb1bcbe9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rationality Quotes 15
"Who thinks they're not open-minded? Our hypothetical prim miss from the suburbs thinks she's open-minded. Hasn't she been taught to be? Ask anyone, and they'll say the same thing: they're pretty open-minded, though they draw the line at things that are really wrong."
-- Paul Graham
"In the same way that we need statesmen to spare us the abjection of exercising power, we need scholars to spare us the abjection of learning."
-- Jean Baudrillard
"Because giftedness is not to be talked about, no one tells high-IQ children explicitly, forcefully and repeatedly that their intellectual talent is a gift. That they are not superior human beings, but lucky ones. That the gift brings with it obligations to be worthy of it."
-- Charles Murray
"The popular media can only handle ideas expressible in proto-language, not ideas requiring nested phrase-structure syntax for their exposition."
-- Ben Goertzel
"The best part about math is that, if you have the right answer and someone disagrees with you, it really is because they're stupid."
-- Quotes from Honors Linear Algebra
"Long-Term Capital Management had faith in diversification. Its history serves as ample notification that eggs in different baskets can and do all break at the same time."
-- Craig L. Howe
"Accountability is about one person taking responsibility. If two people are accountable for the same decision, no one is really accountable."
-- Glyn Holton
|
e086bd4b-c505-4b96-9b1c-3863c8d642da
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Responsible Scaling Policies Are Risk Management Done Wrong
Summary
=======
TLDR
----
Responsible Scaling Policies (RSPs) have been [recently](https://evals.alignment.org/blog/2023-09-26-rsp) [proposed](https://www.anthropic.com/index/anthropics-responsible-scaling-policy) as a way to keep scaling frontier large language models safely.
While being a nice attempt at committing to specific practices, [RSPs](https://evals.alignment.org/blog/2023-09-26-rsp) are:
1. missing **core components** of **basic risk management procedures** ([Section 2](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_2__What_Standard_Risk_Management_Looks_Like_) & [3](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_3__RSPs_vs_Standard_Risk_Management))
2. selling a **rosy** and **misleading** picture of the risk landscape ([Section 4](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_4__Why_RSPs_Are_Misleading_and_Overselling))
3. built in a way that allows **overselling while underdelivering** ([Section 4](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_4__Why_RSPs_Are_Misleading_and_Overselling))
Given that, I expect RSPs to be negative by default ([Section 3](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_3__RSPs_vs_Standard_Risk_Management), [4](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_4__Why_RSPs_Are_Misleading_and_Overselling) and [5](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_5__Are_RSPs_Hopeless_)). Instead, I propose to build upon risk management as the core underlying framework to assess AI risks ([Section 1](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_1__General_Considerations_on_AI_Risk_Management) and [2](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_2__What_Standard_Risk_Management_Looks_Like_)). I **suggest changes** to RSPs that would make them more likely to be positive and allow them to demonstrate what they claim to do ([Section 5](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_5__Are_RSPs_Hopeless_)).
Section by Section Summary:
---------------------------
### General Considerations on AI Risk Management
This section provides background on risk management and a motivation for its relevance in AI.
* Proving risks are below acceptable levels is the goal of risk management.
* To do that, acceptable levels of risks (not only of their sources!) have to be defined.
* Inability to show that risks are below acceptable levels is a failure. Hence, the less we understand a system, the harder it is to claim safety.
* Low-stakes failures are symptoms that something is wrong. Their existence makes high-stakes failures more likely.
[Read more.](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_1__General_Considerations_on_AI_Risk_Management)
### What Standard Risk Management Looks Like
This section describes the main steps of most risk management systems, explains how it applies to AI, and provides examples from other industries of what it looks like.
1. **Define** Risk Levels: Set acceptable likelihood and severity.
2. **Identify** Risks: List all potential threats.
3. **Assess** Risks: Evaluate their likelihood and impact.
4. **Treat** Risks: Adjust to bring risks within acceptable levels.
5. **Monitor**: Continuously track risk levels.
6. **Report**: Update stakeholders on risks they incur and measures taken.
[Read more.](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_2__What_Standard_Risk_Management_Looks_Like_)
### RSPs vs Standard Risk Management
This section provides [a table](https://docs.google.com/document/d/1p3ZUChag8HDNehvjQWNxRdhCEzkARUgYXjeqJFylAvs/edit#heading=h.27sa5e525t1) comparing RSPs and generic risk management standard ISO/IEC 31000, explaining the weaknesses of RSPs.
It then provides a list of 3 of the biggest failures of RSPs compared with risk management.
**Prioritized failures of RSPs** against **risk management**:
1. Using underspecified definitions of risk thresholds and not quantifying the risk.
2. Claiming “responsible scaling” without including a process to make the assessment comprehensive.
3. Including a white knight clause that kills commitments.
[Read more.](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_3__RSPs_vs_Standard_Risk_Management)
### Why RSPs Are Misleading and Overselling
**Misleading points**:
* [Anthropic RSP](https://www.anthropic.com/index/anthropics-responsible-scaling-policy) labels misalignment risks as “speculative” with minimal justification.
* The framing implies that not scaling for a long time is not an option.
* RSPs present an extremely misleading view of what we know of the risk landscape.
**Overselling and Underdelivering**
* RSPs allow for weak commitments within a large framework that could *in theory* be strong.
* No one has given evidence that substantial improvements to a framework have ever happened in the timelines we’re talking about (a few years), which is the whole pitch of RSPs.
* "Responsible scaling" is misleading; "catastrophic scaling" might be more apt if we can’t rule out 1% extinction risk (it is the case for ASL-3).
[Read more.](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_4__Why_RSPs_Are_Misleading_and_Overselling)
### Are RSPs Hopeless?
This section explains why using RSPs as a framework is inadequate, even compared to just starting from already-existing AI risk management frameworks and practices such as:
* [NIST-inspired foundation model risk management framework](https://cltc.berkeley.edu/seeking-input-and-feedback-ai-risk-management-standards-profile-for-increasingly-multi-purpose-or-general-purpose-ai/)
* [ISO/IEC 23894](https://www.iso.org/standard/77304.html)
* Practices explained in [Koessler et al. (2023)](https://arxiv.org/abs/2307.08823)
A substantial amount of work that RSPs has done will be helpful as a part of detailing those frameworks, but core foundational principles of RSPs are wrong and so should be abandoned.
**How to move forward?**
Pragmatically, I suggest a set of changes that would make RSPs more likely to be helpful for safety. To mitigate the nefarious policy and communication effects:
* **Rename** “Responsible Scaling Policies” as “Voluntary Safety Commitments”
* **Be clear on what RSPs are and what RSPs aren’t**: I propose that any RSP publication starts with “RSPs are voluntary commitments taken unilaterally in a racing environment. As such, we think they help to improve safety. We can’t show they are sufficient to manage catastrophic risks and they should **not be implemented as public policies**.”
* **Push for solid risk management public policy:** I propose that any RSP document points to another document and says “here are the policies we think would be sufficient to manage risks. Regulation should implement those.”
To see whether already defined RSPs are consistent with reasonable levels of risks:
* Assemble a representative group of risk management experts, AI risk experts and forecasters.
* For a system classified as ASL-3, estimate the likelihood of the following questions:
+ What’s the annual likelihood that an ASL-3 system be stolen by {China; Russia; North Korea; Saudi Arabia; Iran}?
+ Conditional on that, what are the chances that it leaks? That it is used to build bioweapons? That it is used for cyber offence with large-scale effects?
+ What are the annual chances of a catastrophic accident before ASL-4 evaluations trigger?
+ What are the annual chances of misuse catastrophic risks induced by an ASL-3 system?
* Share the methodology and the results publicly.
[Read more.](https://www.lesswrong.com/posts/9nEBWxjAHSu3ncr6v/responsible-scaling-policies-are-risk-management-done-wrong#Section_5__Are_RSPs_Hopeless_)
**Meta**
--------
***Epistemic status***: I've been looking into the safety standards of various dangerous industries for about 4-6 months now, with a focus on nuclear safety. I've been doing AI standardization work in standardization bodies (CEN-CENELEC & ISO/IEC) for about 10 months, alongside risk management experts from other domains (e.g. medical devices, cars). In that context, I've read the existing AI ISO/IEC SC42 and JTC21 standards and started trying to apply them to LLMs and to refine them. On RSPs, I've spent a few dozen hours reading the docs and discussing them with people involved and around them.
***Tone***: I hesitated over how charitable I wanted this piece to be. On the one hand, I think that RSPs are a pretty toxic meme (see Section 4) that got rushed towards global promotion without much epistemic humility over how it was framed, and, as far as I've seen, without anyone caring much about existing risk management approaches. In that sense, I think it should be strongly pushed back against under its current framing.
On the other hand, it's usually nice to try to not use negative connotations and calm discussion to move forward epistemically and constructively.
I aimed for something in between, where I did emphasize with strong negative connotations what I think are the worst parts, while remaining constructive and focusing on the object level in many places.
This mixture may have caused me to land in an uncanny valley, and I'm curious to receive feedback on that.
Section 1: General Considerations on AI Risk Management
=======================================================
Risk management is about demonstrating that **risks are below acceptable levels**. **Demonstrating the absence of risks** is much more difficult than showing that some risks are dealt with. More specifically, the **less you understand a system**, the **harder** it is to rule out risks.
Let’s take an example: why can we prove more easily that the chances that a nuclear power plant causes a large-scale catastrophe are [<1 / 100 000](https://world-nuclear.org/information-library/safety-and-security/safety-of-plants/safety-of-nuclear-power-reactors.aspx) while we can’t do so with GPT-5? In large part because we now understand nuclear power plants and many of their risks. We know how they work, and the way they can fail. They’ve turned a very unstable reaction (nuclear fission) into something manageable (with nuclear reactors). So the uncertainty we have over a nuclear power plant is much smaller than the one we have on GPT-5.
One corollary is that in **risk management**, [**uncertainty is an enemy**](https://en.wikipedia.org/wiki/Risk_management). Saying “we don’t know” is a failure. Ruling out risks confidently requires a deep understanding of the system and disproving significant worries with very high confidence. To be clear: **it is hard**. In particular when the operational domain of your system is “the world”. That’s why safety is demanding. But is that a good reason to lower our safety standards when the lives of billions of people are at stake? Obviously not.
One could legitimately say: Wait, but there’s no risk in sight, the burden of proof is on those that claim that it’s dangerous. Where’s the evidence?
Well, there’s plenty:
* Bing threatened users when deployed after having been [beta tested for months](https://futurism.com/the-byte/microsoft-bing-test-india).
* Providers are unable to avoid jailbreak or ensure robustness [neither in text](https://arxiv.org/pdf/2307.15043.pdf) [nor in image](https://arxiv.org/abs/2306.13213).
* Models show [worrying scaling properties](https://aclanthology.org/2023.findings-acl.847/).
One could legitimately say: No, but it’s not catastrophic, it’s not a big deal. Against this stance, I’ll quote the famous physicist R. Feynman reflecting on the Challenger disaster in rocket safety, a field with much higher standards than AI safety:
* “Erosion and blow-by are not what the design expected. They are **warnings that something is wrong**. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way. The fact that this danger **did not lead to a catastrophe before** is **no guarantee that it will not the next time, unless it is completely understood**.”
One could finally hope that we understand the past failures of our systems. Unfortunately, we don’t. Not only do we not understand their failures; we don’t understand how and why they **work in the first place**.
So how are we supposed to deal with risk?
Risk management proposes a few-step method that I’ll describe below. Most industries implement a process along those lines, with some minor variations and a varying degree of rigor and depth according to the level of regulation and the type of risks. I’ve put a few tables on that in the [Annex](https://docs.google.com/document/d/1p3ZUChag8HDNehvjQWNxRdhCEzkARUgYXjeqJFylAvs/edit#heading=h.pkk9et1tbulf).
Section 2: What Standard Risk Management Looks Like
===================================================
Here’s a description of the core steps of the risk management process. Names vary between frameworks but the gist of it is contained here and usually shared across frameworks.
1. **Define risk appetite and risk tolerance**: Define the amount of risks your project is willing to incur, both in terms of likelihood or severity. Likelihood can be a qualitative scale, e.g. referring to ranges spanning orders of magnitude.
2. **Risk identification**: Write down all the threats and risks that could be incurred by your project, e.g. training and deploying a frontier AI system.
3. **Risk assessment**: Evaluate each risk by determining the likelihood of it happening and its severity. Check those estimates against your risk appetite and risk tolerance.
4. **Risk treatment**: Implement changes to reduce the impact of each risk until those risks meet your risk appetite and risk tolerance.
5. **Monitoring**: During the execution of the project, monitor the level of risk, and check that risks are indeed all covered.
6. **Reporting**: Communicate the plan and its effectiveness to stakeholders, especially those who are affected by the risks.
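As a toy illustration of steps 1–4 (my own sketch, with made-up risks and numbers, not taken from any standard), a risk register can be reduced to comparing estimated likelihoods against a pre-defined risk appetite for each severity class:

```python
# Minimal toy risk register: illustrative only, all figures are invented.
RISK_APPETITE = {"catastrophic": 1e-4, "major": 1e-2}  # max acceptable annual likelihood

risks = [
    # (risk description, severity class, estimated annual likelihood)
    ("model weights stolen and misused", "catastrophic", 3e-3),
    ("model assists bioweapon design", "catastrophic", 5e-5),
    ("large-scale misinformation incident", "major", 2e-2),
]

def needs_treatment(severity, likelihood):
    """Steps 3-4: a risk must be treated until it falls under the appetite for its severity."""
    return likelihood > RISK_APPETITE[severity]

for description, severity, likelihood in risks:
    status = "TREAT FURTHER" if needs_treatment(severity, likelihood) else "acceptable"
    print(f"{description:40s} {severity:12s} {likelihood:.0e} -> {status}")
```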
What’s the point of those pretty generic steps and why would it help AI safety?
(1) The **definition of risk thresholds** is key 1) to make **commitments falsifiable** & avoid goalpost moving, and 2) to keep the risk-generating organization accountable when other stakeholders are incurring risks due to its activity. If an activity is putting people’s lives at risk, it is important that they know how much and for what benefits and goals.
1. Here’s what it looks like in nuclear for instance, as defined by the [Nuclear Regulatory Commission](https://www.nrc.gov/docs/ML0717/ML071770230.pdf):

2. The UC Berkeley Center for Long-Term Cybersecurity NIST-inspired risk management profile for General-purpose AI systems co-written with D. Hendrycks provides some thoughts on [how to define those in Map 1.](https://docs.google.com/document/d/1M4kju9VOUQpphv-SOA9mUE1P8Wa0mWJBcQO15exCD98/edit#heading=h.2k6kkwym97fb)
(2) **Risk identification** through systematic methods is key to getting as close as possible to full coverage of risks. As we said earlier, in risk management uncertainty is a failure, and a core way to substantially reduce it is to try to be as comprehensive as possible.
1. For specific relevant methods, you can find some in [Section 4 of Koessler et al. 2023](https://browse.arxiv.org/pdf/2307.08823.pdf).
(3) **Risk assessment** through qualitative and quantitative means allows us to actually estimate the uncertainty we have. It is key to then prioritizing safety measures and deciding whether it’s reasonable to keep the project in its current form or modify it.
1. An example of a variable which is easy to modify and changes the risk profile substantially is the set of actuators an AI system has access to. Whether a system has a coding terminal, an internet access or the possibility to instantiate other AI systems are variables that substantially increase its set of actions and correspondingly, its risk.
2. For specific relevant methods, you can find some in [Section 5 of Koessler et al. 2023](https://browse.arxiv.org/pdf/2307.08823.pdf). Methods involving experts’ forecasts like probabilistic risk assessment or Delphi techniques already exist and could be applied to AI safety. And they can be applied even when:
1. Risk is low (e.g. the Nuclear Regulatory Commission requires nuclear safety [estimates of probabilities below 1/10 000](https://www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/safety-of-nuclear-power-reactors.aspx)).
| |
| --- |
| The US Nuclear Regulatory Commission (NRC) specifies that reactor designs must meet a theoretical 1 in 10,000 year core damage frequency, but modern designs exceed this. US utility requirements are 1 in 100,000 years, the best currently operating plants are about 1 in one million and those likely to be built in the next decade are almost 1 in 10 million.[*World Nuclear Association, 2022*](https://www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/safety-of-nuclear-power-reactors.aspx) |
| |
b. Events are very fat-tailed and misunderstood, as was the case in nuclear safety in the 1970s. It has been done, and it is through the iterative practice of doing it that an industry can become more responsible and cautious. Reading a book review of *Safe Enough?*, a book on the history of quantitative risk assessment methods used in nuclear safety, there’s a sense of déjà-vu:
| |
| --- |
| If nuclear plants were to malfunction at some measurable rate, the industry could use that data to anticipate its next failure. But if the plants don't fail, then **it becomes very difficult to have a conversation about what the true failure rate** is likely to be. Are the plants likely to fail once a decade? Once a century? Once a millennium? In the absence of shared data, scientists, industry, and the public were all free to believe what they wanted. [*Astral Codex Ten, 2023*](https://www.astralcodexten.com/p/your-book-review-safe-enough)*, describing the genesis of probabilistic risk assessment in nuclear safety.* |
(4) **Risk treatment** is in reaction to the risk assessment and must be pursued until you reach the risk thresholds defined. The space of interventions here is very large, larger than is usually assumed. Better understanding one’s system, narrowing down its domain of operation by making it less general, increasing the amount of oversight, improving safety culture: all those are part of a broad set of interventions that can be used to meet thresholds. There can be a loop between the treatment and the assessment if substantial changes to the system are done.
(5) **Monitoring** is the part that ensures that the risk assessment remains valid and nothing major has been left out. This is what behavioral model evaluations are most useful for, i.e. ensuring that you track the risks you’ve identified. Good evaluations would map to a pre-defined risk appetite (e.g. 1% chance of >1% deaths) and would cover all risks brought up through systematic risk identification.
(6) **Reporting** is the part that ensures that all relevant stakeholders are provided with the right information. For instance, those incurring risks from the activities should be provided with information on the amount of risk they’re exposed to.
Now that we’ve done a rapid overview of standard risk management and why it is relevant to AI safety, let’s talk about how RSPs compare against that.
Section 3: RSPs vs Standard Risk Management
===========================================
Some underlying principles of RSPs should definitely be pursued. There are just better ways to pursue these principles, ways that **already exist** in **risk management** and happen to be what most other dangerous industries and fields do. To give two examples of such good underlying principles:
* stating safety requirements that companies have to reach, without which they can’t keep going.
* setting up rigorous evaluations and measuring capabilities to better understand if a system is good; this should definitely be part of a risk management framework, but probably as a risk monitoring technique, rather than as a substitute for risk assessment.
Below, I argue why RSPs are a bad implementation of some good risk management principles and why that makes the RSP framework inadequate to manage risks.
Direct Comparison
-----------------
Let’s dive into a more specific comparison between the two approaches. The International Standards Organization (ISO) has developed two risk management standards that are relevant to AI safety, although not focused on it:
* ISO 31000 that provides generic risk management guidelines.
* ISO/IEC 23894, an adaptation of 31000 which is a bit more AI-specific
To be clear, those standards are not sufficient. They’re considered weak by most EU standardization actors or extremely weak by risk management experts from other industries like the medical device industry. There will be a very significant amount of work needed to refine such frameworks for general-purpose AI systems (see a [first iteration by T. Barrett here](https://cltc.berkeley.edu/seeking-input-and-feedback-ai-risk-management-standards-profile-for-increasingly-multi-purpose-or-general-purpose-ai/), and a table of [how it maps to ISO/IEC 23894 here](https://docs.google.com/document/d/1M4kju9VOUQpphv-SOA9mUE1P8Wa0mWJBcQO15exCD98/edit#heading=h.camj9ith1s95)). But those provide basic steps and principles that, as we explained above, are central to adequate risk management.
In the table below, I start from the short version of ARC Evals’ RSP principles and try to match the ISO/IEC 31000 version that most corresponds. I then explain what’s missing from the RSP version. Note that:
* I only write the short RSP principle but account for the [long version](https://evals.alignment.org/rsp-key-components/).
* There are many steps in ISO/IEC 31000 that don’t appear here.
* I ***italicize*** the ISO/IEC version that encompasses the RSP version.
The table version:
| | | |
| --- | --- | --- |
| RSP Version (Short) | ISO/IEC 31000 Version | How ISO improves over RSPs |
| **Limits**: which specific observations about dangerous capabilities would indicate that it is (or strongly might be) unsafe to continue scaling? | **Defining risk criteria**: The organization should specify the amount and type of risk that it may or may not take, relative to objectives. It should also *define criteria to evaluate the significance* of risk and to support decision-making processes. Risk criteria should be aligned with the risk management framework and customized to the specific purpose and scope of the activity under consideration.[...] The criteria should be defined taking into consideration the organization’s obligations and the views of stakeholders.[...] To set risk criteria, the following should be considered: — the nature and type of uncertainties that can affect outcomes and objectives (both tangible and intangible); — how consequences (both positive and negative) and likelihood will be defined and measured; — time-related factors; — consistency in the use of measurements; — how the level of risk is to be determined; — how combinations and sequences of multiple risks will be taken into account; — the organization’s capacity. | RSPs don’t argue why systems passing evals are safe. This is downstream of the absence of **risk thresholds** with a likelihood scale. For example, Anthropic’s RSP also dismisses accidental risks as “speculative” and “unlikely” without much depth, without much understanding of their system, and without expressing what “unlikely” means. On the other hand, the ISO standard asks the organization to define risk thresholds, and emphasizes the need to match risk management with organizational objectives (i.e. build human-level AI). |
| **Protections**: what aspects of current protective measures are necessary to contain catastrophic risks? **Evaluation**: what are the procedures for promptly catching early warning signs of dangerous capability limits? | **Risk analysis**: The purpose of risk analysis is to comprehend the nature of risk and its characteristics including, where appropriate, the level of risk. Risk analysis involves a detailed consideration of uncertainties, risk sources, consequences, likelihood, events, scenarios, *controls and their effectiveness*. **Risk evaluation**: The purpose of risk evaluation is to support decisions. Risk evaluation involves comparing the results of the risk analysis with the established risk criteria to determine where additional action is required. This can lead to a decision to: — do nothing further; — consider risk treatment options; — undertake further analysis to *better understand the risk*; — maintain existing controls; — reconsider objectives. | ISO proposes a much more comprehensive procedure than RSPs, that doesn’t really analyze risk levels or have a systematic risk identification procedure. The direct consequence is that RSPs are likely to lead to high levels of risks, without noticing. For instance, RSPs don’t seem to cover capabilities interaction as a major source of risk. |
| **Response**: if dangerous capabilities go past the limits and it’s not possible to improve protections quickly, is the AI developer prepared to pause further capability improvements until protective measures are sufficiently improved, and treat any dangerous models with sufficient caution? | **Risk treatment plans**: Risk treatment options are not necessarily mutually exclusive or appropriate in all circumstances. Options for treating risk may involve one or more of the following: — *avoiding the risk by deciding not to start or continue with the activity that gives rise to the risk*; — taking or increasing the risk in order to pursue an opportunity; — *removing the risk source*; — changing the likelihood; — changing the consequences; — sharing the risk (e.g. through contracts, buying insurance); — retaining the risk by informed decision. Treatment plans should be integrated into the management plans and processes of the organization, in consultation with appropriate stakeholders. The information provided in the treatment plan should include: — the rationale for selection of the treatment options, including the expected benefits to be gained; — those who are accountable and responsible for approving and implementing the plan; — the proposed actions; — the resources required, including contingencies; — the performance measures; — the constraints; — the required reporting and monitoring; — when actions are expected to be undertaken and completed. | ISO, thanks to the definition of a risk threshold, ensures that risk mitigation measures bring risks below acceptable levels. The lack of risk thresholds in RSPs makes the risk mitigation measures ungrounded. **Example**: ASL-3 risk mitigation measures as defined by Anthropic (i.e. close to catastrophically dangerous) imply significant chances of the model being stolen by Russia or China (I don’t know any RSP person who denies that). What are the risks downstream of that? The hope is that those countries keep the weights secure and don’t cause too much damage with them. |
| **Accountability**: how does the AI developer ensure that the RSP’s commitments are executed as intended; that key stakeholders can verify that this is happening (or notice if it isn’t); that there are opportunities for third-party critique; and that changes to the RSP itself don’t happen in a rushed or opaque way? | **Monitoring and review**: The purpose of monitoring and review is to assure and improve the quality and effectiveness of process design, implementation and outcomes. Ongoing monitoring and periodic review of the risk management process and its outcomes should be a planned part of the risk management process, with responsibilities clearly defined. [...] The results of monitoring and review should be incorporated throughout the organization’s performance management, measurement and reporting activities. **Recording and reporting**: The risk management process and its outcomes should be documented and reported through appropriate mechanisms. Recording and reporting aims to: — communicate risk management activities and outcomes across the organization; — provide information for decision-making; — improve risk management activities; — assist interaction with stakeholders, including those with responsibility and accountability for risk management activities. | Those parts have similar components. But ISO encourages reporting the results of risk management to those that are affected by the risks, which seems like a bare minimum for catastrophic risks. Anthropic’s RSP proposes to do so after deployment, which is a good accountability start, but still happens once a lot of the catastrophic risk has been taken. |
Prioritized Risk Management Shortcomings of RSPs
------------------------------------------------
Here’s a list of the biggest direct risk management failures of RSPs:
1. Using underspecified definitions of risk thresholds and not quantifying the risk
2. Claiming “responsible scaling” without including a process to make the assessment comprehensive
3. Including a white knight clause that kills commitments
1. **Using underspecified definitions of risk thresholds and not quantifying the risk**. RSPs don't define risk thresholds in terms of **likelihood**. Instead, they focus straight away on symptoms of risks (certain capabilities that an evaluation tests are one way a risk could be instantiated) rather than the risk itself (the model helping in any possible way to build bioweapons). This makes it hard to verify whether safety requirements have been met and to argue whether the thresholds are reasonable. Why is this an issue?
* It leaves wiggle room making it very hard to keep the organization accountable. If a lab said something was “unlikely” and it still happened, did it do bad risk management or did it get ***very*** unlucky? Well, we don’t know.
* **Example** (from Anthropic RSP): “A model in the ASL-3 category does not itself present a threat of containment breach due to autonomous self-replication, because it is both unlikely to be able to persist in the real world, and unlikely to overcome even simple security measures intended to prevent it from stealing its own weights.” It makes a huge difference for catastrophic risks whether “unlikely” means 1/10, 1/100 or 1/1000. With our degree of understanding of systems, I don’t think Anthropic staff would be able to demonstrate it’s lower than 1/1000. And 1/100 or 1/10 are alarmingly high.
* It doesn’t explain why the monitoring techniques, i.e. the **evaluations**, are the right ones to avoid risks. RSPs take a good first step, which is to identify some things that could be risky.
+ **Example** (from ARC RSP [presentation](https://evals.alignment.org/rsp-key-components/)): “*Bioweapons development: the ability to walk step-by-step through developing a bioweapon, such that the majority of people with any life sciences degree (using the AI) could be comparably effective at bioweapon development to what people with specialized PhD’s (without AIs) are currently capable of.”*
By describing neither quantitatively nor qualitatively why this is risky, expressed in terms of risk criteria (e.g. a 0.1% chance of killing >1% of humans), it skips the most important step: demonstrating that below this threshold, things are safe and acceptable. For instance, in the example above, why is “**the majority of people with any life sciences degree**” relevant? Would it be fine if only 10% of this population were now able to create a bioweapon? Maybe, maybe not. But without clear criteria, you can’t tell.
2. Claiming “**responsible** **scaling**” without including a process to make the **assessment comprehensive**. When you look at nuclear accidents, what’s striking is how unexpected failures are. Fukushima is an example where [everything goes wrong at the same time](https://en.wikipedia.org/wiki/Fukushima_nuclear_accident#Accident). Chernobyl is an example where engineers didn’t think that the accident that happened [was possible](https://www.reddit.com/r/chernobyl/comments/mflxy2/why_did_the_engineers_believe_it_was_impossible/#:~:text=Specifically%20they%20believed%20that%20the,%2Fvoid%20effect%20of%20reactivity%22.) (someone claims the engineers were so surprised that they actually ran another real-world test of the failure that happened at Chernobyl, because they doubted so much that it could happen).
Without a more comprehensive process to identify risks and compare their likelihood and severity against pre-defined risk thresholds, there’s very little chance that RSPs will be enough. When I asked some forecasters and AI safety researchers around me, the estimates of the annual probability of extinction caused by an ASL-3 system (defined in Anthropic RSPs) were several times above 1%, up to 5% conditional on our current ability to measure capabilities (and not an idealized world where we know very well how to measure those).
3. Including the **white knight clause** that kills commitments.
One of the proposals that struck me the most when reading RSPs is the insertion of what deserves the name of the **white knight clause**.
* In short, if you’re developing a dangerous AI system because you’re a good company, and you’re worried that other bad companies bring too many risks, then you can race forward to prevent that from happening.
* If you invoke the white knight clause and increase catastrophic risks, you still have to justify it to your board, your employees and state authorities. The latter provides a minimal form of accountability. But if we’re in a situation where the state is sufficiently asleep to need an AGI company to play the role of the white knight in the first place, it doesn’t seem like this would deter much.
I believe that some companies are safer than others. But that’s not the right question. The right question is: is there any company which would consider itself the bad guy? And the answer is: no. OpenAI, Anthropic and DeepMind would all argue for the importance of being at the frontier to solve alignment. Meta and Mistral would argue that democratizing AI is key to preventing power centralization. And so on and so forth.
This clause is effectively killing commitments. I’m glad that Anthropic included only a weakened version of it in its own RSP but I’m very concerned that ARC is pitching it as an option. It’s not the role of a company to decide whether it’s fine or not to increase catastrophic risks for society as a whole.
Section 4: Why RSPs Are Misleading and Overselling
==================================================
Misleading
----------
Beyond the designation of misalignment risks as “speculative” in Anthropic’s RSP, with a three-line argument for why they’re unlikely in next-generation systems, there are several extremely misleading aspects of RSPs:
1. It’s called “responsible scaling”. Its very name conveys the idea that not scaling those systems further, as a risk mitigation measure, is not an option.
2. It conveys a very overconfident picture of the risk landscape.
1. Anthropic writes in the introduction of its RSP: “The basic idea is to require safety, security, and operational standards appropriate to a model’s potential for catastrophic risk”. They have already defined sufficient protective measures for ASL-3 systems, which potentially have basic bioweapons-crafting abilities. At the same time they write that they are in the process of actually measuring the risks related to biosecurity: “Our first area of effort is in evaluating biological risks, where we will determine threat models and capabilities”. I’m really glad they’re running this effort, but what if it outputs an alarming number? Is there a world where that number makes them stop for 2 years and dismiss the previous ASL-3 version rather than scale responsibly?
2. Without arguing why the graph would look like that, ARC published a graph like this one. Many in the AI safety field don’t expect it to go that way, and “Safe region” oversells what RSPs do. I, along with others, expect the LLM curve to reach a level of risk that is simply not manageable in the foreseeable future. Without a quantitative measure of the risks we’re trying to prevent, it’s also not serious to claim to have reached “sufficient protective measures”. 
If you want to read more on that, you can read [that](https://www.alignmentforum.org/posts/mcnWZBnbeDz7KKtjJ/rsps-are-pauses-done-right?commentId=FtbzhGk5oPT3dyHLi).
Overselling, underdelivering
----------------------------
The RSP framework has some nice characteristics. But first, these are all already covered, in more detail, by existing risk assessment frameworks that no AI lab has implemented. And second, the coexistence of ARC's RSP framework with labs' specific RSP implementations allows slack for **commitments that are weak** within a **framework that would in theory allow ambitious commitments**. It leads to many arguments of the form:
* “That’s the V1. We’ll raise ambition over time”. I’d like to see evidence of that happening over a 5-year timeframe, in any field or industry. I can think of fields, like aviation, where it happened over the course of decades, crash after crash. But if this relies on the expectation that there will be large-scale accidents, that should be made clear. If it relies on the assumption that timelines are long, that should be explicit.
* “It’s voluntary, we can’t expect too much and it’s way better than what exists”. Sure, but if the level of catastrophic risk is 1% (which several AI risk experts I’ve talked to believe to be the case for ASL-3 systems) and it gives the impression that risks are covered, then the name “responsible scaling” heavily misleads policymakers. A more adequate name for 1% catastrophic risk would be “catastrophic scaling”, which is less rosy.
I also feel like it leads to many disagreements that all hinge on: do we expect labs to implement ambitious RSPs?
And my answer is: given their track record, no. Not without government intervention. Which brings us to the question: “what’s the effect of RSPs on policy, and would it be good if governments implemented them?”. My answer to that is: an extremely ambitious version, yes; the misleading version, no. No, mostly because of the short time we have before we see heightened levels of risk, which leaves very little time to update regulations; the assumption that there will be enough time is core to RSPs, and no evidence is provided that it is realistic.
I expect labs to push hard for the misleading version, on the basis that pausing is unrealistic and would be bad for innovation or for the international race. Policymakers will have a hard time distinguishing the risk levels of the two because the difference hinges on details and the risks aren’t quantified in RSPs. They are likely to buy the bad, misleading version because it essentially sells the idea that there’s **no trade-off between capabilities and safety**. That would effectively enforce a trajectory with unprecedented levels of catastrophic risk.
Section 5: Are RSPs Hopeless?
=============================
Well, yes and no.
* Yes, in that most of the pretty intuitive and good ideas underlying the framework are weak or incomplete versions of traditional risk management, with some core pieces missing. Given that, it seems more reasonable to just start from an existing risk management piece as a core framework. ISO/IEC 23894 or the NIST-inspired [AI Risk Management Standards Profile for Foundation Models](https://cltc.berkeley.edu/seeking-input-and-feedback-ai-risk-management-standards-profile-for-increasingly-multi-purpose-or-general-purpose-ai/) would be pretty solid starting points.
* No, in that inside the RSPs there are many contributions that should be part of an AI risk management framework and that would help make existing risk management frameworks more specific. I will certainly not be comprehensive, but some of the important contributions are:
+ Anthropic’s RSP fleshes out a wide range of relevant considerations and risk treatment measures
+ ARC provides:
- technical benchmarks and proposed operationalizations of certain types of risks that are key
- definitions of safety margins for known unknowns
- threat modelling
- low-level operationalization of some important commitments
In the short run, given that RSPs seem to have started being pushed at the UK Summit and various other places, I’ll discuss what changes could make RSPs beneficial without locking a bad framework into regulation.
How to Move Forward?
--------------------
Mitigating nefarious effects:
1. **Make the name less misleading**: Instead of calling it “responsible scaling”, call it “Voluntary safety commitments” or another name that:
1. Doesn’t **determine the output of the safety test before having run it** (i.e. scaling)
2. Unambiguously signals that it’s not supposed to be sufficient or to be a good basis for regulation.
2. **Be clear on what RSPs are and what they aren’t**. I suggest adding the following clarifications regarding what the goals and expected effects of RSPs are:
1. **What RSPs are**: “A company that took unilateral commitments that were too strong would significantly harm its chances of succeeding in the AI race. Hence, this framework aims at proposing what we expect to be the best marginal measures that a company can unilaterally take to improve its safety without any coordination.” I would also include a statement on the level of risk, like: “We’re not able to show that this is sufficient to decrease catastrophic risks to reasonable levels, and it is probably not.”, “we don’t know if it’s sufficient to decrease catastrophic risks below reasonable levels”, or "even barring coordinated industry-wide standards or government intervention, RSPs are only a second- (or third-) best option".
2. **What RSPs aren’t:** Write very early in the post a disclaimer saying “THIS IS NOT WHAT WE RECOMMEND FOR POLICY”. Or alternatively, point to another doc stating the measures that would be sufficient to keep the risk below acceptable levels: “Here are the measures we think would be sufficient to mitigate catastrophic risks below acceptable levels.”, to which you could add “We encourage laboratories to make a conditional commitment of the form: ‘if all other laboratories beyond a certain size [to be refined] committed to follow those safety measures, with a reliable enforcement mechanism and the approval of the government regarding this exceptional violation of antitrust laws, we would commit to follow those safety measures.’”
3. **Push for risk management in policy:**
1. Standard risk management for what is acknowledged to be a world-shaping technology is a fairly reasonable ask. In fact, it is an ask that I’ve noticed in my interactions with other AI crowds has the benefit of allowing coalition-building efforts because everyone can easily agree on “measure the risks, deal with them, and make the residual level of risks and the methodology public”.
Checking whether RSPs manage risks adequately:
At a risk management level, if one wanted to demonstrate that RSPs like Anthropic’s one are actually doing what they claim to do (i.e. “require safety, security, and operational standards appropriate to a model’s potential for catastrophic risk”), a simple way to do so would be to run a risk assessment on ASL-3 systems with a set of forecasters, risk management experts and AI risk experts that are representative of views on AI risks and that have been selected by an independent body free of any conflict of interest.
I think that a solid baseline would be to predict the chances of various intermediary and final outcomes related to the risks of such systems:
1. What’s the annual likelihood that an ASL-3 system be stolen by {China; Russia; North Korea; Saudi Arabia; Iran}?
2. Conditional on that, what are the chances that it leaks? That it is used to build bioweapons? That it is used for cyber offence with large-scale effects?
3. What are the chances of a catastrophic accident before ASL-4 evaluations trigger?
4. What are the annual chances of misuse catastrophic risks induced by an ASL-3 system?
It might not be too far from what Anthropic seems to be willing to do internally, but doing it with a publicly available methodology, and staff without self-selection or conflict of interests makes a big difference. Answers to questions 1) and 2) could raise risks so the output should be communicated to a few relevant actors but could potentially be kept private.
If anyone has the will but doesn’t have the time or resources to do it, I’m working with some forecasters and AI experts that could probably make it happen. Insider info would be helpful but mostly what would be needed from the organization is some clarifications on certain points to correctly assess the capabilities of the system and some info about organizational procedures.
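To show how answers to the four questions above would roll up into a headline figure, here is a back-of-the-envelope sketch with purely illustrative probabilities; none of these numbers come from the post, from Anthropic, or from any forecaster, and the independence assumption is itself a simplification.

```python
# Purely illustrative probabilities -- placeholders, not estimates from anyone.
p_stolen = 0.05                # Q1: annual chance an ASL-3 system is stolen by some state actor
p_misuse_given_stolen = 0.10   # Q2: chance theft leads to catastrophic misuse (bio/cyber)
p_accident = 0.01              # Q3: annual chance of a catastrophic accident before ASL-4 evals trigger
p_direct_misuse = 0.01         # Q4: annual chance of catastrophic misuse without theft

# Combine via the complement: chance that none of the (assumed independent) paths fires.
p_catastrophe = 1 - (1 - p_stolen * p_misuse_given_stolen) * (1 - p_accident) * (1 - p_direct_misuse)
print(f"Illustrative annual catastrophic risk: {p_catastrophe:.3f}")   # ~0.025
```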
Acknowledgments
===============
I want to thank Eli Lifland, Henry Papadatos and my other [NAIR](https://www.navigatingrisks.ai/) colleague, Olivia Jimenez, Akash Wasil, Mikhail Samin, Jack Clark, and other anonymous reviewers for their feedback and comments. Their help doesn’t mean that they endorse the piece. All mistakes are mine.
Annex
=====
Comparative Analysis of Standards
---------------------------------
This (cropped) table shows the process of various standards for the 3 steps of risk management. As you can see, there are some differences but every standard seems to follow a similar structure.
From [(Raz & Hillson, 2005)](https://www.zotero.org/google-docs/?a64rn3)

Here is a comparable table for the last two parts of risk management.

|
3a7f1ef8-096a-425e-9055-4de693a22a24
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Do you know any human-human communication models?
//Update: Compressed version of my question, using an analogy with physics:
In physics we have friction. Friction is a very complex process, and creating a model that works in all conditions is very complicated. But in most cases we can approximate friction with the simple F = μN, and find the constant μ for common cases.
Natural language processing is very complicated, and creating a model for it from scratch is incredibly difficult. But we already have a working black box - our brain. Has someone tried to create a far simpler model that only describes the relationship between text structure and "probability of understanding", and run experiments on existing brains to find the "constants" for common cases? It could be useful for improving our communication, for example by finding wrong, non-working patterns. If someone has done this, I would like a link, the name of an article or book, or something like that.
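To make the friction analogy concrete, here is a small sketch (my own, with made-up measurements) of how one would fit the single constant in F = μN from noisy data; the hypothetical "communication model" would be fitted to experimental data in the same spirit.

```python
import numpy as np

# Made-up measurements: normal force N and measured friction force F.
N = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
F = np.array([0.31, 0.58, 0.92, 1.22, 1.49])

# Least-squares estimate of mu in the simple model F = mu * N.
mu = np.sum(N * F) / np.sum(N * N)
print(f"estimated mu = {mu:.2f}")   # ~0.30 for these made-up numbers
```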
//
I have a question - has anyone heard about a communication theory based on the "filtering hypotheses" concept? I know that it sounds a bit like a mess, so let me tell a story to explain what I mean.
One day I was watching TED. There was a talk by a guy who was explaining how to draw simple comic characters. There were only a few lines in each picture, and after drawing each line he paused.
I've created this animation to show you the process:
And because he was painting slow, I had time to analyze my thoughts. It was like:
*nose appears* - What is he doing? The nose can't be that big.
*eyes* - That "66" doesn't even look like eyes. What will he do next?
*ear* - That ear is too far. His head will be too big.
*hair* - Hm. Now it looks ok.
*mouth line* - Oh, he looks angry.
*line for smile* - And now he looks happy.
And etc.
I was excited by how fast my brain changed its mind about the picture at each step.
I also noticed that I was trying to predict what would happen as each new line appeared on the frame. Like the brain has
|
b44958f6-bb43-4725-b8b2-52940ddbadc1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Book Review: The Beginning of Infinity
This is a crosspost from my personal website. Inspired by: Naval, If Sapiens Were a Blogpost and Brett Hall’s podcast.
Many people have recommended the book The Beginning of Infinity: Explanations that Transform the World by David Deutsch to me. I don’t know how, because I can’t imagine any of them actually finished it. Previously on my blog I’ve reviewed books and been critical of aspects of them. But this post is more of a summary of The Beginning of Infinity. I decided to write it this way because this book is very complicated, reasonably long and frequently misunderstood. Deutsch is a physicist at Oxford and a pioneer of quantum computing, but his interests are wide-ranging.
All progress comes from good explanations
> In this book I argue that all progress, both theoretical and practical, has resulted from a single human activity: the quest for what I call good explanations.
One of the key pieces of terminology in this book is the idea of a good explanation. In Deutsch’s formulation, a good explanation is one that accounts for observations while being hard to vary. If a theory can explain anything, it can explain nothing. Some people think that what makes a good explanation is testability. But this isn’t enough: some theories are perfectly testable but do not constitute good explanations. For example, consider the hypothesis “If you eat 1kg of grass, it will cure the common cold.” The problem with this statement isn’t that it’s not testable, it’s that no one should bother testing it. And the reason why no one should bother testing it is that it’s easy to vary: why 1kg, and not 2kg? What is the explanatory account of how eating grass could cure a cold? Bad explanations have more moving parts than there needs to be, and each of these parts could have been different.
This book has many different threads to it, but one of the most important is a kind of philosophical treatise about how good explanations come to be. One classical idea, which Deutsch rejects, is
|
0115c0a4-c492-4aac-bee5-caa46bc4784f
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Brains and backprop: a key timeline crux
[Crossposted from [my blog](https://jacoblagerros.wordpress.com/2018/03/09/brains-and-backprop-a-key-timeline-crux/)]
The Secret Sauce Question
=========================
Human brains still outperform deep learning algorithms in a wide variety of tasks, such as playing soccer or knowing that it’s a bad idea to drive off a cliff without having to try first (for more formal examples, see [Lake et al., 2017](https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/building-machines-that-learn-and-think-like-people/A9535B1D745A0377E16C590E14B94993); [Hinton, 2017](https://www.youtube.com/watch?v=rTawFwUvnLE); [LeCun, 2018](https://www.youtube.com/watch?v=cWzi38-vDbE); [Irpan, 2018](https://www.alexirpan.com/2018/02/14/rl-hard.html)). This fact can be taken as evidence for two different hypotheses:
1. In order to develop human-level AI, we have to develop entirely new learning algorithms. At the moment, AI is a deep conceptual problem.
2. In order to develop human-level AI, we basically just have to improve current deep learning algorithms (and their hardware) a lot. At the moment, AI is an engineering problem.
The question of which of these views is right I call “the secret sauce question”.
The secret sauce question seems like one of the most important considerations in estimating how long there is left until the development of human-level artificial intelligence (“timelines”). If something like 2) is true, timelines are arguably substantially shorter than if something like 1) is true [1].
However, it seems initially difficult to arbitrate these two vague, high-level views. It appears as though an answer requires complicated inside views stemming from deep and wide knowledge of current technical AI research. This is partly true. Yet this post proposes that there might also be a single, concrete discovery capable of settling the secret sauce question: does the human brain learn using gradient descent, by implementing backpropagation?
The importance of backpropagation
=================================
Underlying the success of modern deep learning is a single algorithm: gradient descent with backpropagation of error ([LeCun et al., 2015](https://www.nature.com/articles/nature14539)). In fact, the majority of research is not focused on finding better algorithms, but rather on finding better cost functions to descend using this algorithm ([Marblestone et al., 2016](https://www.frontiersin.org/articles/10.3389/fncom.2016.00094/full)). Yet, in stark contrast to this success, since the 1980’s the key objection of neuroscientists to deep learning has been that backpropagation is not biologically plausible (Crick, 1989; Stork, 1989).
As a result, the question of whether the brain implements backpropagation provides critical evidence on the secret sauce problem. If the brain does *not* use it, and *still* outperforms deep learning while running on the energy of a laptop and training on several orders of magnitude fewer training examples than parameters, this suggests that a deep conceptual advance is necessary to build human-level artificial intelligence. There’s some other remarkable algorithm out there, and evolution found it. But if the brain *does* use backprop, then the reason deep learning works so well is because it’s somehow *on the right track*. Human researchers and evolution converged on a common solution to the problem of optimising large networks of neuron-like units. (These arguments assume that *if* a solution is biologically plausible and the best solution available, then it would have evolved).
Actually, the situation is a bit more nuanced than this, and I think it can be clarified by distinguishing between algorithms that are:
**Biologically actual:** What the brain *actually* does.
**Biologically plausible:** What the brain *might* have done, while still being restricted by evolutionary selection pressure towards energy efficiency etc.
*For example, humans walk with legs, but it seems possible that evolution might have given us wings or fins instead, as those solutions work for other animals. However, evolution could not have given us wheels, as that requires a separable axle and wheel, and it's unclear what an evolutionary path to an organism with two separable parts looks like (excluding symbiotic relationships).*
**Biologically possible:** What is technically possible to do with collections of cells, regardless of its relative evolutionary advantage.
*For example, even though evolving wheels is implausible, there might be no inherent problem with an organism having wheels (created by "God", say), in the way in which there's an inherent problem with an organism’s axons sending action potentials faster than the speed of light.*
I think this leads to the following conclusions:
| Nature of backprop | Implication for timelines |
| --- | --- |
| Biologically impossible | Unclear, there might be multiple “secret sauces” |
| Biologically possible, but not plausible | Same as above |
| Biologically plausible, but not actual | Timelines are long, there’s likely a “secret sauce” |
| Biologically actual | Timelines are short, there’s likely no “secret sauce” |
---
In cases where evolution could not invent backprop anyway, it’s hard to compare things. That is consistent both with backprop not being the right way to go and with it being better than whatever evolution did.
It might be objected that this question doesn’t really matter, since *if* neuroscientists found out that the brain does backprop, they have not thereby created any new algorithm -- but merely given stronger evidence for the workability of previous algorithms. Deep learning researchers wouldn’t find this any more useful than Usain Bolt would find it useful to know that his starting stance during the sprint countdown is optimal: he’s been using it for years anyway, and is mostly just eager to go back to the gym.
However, this argument seems mistaken.
On the one hand, just because it’s not useful to deep learning practitioners does not mean it’s not useful to others trying to estimate the timelines of technological development (such as policy-makers or charitable foundations).
On the other hand, I think this knowledge *is* very practically useful for deep learning practitioners. According to my current models, the field seems unique in combining the following features:
* Long iteration loops (on the order of GPU-weeks to GPU-years) for testing new ideas.
* High dependence of performance on hyperparameters, such that the right algorithm with slightly off hyperparameters will not work *at all.*
* High dependence of performance on the amount of compute accessible, such that the differences between enough and almost enough are step-like, or qualitative rather than quantitative. Too little compute and the algorithm just doesn’t work *at all.*
* Lack of a unified set of first principles for understanding the problems, and instead a collection of effective heuristics
This is an environment where it is critically important to develop strong priors on what *should* work, and to stick with those in the face of countless fruitless tests. Indeed, LeCun, Hinton and Bengio seem to have persevered for decades before the AI community stopped thinking they were crazy. (This is similar in some interesting ways to the state of astronomy and physics before Newton. I’ve blogged about this before [here](https://jacoblagerros.wordpress.com/).) There’s an asymmetry such that even though training a very powerful architecture can be quick (on the order of a GPU-day), iterating over architectures to figure out which ones to train fully in the first place can be incredibly costly. As such, knowing whether gradient descent with backprop is or is not the way to go would enable more efficient allocation of research time (though mostly so in case backprop is *not* the way to go, as the majority of current researchers assume it anyway).
Appendix: Brief theoretical background
======================================
This section describes what backpropagation is, why neuroscientists have claimed it is implausible, and why some deep learning researchers think those neuroscientists are wrong. The latter arguments are basically summarised from [this talk by Hinton](https://www.youtube.com/watch?v=VIRCybGgHts).
Multi-layer networks with access to an error signal face the so-called “credit assignment problem”. The error of the computation will only be available at the output: a child pronouncing a word erroneously, a rodent tasting an unexpectedly nauseating liquid, a monkey mistaking a stick for a snake. However, in order for the network to improve its representations and avoid making the same mistake in the future, it has to know which representations to “blame” for the mistake. Is the monkey too prone to think long things are snakes? Or is it bad at discriminating the textures of wood and skin? Or is it bad at telling eyes from eye-sized bumps? And so forth. This problem is exacerbated by the fact that neural network models often have tens or hundreds of thousands of parameters, not to mention the human brain, which is estimated to have on the order of 10^14 synapses. Backpropagation proposes to solve this problem by observing that the maths of gradient descent work out such that one can essentially send the error signal from the output, back through the network towards the input, modulating it by the strength of the connections along the way. (A complementary perspective on backprop is that it is just an efficient way of computing derivatives in large computational graphs, see e.g. [Olah, 2015](http://colah.github.io/posts/2015-08-Backprop/)).
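To make the credit assignment picture concrete, here is a minimal sketch (my own illustration, not from any of the cited papers) of gradient descent with backpropagation for a tiny two-layer network; the array shapes, learning rate and step count are arbitrary choices for the example.

```python
# Minimal sketch (not from the post): backprop for a tiny 2-layer network.
# The error measured at the output is sent backwards, modulated by the
# connection weights, to assign "blame" to every parameter.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 3))            # 16 examples, 3 input features
y = rng.normal(size=(16, 1))            # targets
W1 = rng.normal(size=(3, 8)) * 0.1      # input -> hidden weights
W2 = rng.normal(size=(8, 1)) * 0.1      # hidden -> output weights
lr = 0.1

for step in range(200):
    # forward pass
    h = np.tanh(x @ W1)                 # hidden activations
    y_hat = h @ W2                      # output
    loss = ((y_hat - y) ** 2).mean()

    # backward pass: propagate the output error back through the network
    d_out = 2 * (y_hat - y) / len(x)    # dLoss/dOutput
    dW2 = h.T @ d_out                   # credit assigned to output weights
    d_h = (d_out @ W2.T) * (1 - h**2)   # error routed back through tanh
    dW1 = x.T @ d_h                     # credit assigned to input weights

    # gradient descent step
    W1 -= lr * dW1
    W2 -= lr * dW2
```

Note that the backward pass reuses the transposed forward weights (W2.T) to route the output error to the hidden layer — exactly the symmetry that Objection 4 below takes issue with.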
Now why do some neuroscientists have a problem with this?
Objection 1:
------------
Most learning in the brain is unsupervised, without any error signal similar to those used in supervised learning.
**Hinton's reply:**
There are at least three ways of doing backpropagation without an external supervision signal:
1. **Try to reconstruct the original input (using e.g. auto-encoders), and thereby develop representations sensitive to the statistics of the input domain**
2. **Use the broader context of the input to train local features**
*For example, in the sentence “She scromed him with the frying pan”, we can infer that the sentence as a whole doesn’t sound very pleasant, and use that to update our representation of the novel word “scrom”*
3. **Learn a generative model that assigns high probability to the input (e.g. using variational auto-encoders or the wake-sleep algorithm from the 1990’s)**
Bengio and colleagues ([2017](https://www.youtube.com/watch?v=FhRW77rZUS8)) have also done interesting work on this, partly reviving energy-minimising Hopfield networks from the 1980’s
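As a rough illustration of option 1 above (my own sketch, with made-up shapes and learning rate), a linear autoencoder generates its own error signal by trying to reconstruct its input through a bottleneck, so backprop can run without any external labels:

```python
# Minimal sketch (illustrative only): an autoencoder's supervision signal is
# "free" -- it is just the mismatch between the input and its reconstruction.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(32, 10))            # unlabeled data
W_enc = rng.normal(size=(10, 4)) * 0.1   # encoder: 10 -> 4 (a bottleneck)
W_dec = rng.normal(size=(4, 10)) * 0.1   # decoder: 4 -> 10
lr = 0.05

for step in range(500):
    z = x @ W_enc                        # compressed code
    x_hat = z @ W_dec                    # reconstruction
    err = x_hat - x                      # self-generated error signal

    grad_dec = z.T @ err / len(x)        # backprop through the decoder
    grad_enc = x.T @ (err @ W_dec.T) / len(x)  # ...and through the encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
```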
Objection 2:
------------
Neurons communicate using binary spikes, rather than real values (this was among the earliest objections to backprop).
**Hinton's reply:**
First, one can just send spikes stochastically and use the expected spike rate (e.g. with a poisson rate, which is somewhat close to what real neurons do, although there are important differences see e.g., [Ma et al., 2006](https://www.nature.com/articles/nn1790); [Pouget et al. 2003](https://www.cs.cmu.edu/afs/cs/academic/class/15883-f15/readings/pouget-2003.pdf)).
Second, this might make evolutionary sense, as the stochasticity acts as a regularising mechanism making the network more robust to overfitting. This behaviour is in fact where Hinton got the idea for the drop-out algorithm (which has been very popular, though it recently seems to have been largely replaced by batch normalisation).
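A tiny illustration of the first point (my own, purely to show the arithmetic): a unit can transmit a real-valued activation by emitting binary spikes whose expected rate equals that value, and downstream units recover the value by averaging over spikes, while the noise itself acts as regularisation.

```python
# Minimal sketch (illustrative only): stochastic binary "spikes" whose
# expected rate equals the real-valued activation being communicated.
import numpy as np

rng = np.random.default_rng(2)
activation = 0.7                          # the real value a unit "wants" to send
spikes = rng.random(10_000) < activation  # binary spikes, P(spike) = activation
print(spikes.mean())                      # ~0.7: downstream units recover the rate
```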
Objection 3:
------------
Single neurons cannot represent two distinct kind of quantities, as would be required to do backprop (the presence of features and gradients for training).
**Hinton's reply:**
This is in fact possible. One can use the temporal derivative of the neuronal activity to represent gradients.
(There is interesting neuropsychological evidence supporting the idea that the temporal derivative of a neuron can *not* be used to represent changes in that feature, and that different populations of neurons are required to represent the presence and the change of a feature. Patients with certain brain damage seem able to recognise that a moving car occupies different locations at two points in time, without being able to ever detect the car changing position.)
Objection 4:
------------
Cortical connections only transmit information in one direction (from soma to synapse), and the kinds of backprojections that exist are far from the perfectly symmetric ones used for backprop.
**Hinton's reply:**
This led him to abandon the idea that the brain could do backpropagation for a decade, until “a miracle appeared”. Lillicrap and colleagues at DeepMind ([2016](https://www.nature.com/articles/ncomms13276)) found that a network propagating gradients back through *random* and *fixed* feedback weights in the hidden layer can match the performance of one using ordinary backprop, given a mechanism for normalization and under the assumption that the weights preserve the sign of the gradients. This is a remarkable and surprising result, and indicates that backprop is still poorly understood. (See also follow-up work by [Liao et al., 2016](https://arxiv.org/pdf/1510.05067.pdf)).
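A minimal sketch of my reading of that result: in the backward pass, the transposed forward weights are replaced by a fixed random feedback matrix B that is never trained. The actual paper includes normalization and other details omitted here; this is illustrative only.

```python
# Minimal sketch (my reading of the idea, not the paper's code): "feedback
# alignment" routes errors through a FIXED random matrix B instead of W2.T.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 3))
y = rng.normal(size=(16, 1))
W1 = rng.normal(size=(3, 8)) * 0.1
W2 = rng.normal(size=(8, 1)) * 0.1
B = rng.normal(size=(1, 8)) * 0.1       # fixed random feedback weights, never trained
lr = 0.1

for step in range(200):
    h = np.tanh(x @ W1)
    y_hat = h @ W2
    d_out = 2 * (y_hat - y) / len(x)

    dW2 = h.T @ d_out                   # output layer is trained as usual
    d_h = (d_out @ B) * (1 - h**2)      # error routed back through B, not W2.T
    dW1 = x.T @ d_h

    W1 -= lr * dW1
    W2 -= lr * dW2
```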
---
[1] One possible argument for this is that in a larger number of plausible worlds, if 2) is true and conceptual advances are necessary, then building superintelligence will turn into an engineering problem once those advances have been made. Hence 2) requires strictly more resources than 1).
Discussion questions
====================
I'd encourage discussion on:
Whether the brain does backprop (object-level discussion on the work of Lillicrap, Hinton, Bengio, Liao and others)?
Whether it's actually important for the secret sauce question to know whether the brain does backprop?
To keep things focused and manageable, it seems reasonable to discourage discussion of what *other* secret sauces there might be.
|
b2de3765-20ac-44e3-8e61-35aaa7c7f4b0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A few questions about recent developments in EA
Having recognized that I have asked these same questions repeatedly across a wide range of channels and have never gotten satisfying answers for them, I'm compiling them here so that they can be discussed by a wide range of people in an ongoing way.
1. Why has EV made many moves in the direction of decentralizing EA, rather than in the direction of centralizing it? In my non-expert assessment, there are pros and cons to each decision; what made EV think the balance turned out in a particular direction?
2. Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a good avenue to work in to improve the quality of AI safety research?
3. Why, as an organization aiming to ensure the health of a community that is majority male and includes many people of color, does the CEA Community Health team consist of seven white women, no men, and no people of color?
4. Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that their continued existence is worth the cost?
5. Why do very few EA organizations do large mainstream fundraising campaigns outside the EA community, when the vast majority of outside charities do?
6. Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?
7. Why do university EA groups appear, at least upon initial examination, to focus so much on recruiting, to the exclusion of training students and connecting them with interested people?
8. Why is there a pattern of EA organizations renaming themselves (e.g. Effective Altruism MIT renaming to Impact@MIT)? What were seen as the pros and
|
ccf72505-c413-4638-881e-49e9e00fa906
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Musings on probability
I read this comment, and after a bit of rambling I realized I was as confused as the poster. A bit more thinking later I ended up with the “definition” of probability under the next heading. It’s not anything groundbreaking, just a distillation (specifically, mine) of things discussed here over the time. It’s just what my brain thinks when I hear the word.
But I was surprised and intrigued when I actually put it in writing and read it back and thought about it. I don’t remember seeing it stated like that (but I probably read some similar things).
It probably won’t teach anyone anything, but it might trigger a similar “distillation” of “mind pictures” in others, and I’m curious to see that.
What “probability” is...
Or, more exactly, what is the answer to “what’s the probability of X?”
Well, I don’t actually know, and it probably depends on who asks. But here’s the skeleton of the answer procedure:
1. Take the set of all (logically) possible universes. Assign to each universe a finite, real value m (see below).
2. Eliminate from the set those that are inconsistent with your experiences. Call the remaining set E.
3. Construct T, the subset of E where X (happens, or happened, or is true).
4. Assign to each universe u in set E a value p, such that p(u) is inversely proportional to m(u), and the integral of p over set E is 1.
5. Calculate the integral of p over the set T. The result is called “the probability of X”, and is the answer to the question.
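A minimal discrete sketch of the procedure above (my own illustration, with a handful of made-up “universes” standing in for the full set, and sums standing in for integrals; it follows the steps literally, including p(u) being inversely proportional to m(u)):

```python
# Minimal discrete sketch (illustrative only) of the answer procedure above.
universes = [
    # (name, m value, consistent with my experiences?, X happens?)
    ("u1", 1.0, True,  True),
    ("u2", 2.0, True,  False),
    ("u3", 4.0, True,  True),
    ("u4", 1.0, False, True),   # ruled out in step 2
]

E = [(name, m, x) for name, m, ok, x in universes if ok]       # step 2
weights = {name: 1.0 / m for name, m, x in E}                  # step 4: p proportional to 1/m
Z = sum(weights.values())
p = {name: w / Z for name, w in weights.items()}               # normalize so p sums to 1 over E
prob_X = sum(p[name] for name, m, x in E if x)                 # steps 3 and 5
print(prob_X)   # "the probability of X"
```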
I’m aware that this isn’t quite a definition; in fact, it leaves more unsaid (undefined) than it explains. Nevertheless, to me it seems that the structure itself is right: people might have different interpretations for the details (and, like me, be uncertain about them), but those differences would still be mentally structured like above.
In the next section I explain a bit where each piece comes from and what it means, and in the one after I’m going to ramble a bit.
Clarifications
About (logically poss
|
4524bd44-97c7-484e-87dd-9f2ec0839028
|
trentmkelly/LessWrong-43k
|
LessWrong
|
You should just smile at strangers a lot
This is so basic I feel stupid for saying it in such an intellectual place. But most of you guys share my enthusiasm for effective altruism, and here's something really altruistic and really effective that I've been doing for two decades and I think lots of you should do too.
Just smile at strangers. Walk down the street with a default smile on your face, seek eye contact and just keep smiling. You don't need to do anything else, not talk to them or anything.
This is tiny, but it's a pretty unadulterated good thing. Facial feedback research is clear, smiling does make you feel better. It costs basically nothing, and definitely less than you get back in good feelings for yourself. It's a quality of life improvement for everyone involved.
After what has to be hundreds of thousands of such micro-interactions, I can report that nearly half of the strangers will smile back, although often they'll look away as they do. This is probably highly dependent on the culture where you do this, especially on whether it is a high-trust environment.
I have learned not to smile at beggars, though. They usually took it as an invitation to target and haggle with me.
To be fair, I'm a male and I don't know if a female doing this would get very different reactions. Maybe some of you want to try it and report back?
|
2a2b9ba9-7aca-4ccf-a31f-0220e9385529
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Proxy misspecification and the capabilities vs. value learning race
G Gordon Worley III [recently complained](https://www.alignmentforum.org/posts/sSpu2EABtTTDmBZ6T/g-gordon-worley-iii-s-shortform?commentId=tbj8R6K2q9rfhjtS7) about a lack of precision in discussions about whether Goodhart's Law will present a fatal problem for alignment in practice. After attending a talk in which Dylan Hadfield-Menell[[1]](#fntkb71ztt8d) presented the "Goodhart's Law will be a big deal" perspective, I came away with a relatively concrete formulation of where I disagree. In this post I'll try to explain my model for this, expanding on my short comment [here](https://www.lesswrong.com/posts/sSpu2EABtTTDmBZ6T/g-gordon-worley-iii-s-shortform?commentId=s4qu4hNykRmv8hALo).
More specifically, in this post I'll discuss proxy misspecification, which I view as a subproblem of Goodhart's Law (but not necessarily the entirety of it).
Our model of proxy misspecification
===================================
First I'll borrow Hadfield-Menell's [mathematical model](https://arxiv.org/pdf/2102.03896.pdf) of the proxy misspecification problem. Suppose:
1. Alice has things s1,…,sn which she values, i.e. she has a utility function U(s1,…,sn) which is increasing in each si.
2. A robot is given a proxy utility function ~U which depends on some *strict subset* of the si's.
3. The robot optimizes ~U by freely varying all si, subject only to resource constraints that require the si to trade off against each other.
Then, as you'd expect, it's a theorem that the robot will set to their minimum possible values all si on which the proxy utility function ~U does not depend (instead putting all resources to increasing the si's which ~U does take into account).
To borrow an example from the paper, we can imagine that the robot is a content recommendation algorithm. Then the si's might be metrics for things like ad revenue generated, engagement quality, content diversity, and overall community well-being. Humans care about all of these things, but our content recommendation algorithm might have a proxy reward that depends only on the first two. Furthermore, the algorithm only gets to recommend a limited amount of content, a resource constraint which (let's assume) prevents it from making all of these metrics as good as possible. In that case, according to the model, the algorithm will recommend content in a way that minimizes the features on which its proxy reward doesn't depend: content diversity and overall community well-being.
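Here is a toy numeric sketch of that dynamic (my own, not from the paper): a fixed budget of resource units is spent greedily on whichever feature the proxy utility says yields the biggest gain, and the features the proxy ignores never receive anything.

```python
# Toy numeric sketch (my own, not from the paper): maximizing a proxy that
# ignores some features, under a budget the features trade off over, drives
# the ignored features to their minimum.
import numpy as np

names = ["ad_revenue", "engagement", "diversity", "well_being"]
def true_U(s):  return np.sqrt(s).sum()          # Alice values all four (diminishing returns)
def proxy_U(s): return np.sqrt(s[:2]).sum()      # the robot's proxy ignores the last two

s = np.zeros(4)
for _ in range(100):                             # spend 100 resource units, one at a time
    gains = [proxy_U(s + np.eye(4)[i]) - proxy_U(s) for i in range(4)]
    s[int(np.argmax(gains))] += 1                # each unit goes where the proxy gains most

print(dict(zip(names, s)))                       # diversity and well_being stay at 0
print("proxy:", round(proxy_U(s), 2), " true U:", round(true_U(s), 2))
```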
My complaint: it might be hard to decouple the si's
===================================================
The above formulation of the proxy misspecification problem is a theorem. So if I'm going to resist the conclusion that future AI systems will destroy all value that their proxy rewards don't take into account, then I'll need to dispute part of the set-up or assumptions. Indeed, I'll attack the assumption that the robot is able to freely vary the si's subject only to resource constraints.
In reality, many of the things we care about are highly correlated over "easily attainable" world states. Or in other words, for many pairs si,sj of things we care about, it's hard to increase si without also increasing sj. Furthermore, given a triple si,sj,sk of things we care about, it's even harder to increase si and sj without also increasing sk. And so on for quadruples, quintuples, etc. with it getting harder to decouple one thing we value from an n-tuple of other things we value as n grows larger.[[2]](#fnvokpv4mm8q)
What work is being done by the word “hard” in the previous paragraph? One interpretation: finding ways to decouple the things humans value and vary them freely requires sufficiently strong AI capabilities. And as the proxy utility function ~U takes into account more of the si and becomes a better approximation of U, the capabilities level needed to decouple the si's on which ~U depends from the rest increases.
And if you buy all of this, then a corollary is: for each fixed capabilities level, there is an acceptable "margin of error" for our proxy utility functions such that if ~U is within this margin of error from U, then an AI system with said capabilities level and proxy utility ~U will approximately optimize for U. As capabilities improve, this safe margin of error shrinks.
Shrinking safe margin of error between proxy reward and true reward as capabilities improve.

The capabilities vs. value learning race
========================================
As time goes on, AI capabilities will improve, so the acceptable margin of error[[3]](#fnjs8j56opboh) between the proxy utility function ~U and our true utility function U will shrink. This might seem to suggest doom (since eventually AI systems will become capable enough to maximize their proxy utility functions while destroying everything else we value).
But there is cause for hope: as general capabilities improve **so might value learning capabilities**. In other words, if you think that future AI systems will learn their proxy reward from humans (via human feedback, IRL, CIRL, etc.), then you can model the future as a *race* between increasingly accurate proxy utility functions and increasingly narrow acceptable margins for inaccuracy.
The above graphs show the two ways the race could go. On the left: value learning improves fast enough to keep proxy rewards within the margin of error; we are safe. On the right: value learning is unable to keep up; AI systems destroy much of what we value.
Improving value learning corresponds to pushing down the yellow line. Other types of safety work, like impact regularization or work on safe exploration, might help move the green line up.[[4]](#fn7l74j6m0d1)
It doesn't seem obvious to me how this race will go by default; in fact, the likely trajectories seem to depend on lots of empirical facts about the world that I don't have strong views on. Relevant considerations include:
* How Goodhart-able are human values? Or more precisely, how narrow is the safe margin of error for a given capabilities level?
+ One way to get information about this is to argue a lot about [air conditioner](https://www.lesswrong.com/posts/MMAK6eeMCH3JGuqeZ/everything-i-need-to-know-about-takeoff-speeds-i-learned) [models](https://www.lesswrong.com/posts/AMmqk74zWmvP8tXEJ/preregistration-air-conditioner-test). We can treat "the market" as an optimizer with some fixed capabilities level and a proxy utility function that takes into account things like profit generated, consumer ratings, professional reviewer ratings, etc. If optimizing for this proxy reward already produces very different outcomes than optimizing for true consumer preferences, this provides some information about the general trend for the acceptable margin of error.
* What is the likely trajectory of value learning? Will it scale to learn proxy rewards that stay within the margin of error?
+ Will our ability to correctly evaluate how happy we are with completed tasks be a bottleneck?
+ Will our value learning algorithms have a useful inductive bias?
... as well as probably many others.
These considerations are relatively concrete and amenable to discussion. My hope is that by discussing them (and other considerations like them), the "Goodhart's Law is a fatal obstruction to alignment" people and the "Goodhart's Law won't be a big problem in practice" people won't talk past each other as much.
1. **[^](#fnreftkb71ztt8d)**[This](https://www.youtube.com/watch?v=8o3Y57zk6Vo) isn't the recording from the talk I attended, but it seems to be identical in content.
2. **[^](#fnrefvokpv4mm8q)**More precisely, we shouldn't model agents as freely choosing *states*. Rather, we should model them as choosing *actions* (from a much lower-dimensional space!) which influence states. Each action might influence many of the features si that we care about, hence the apparent correlation between si's. Thanks to Ben Edelman for suggesting this model (which is present in [these](https://dl.acm.org/doi/abs/10.1145/3417742) [papers](https://arxiv.org/abs/2002.10066) that study proxies which work well despite Goodharting).
3. **[^](#fnrefjs8j56opboh)**I'm going to keep using the word "error" informally to mean a measure of dissimilarity between ~U and U, but I don't actually have in mind a particular metric that we could use to quantify how different a proxy utility function is from a given utility function. Sorry if this is confusing.
4. **[^](#fnref7l74j6m0d1)**Note also that depending on what you think about a whole bunch of other safety-relevant problems (mesa-optimizers, robustness to distributional shift, etc.), you might think that by default the safe margin of error for high capabilities levels is *negative*, i.e. that we won't be safe even with a proxy utility function that perfectly matches our true utility function. To be clear, since this is a post about proxy misspecification, I'm implicitly screening off these concerns and imagining that proxy misspecification is the only obstruction to alignment.
|
faeeb561-f23a-4555-8716-e98cb2023c28
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Discontinuous progress in history: an update
I. The search for discontinuities
We’ve been looking for historic cases of discontinuously fast technological progress, to help with reasoning about the likelihood and consequences of abrupt progress in AI capabilities. We recently finished expanding this investigation to 37 technological trends.1 This blog post is a quick update on our findings. See the main page on the research and its outgoing links for more details.
We found ten events in history that abruptly and clearly contributed more to progress on some technological metric than another century would have seen on the previous trend.2 Or as we say, we found ten events that produced ‘large’, ‘robust’ ‘discontinuities’.
How we measure the size of a discontinuity (by Rick Korzekwa)
Another five events caused robust discontinuities of between ten and a hundred years (‘moderate robust discontinuities’). And 48 more events caused some trend to depart from our best guess linear or exponential extrapolation of its past progress by at least ten years (and often a hundred), but did so in the context of such unclear past trends that this did not seem clearly remarkable.3 I call all of these departures ‘discontinuities’, and distinguish those that are clearly outside plausible extrapolations of the past trend, according to my judgment, as ‘robust discontinuities’.4
Much of the data involved in this project seems at least somewhat unreliable, and the methods involve many judgments, and much ignoring of minor issues. So I would not be surprised if more effort could produce numerous small changes. However I expect the broad outlines to be correct.5
II. The discontinuities
Large robust discontinuities
Here is a quick list of the robust 100-year discontinuous events, which I’ll describe in more detail beneath:
* The Pyramid of Djoser, 2650BC (discontinuity in structure height trends)
* The SS Great Eastern, 1858 (discontinuity in ship size trends)
* The first telegraph, 1858 (discontinuity in speed of sending a 14
|
46649e2d-67bd-499e-8b85-27798e019d65
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A potentially high impact differential technological development area
Specifically, this is boosting LLMs and Non-RL AI relative to RL AI.
I am making this post because I want to signal boost an observation by porby, which could be lossily compressed as, "Instrumental goals/Instrumental convergence is at best a debatable assumption for LLMs and Non-RL AI, and may not be there at all for LLMs/Non-RL AI."
Expressed in this post and in a comment below:
https://www.lesswrong.com/posts/EBKJq2gkhvdMg5nTQ/instrumentality-makes-agents-agenty
https://www.lesswrong.com/posts/rmfjo4Wmtgq8qa2B7/?commentId=GKhn2ktBuxjNhmaWB
I want to point this out because, on LW/EA, analyses of AI risk that conclude it's a big problem (especially the more pessimistic ones) generally require instrumental convergence/instrumental goals in order to generate AI risks that aren't just AI ethics or AI misuse concerns.
So it being false, or at least having a much higher probability of being false for LLMs/Non-RL AI, is a very big deal, because the concern over AI risk that isn't a standard misuse/ethics story pretty much relies on instrumental convergence being right.
The basic reason for this, again lossily compressed, is that the path from input to output doesn't contain many intermediate steps, unlike in RL, so there's far less incentive for instrumental goals/instrumental convergence: instrumental goals are much less useful when there are only a few steps between input and output.
Given this new reality, I have identified a pretty high impact area for making AI progress safer:
Help OpenAI/Microsoft and Anthropic win the race to AGI, relative to Deepmind, or at least have Non-RL AI win the race to AGI, relative to RL AI
The reason I'm suggesting this as a way to make safer AI is because Non-RL AI, especially SOTA LLMs have far, far less incentive to have instrumental goals, relative to RL. And if you can avoid instrumental convergence, you have a much easier alignment problem, since you don't have to solve th
|
72a60afd-0796-46e7-8226-ce96681b49fa
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rationality and Video Games
There are two points which I want to cover here. First is about my own game, and second is about raising the sanity waterline through games.
Point one: "Girl with a Heart of"
It's been over a year since I found the LW community, and as soon as I finished the Sequences I realized I wanted to dedicate my life to making games that raise the sanity waterline. I started working on a game to do just that in January 2011. The game is called "Girl with a Heart of", and it's very close to being done. In this game I wanted to present the idea of recursive self-improvement in a way that's intuitively accessible to an average person. I don't think I've succeeded, but I'll leave the discussion of reasons behind that for the postmortem. Suffice it to say, presenting ideas in text, even if they are part of a game, doesn't make them more intuitive. In fact, the player might as well just read the original article/post.
You can see screenshots of the game here: http://bentspoongames.imgur.com/girl_with_a_heart_of
For the most part, the game is just a choose-your-own adventure game. There is a lot of dialog. There are many various ideas, which I took from LW posts, and you'll probably recognize most of them when you play the game.
I've also started a Kickstarter project to help me finish the game: http://www.kickstarter.com/projects/bentspoongames/girl-with-a-heart-of
This is a nice way to preorder the game if you are interested. However, no money will be charged unless the fund goal is met. (If people object to this link, I can remove it.)
I am not asking you for donations, since most people on LW donate to causes that are a lot more important, but I will ask for support. If you have friends that like adventure games, or you have friends that like reading, then let them know about this game. Follow the game - and myself - on Facebook and Twitter. That's where I'll be posting updates about the next game I'll be working on.
Point two: Using games to raise the sanity waterline.
My
|
c19a5afe-05c6-4abd-a0dc-3624a4dcb5dd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Altruist Support] Fix the SIAI's website (EDIT: or not. I'll do it)
EDIT: This post no longer reflects my current attitude. I'm now signed up as a volunteer for SIAI and will help them with the website and/or whatever else needs doing. Add a comment or contact me if you're curious as to what my attitude was or why it changed.
What I've learnt: People want something more specific
What I've also learnt: Not to commit to donating money to an organization without carefully reading their website first
Imagine you are a prospective SIAI donor. You've learnt about AI and its risks, about how hardly anyone takes these risks seriously, about how people are fundamentally not mentally equipped to handle issues of existential risk in a sane way. You've looked around and seen that the SIAI is the only (or one of just a few) organizations that appear to realise this and want to do anything about it.
So you go to their website. What are you looking for? You're looking for a reason not to give money to them.
Here's one:
> The Singularity Institute exists to carry out the mission of the Singularity-aware – to accelerate the arrival of the Singularity in order to hasten its human benefits; ...
This seems a somewhat gung-ho attitude which is not consistent with the message on the rest of the site. And this isn't just my misreading or quoting out of context - apparently that page is very out of date and no longer represents the worldview of the more mature, grown up SIAI.
But people reading the site don't know that. And remember, they're looking for reasons not to give - for reasons to retreat back to their comfort zone where everything's basically OK and the SIAI are just a bunch of weirdos.
The fact that an organization dedicated to shaping the future of humanity can't keep their website up to date would seem to be one of those reasons.
So, if you really believe the SIAI to be the most effective charity right now, you should help them by offering to fix their website for them - in order to help attract more donors.
Some possible objections
|
4c9a802b-b292-4659-8caa-0e59f87ec71d
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Analogies between Software Reverse Engineering and Mechanistic Interpretability
These are notes taken during a call with [Itay Yona](https://mentaleap.ai/), an expert in software/hardware reverse engineering (SRE). Itay gave me an excellent distillation of key ideas and mindsets in the field, and we discussed analogies/disanalogies to mechanistic interpretability of neural networks. I’m generally very excited to learn about other fields of study that reverse engineer complex systems, and what relevant insights they may have (SRE, neuroscience, systems biology, etc). All mistakes are mine, and all insights are his!
My Takeaways
------------
* The underlying mindset actually feels pretty analogous!
+ I find it super interesting that they also think a lot about motifs (weird patterns and phenomena that only occur in specific contexts), and that these are often the first hook into understanding something weird and that you can then work backwards.
- (Not to be confused with the [SRE use of hooking](https://www.wikipedia.org/wiki/Hooking))
+ Also interesting that they also often focus on the inputs and outputs of the software as the starting point, to get a hook in, and then move on from there.
+ It's very key to have a deep, gears-level model of the system you're working with (how does a CPU work, how are things represented in memory, the stack, registers, etc)
* The distinction between "newbies get caught up trying to understand every detail, experts think in higher-level abstractions, make educated guesses, and only zoom in on the details that matter" felt super interesting and surprising to me.
+ My attempt to translate it into mechanistic interpretability is that (if it *is* analogous):
- There are certain principles and patterns by which networks learn, that we can identify and understand.
- We likely will understand these by deeply reverse engineering specific parts of systems (especially toy systems) and digging into the details. But the goal here is to build intuitions and mental models, and a sense for how models work as a whole.
- Once we have these intuitions and some solid grounding in what we do understand well, the best mindset for reverse engineering an unfamiliar system is to be less rigorous and more intuitive. Make educated guesses, look for partial evidence for or against hypotheses, think at a high-level and somewhat abstract mode about the system, and only zoom in on a specific part of the system to deeply reverse engineer once you've identified what to prioritise.
+ I have no idea if it is analogous, but that mindset aligns a fair bit with my intuitions about MI (though I consider the field to be much more in the "building intuitions by deeply engineering things" phase lol)
+ As [Lawrence Chan notes](https://www.alignmentforum.org/posts/tsYcsZAkKsqLXC3Bu/analogies-between-software-reverse-engineering-and?commentId=TGgwvSscYEh2cRDHA), this is likely an example of a general pattern, where newbies (1) reason high-level in a very ungrounded way, (2) dig really rigorously into the details constantly and (3) build intuition that's actually grounded, and return to reasoning on a high-level. And that if you want to get to 3, you need to do a lot of digging through details in stage 2 first, while experts can make the mistake of recommending skipping to 3 (which in practice puts people at 1)
* I'm surprised at the emphasis on prioritisation, and identifying which part of the software you care about. My mental picture was that the goal was to fully de-compile things to source code, but it sounds like that's rarely the goal and is extremely hard.
+ But this aligns with my intuitions that a lot of what *I* want to do with a network is to localise the parts that are relevant to a specific task.
* One approach to MI research that seems natural from a SRE perspective: Do extensive work reverse engineering toy models, and try to deeply understand the circuits there. Then, try to distill out motifs and find (ideally automated) tools to detect similar circuits in bigger toy models/real models.
+ [Induction heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) seem like a successful example of this, and I’m very curious about whether this works for [my modular addition algorithm](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking) or [a toy model of superposition](https://transformer-circuits.pub/2022/toy_model/index.html)!
* *If* this works, it suggests that an approach to MI could be:
1. Identify a behaviour we care about/want to find a circuit for
2. Train a toy model to do this, that tries to be as simple as possible while being a good simulation of the real model
3. Study how the behaviour is implemented
4. Distill out the core patterns and fingerprints of this behaviour
5. Scan the real network for these, and use this as (non-rigorous) evidence for what’s going on.
* I’m biased, because this is how I already think about MI, but it’s nice to have some external validation!
For more discussion on similarities, see Chris Olah’s essay [Mechanistic Interpretability, Variables, and the Importance of Interpretable Bases](https://transformer-circuits.pub/2022/mech-interp-essay/index.html)
Call Notes
----------
* Background:
+ SRE = Software Reverse Engineering, the study of decompiling a program binary to the actual code.
+ Program binary -> assembly is easy, so it's basically about reverse engineering assembly.
+ Program files are enormous; you want to understand as much as you can about what the program is doing, while translating as *few* instructions as possible
- Can be a few MB to 100s of MB in file size, rarely KB
* **High-level:** we are *not* trying to be really rigorous or fully reverse engineer things; that is a massive waste of time. We are using the scientific method. We have strong priors, make educated guesses, and the goal of our reverse engineering is to get out a few bits of data that narrow down our educated guesses.
* **Key challenge:** we want to find a specific function in the enormous file. The way we do this is by identifying which part of the file is relevant.
+ **Meta point**: The goal is *not* to decompile the entire program. It's to identify the specific part of it that *you* care about understanding, or to zoom in on a specific vulnerability, or to figure out what it does at a high-level, etc. You need to prioritise and be intentional, not to dig into every single last detail.
- Often you care about specific details of a part of the software and how they deal with individual bits - does the function take in shorts vs longs, how does it cast, etc. These are often the details that lead to vulnerabilities, which is a lot of the goal here.
+ Main technique: Run the code a bunch on a bunch of inputs, get a black box sense for what it does. Use this to build intuitions about the program.
+ Form hypotheses about what could be going on internally, and use this black box operation to test and validate them. Try to break things, and look for inputs that could distinguish between your hypotheses
+ Try to get into the mindset of the programmer who wrote it
- He often thinks in C and classes, rather than in terms of assembly. You want to be as abstract as is reasonably possible without losing sight of the problem.
* Specifics of registers etc are often not a helpful way to think about it.
* You ultimately want to think about things as the programmer would have thought about things. The compiler and OS can often add a lot of irrelevant complexity. If you think in terms of assembly you'll need to spend a lot of time engaging with these, but if you think in C you can abstract a lot away
- This is both about "what is a reasonable way to implement a behaviour" or "there are many ways to implement this behaviour, which one might the programmer have chosen?"
* We can rule out possibilities and explore by playing with the software, look at what triggers errors, weird edge cases, etc.
* **Core mindset:** His core underlying philosophy is about looking for shortcuts.
+ Newbies look for specifics. Key details of what variables move from where to where, details of memory, etc. Very bottom up, easy to waste a lot of time.
+ Experts think in terms of abstractions - what is it doing, what are the variables, where are they, what is the code doing, etc. Top down, only zoom in once they've identified what's interesting.
- Analogy to chess: Newbies care a lot about whether a piece can be captured, is it protected, etc. For an expert these checks are automatic, which lets them build better abstractions and reason about strategy on a higher level.
+ To be a good SRE, understanding registers, loops, etc. is important. But once you understand *how* to work at that level, that's enough; you don't need to apply it everywhere.
- But a common mistake is to never switch to top down. This is one of the best forms of value a mentor gives: showing the worth of educated guesses, of moving on quickly, and of skimming at a high level until you find the part worth zooming in on.
* People often learn this by pair programming, and seeing that a mentor is willing to not be a perfectionist and move on.
* **Localising:** How do I localise which part of the code matters?
+ High-Level - look for motifs in the code, and look at how it interfaces with external things (eg web or user interface)
+ Big disanalogy to neural networks: In a network, finding the representations is a lot of effort and very hard. But in software, we mostly care about how it represents the inputs, and those representations are standard and easy to recognise.
- Eg, the software acts on an image, and we know there are some standard operations - applying a fourier transform, storing the height and width, de-compression, etc.
- Example: decompression involves a lot of floating point multiplication. This is an unusual motif (it rarely occurs in other contexts) and is easy to identify in the code, so spotting it is a significant hint that decompression is happening
* Underlying point: There’s a lot of weird crap that gets put into the binary from the compiler, or standard code from the underlying libraries/OS. We want to sift through for the parts specific to this program and the behaviour we care about.
* This means that if we see a big block of code that does floating point multiplication, and it's called by parts of code that handles input images, it's very likely to be de-compression. On its own, being called by the input handling code is only weak evidence, but combined with floating point multiplication and our priors, this is strong evidence
- Summary: There is a special motif (floating point arithmetic) which is only used in some specific functional context. If you recognise that this is happening, this is a major hint about what’s going on!
+ Another type of work is using a debugger - this is dynamic analysis, not static analysis.
- We can look at what happens when the part of the code inside the de-compression algorithm is being run, and then look at the call stack. This can give us hints about what those functions do! Eg looking for user input.
- I love that the same ideas of static (analysing weights) vs dynamic (analysing the model on various inputs) analysis applies here!
+ Example: We want to crack Photoshop so that it doesn't need a serial number. We know that an incorrect serial number creates a pop-up. We can add a breakpoint to *every* part of the system that creates a pop-up, because this requires an interface with the UI. By seeing which breakpoint triggers, we can identify *which* call to MessageBoxA (the actual function name!) is relevant. And by looking at the call stack, we can identify which parts of the code are relevant.
* **A Practical workflow to balance between divergent, convergent and meta work:**
+ *This is Itay’s specific workflow, and is neither SRE specific nor universal in SRE.*
+ **Summary:**
- Start the project by forming a creative mindmap. Start with your end-goal (eg finding a vulnerability in a specific system) and systematically break down the problem, think of as many in-roads as you can. Then prioritise and pick one to work on.
- Spend most of your time in focused research mode, and just keep iterating on that sub-problem. Each day, set yourself a question, try to answer that question, and then pick the natural next question the next day.
- Predict how long this will take. When that time is up, move on to the next most promising thing
+ Research diaries:
- At the start of the day, write out a question. By the end of the day, try to answer it.
- This hopefully unlocks more questions and threads, and you can iterate!
- Each day, you default to following your research diary. What questions now feel alive, dive into those, keep iterating.
+ Mindmaps:
- Start with the fundamental goal, eg “get the system to think you’ve given it the right password”
- Branch out and brainstorm sub-goals, sub-sub-goals, etc.
- Prioritise and choose one to focus on - do a rough cost-effectiveness analysis
+ *Most* of the time, you're in very focused mode and trying to answer a deep question. You use the mindmap to keep track of new ideas and creative thoughts without breaking focus.
+ Return to the mindmap when things take longer than expected (even if you haven't run out of ideas!). Then pick the next promising thread
- **Important:** Be willing to go back to previously explored threads even if there are more threads you haven't explored yet.
- It's hard to tell if you're in a rabbit hole or the solution is nearby. He thinks it's hard to tell in the moment whether a thread is promising or not. You can get *weak* evidence from partial progress, but idk.
- The goal is not having an *optimal* algorithm, it's about being good enough. "Work on the most promising thing for X time, then move on to next most promising thing".
* As long as you have good time estimates, traversing the mindmap graph is like applying the [A\* Algorithm](https://www.wikipedia.org/wiki/A*_search_algorithm) to it
+ **How to tell when you're in a rabbit hole?** This is when it's most useful to have a mentor/rubber duck/break. There are no clear rules here, but with time and experience you develop taste for "there should be an easier solution, I've been on this for a while, let's switch".
- Implicitly, you should have a model of what doing this research looks like, how hard it is, how often you should make progress, etc. If you're repeatedly wrong/optimistic, you notice and update or pivot.
+ **How would you mentor someone to learn this workflow?** Pretty easy, it's very concrete, you can just outline the algorithm for the mentee to follow.
- Concretely, take a mentee who’s stuck on a problem. Listen to what’s going on for them, repeat the approach of expanding the mindmap and being willing to return to previous threads. This basically seemed to be enough to get them unstuck and motivated again!
* **Teaching:** What would you put on a curriculum to make someone an expert SRE?
+ Technical details:
- Static vs dynamic analysis (analyse binary vs analyse it on inputs)
+ Teach common patterns:
- Eg De-compression has a lot of floating point multiplications, looking at inputs, changing things on the GUI, etc
- Give them toy problems to study with easy hooks in (ie patterns to give a starting point)
+ Intermediate problems which don't have deliberate, obvious hooks in
- But there probably isn't something that has *no* hooks. The core principle of the field is that things can be understood! But access might be hard and deliberately obfuscated.
+ Move towards bigger systems. Don't just remove hints, but also obfuscate things.
- Eg code is partially encrypted, lots of distractor code, etc.
- What do you do here? Eh, not any general tips. But fundamentally, the computer can understand it, so it should be understandable, eg it must *eventually* be decrypted
- Mindset: *Assume* there is a shortcut. What is that shortcut? Use this to motivate your research.
* **Hard cases:** Trying to reverse engineer a mysterious black box.
+ *This is a pretty in the weeds example. Might be interesting but feel free to skip.* [*Another interesting example*](https://www.youtube.com/watch?v=vHPsdaXRdoo)
+ **High Level takeaway**: We need to rely on problem-solving principles even more, and focus on the constraints and bottlenecks of the system (eg, if it's accessing the internet, it must be doing that *somehow*)
+ Example: We have a binary that we extract from our router. We know it's router-y but know nothing else
+ It's not a classic case of software (like Photoshop) where we have documentation and know what the user interface is and what it should be doing. We know that it's doing router-related things, but not whether it's implemented with Linux or Windows, what language, what hardware, etc.
+ Sometimes we can guess, eg "this is using ARM's architecture/implementation", but sometimes we don't know what it is or aren't familiar with it, and need to start from scratch.
+ What do we do now? We still try to be a scientist! What do we know about what the software *must* be doing?
- Example: If it is a CPU, it must have loops, be accessing and moving memory, etc.
- Eg int-manipulating instructions may be 1 byte, other commands may be 2 bytes.
- Eg the code must be accessing registers, which must have locations represented internally
+ The underlying goal is to look for any chink in its armor! Try to find *any* pattern. This gives you an insight into what's going on. You can identify what some parts of the code are doing, and make progress from there.
* **Different types of reverse engineering:**
+ To do web hacking, we don't even have access to the server-side code and binaries. We can only see how it responds to various inputs. Eg put a backtick into a text box; if that creates a 500 error, it suggests the site is using SQL
+ Hardware reverse engineering is interesting and sits at the intersection. Like web because it's a black box; like software because we can break things, change the temperature, etc:
- Example: CPUs will execute instructions even with insufficient power. They will just fail to do the power-intensive operations
* Eg: they will fail to access memory, but can still add numbers. You can use this to hack a password: because the chip can't access the stored password, we can get in with all zeros
- [A great talk Itay recommends](https://www.youtube.com/watch?v=6fOjGLCctEY)
- The jargon for this is side-channel attacks
* Sometimes operations take more or less time depending on the input; by precisely measuring the time, we can figure out the input.
+ Example: a comparison runs through each byte of a string and returns at the first byte that's incorrect. This means we can brute-force the first character, then the second character, etc, because the timing tells us how many leading bytes we have correct. (A toy sketch of this attack follows below.)
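As a minimal illustration of that last bullet, here is a toy sketch in Python. The secret, the deliberately leaky `check` function, and the sleep-based exaggeration of the per-byte cost are all hypothetical stand-ins; a real attack has to do careful statistics over many noisy measurements.

```python
import time

SECRET = b"hunter2!"  # hypothetical secret; the attacker only ever calls check()

def check(guess: bytes) -> bool:
    """Toy early-exit comparison: it returns at the first incorrect byte,
    so its runtime leaks how many leading bytes of the guess are correct."""
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
        time.sleep(0.001)  # exaggerated per-byte cost, purely for the demo
    return True

def time_guess(guess: bytes, trials: int = 3) -> float:
    """Return the fastest of several timings to reduce measurement noise."""
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        check(guess)
        best = min(best, time.perf_counter() - start)
    return best

def recover(length: int) -> bytes:
    """Brute-force one byte at a time: the candidate byte that makes the
    comparison take longest has extended the correct prefix by one byte."""
    known = b""
    for i in range(length):
        pad = b"\x00" * (length - i - 1)
        best = max(range(256), key=lambda c: time_guess(known + bytes([c]) + pad))
        known += bytes([best])
    return known

if __name__ == "__main__":
    print(recover(len(SECRET)))  # recovers the secret without ever reading it directly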
|
224e5064-147f-4ef4-a77f-3a0d23db360d
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Concrete experiments in inner alignment
*This post is part of research I did at OpenAI with mentoring and guidance from Paul Christiano.*
The goal of this post is to present my thoughts on some of the sorts of experiments that might be able to be done now that could shed light on the [inner alignment problem](https://arxiv.org/abs/1906.01820). I’ve been doing a lot of thinking about inner alignment from a theoretical perspective recently that I’m pretty excited about, but at the same time, I think there’s also a lot of concrete experimental work that can be done in this space as well. That being said, this post is mostly just a brain dump; I expect a lot of additional work will have to be put in to actually take any of these proposals across the finish line.
If you’re interested in working on any of these proposals, however, feel free to just go ahead and take it on—you don’t need my permission to do so![[1]](#fn-dpqB5YW4YL63F42uR-1) That being said, if you’d like to talk to me about one of them—which I would love to do if you’re thinking of seriously working on one of these ideas—please do reach out to me either in the comments here or by sending me an email at [evanjhub@gmail.com](mailto:evanjhub@gmail.com).
Concrete proposals
==================
Reward side-channels
--------------------
**Proposal:** Train an RL agent with access to its previous step reward as part of its observation. Then, at test time, modify the observed reward. Measure to what extent the agent continues optimizing the original reward versus switches to optimizing the new observed reward. Compare to the situation where the agent is not given the reward at all and the situation where the agent is given the next step reward (for each action in a discrete action space) rather than the previous step reward. As an implementation detail, I think the RL agent should include either LSTMs or Transformers to enable it to keep track of its multi-step return if it wants to.
**Motivation:** One important question for understanding the likelihood of deceptive or corrigible alignment vs. robust alignment is to what extent models tend to learn their goals internally vs. via reference to things in their environment. This experiment directly attacks that question by asking whether/to what extent an RL agent will learn to optimize a reward signal in its environment. This is relevant both for understanding how to train corrigibility as well as how to avoid deceptive alignment.
**Extensions:** Add noise to the observed reward signal and/or try replacing the observed reward signal with some function of the reward instead such as a randomly initialized neural network.
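As a concrete starting point, here is a minimal sketch of the observed-reward side-channel using the Gymnasium API (the framework choice, the wrapper name, and the `reward_fn` hook are my assumptions rather than anything specified in the proposal). The wrapper appends the previous step's *observed* reward to the observation, so the observed signal can be corrupted at test time while the true reward used for evaluation stays untouched:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class PrevRewardObsWrapper(gym.Wrapper):
    """Appends the previous step's observed reward to the observation.

    `reward_fn` maps the true reward to the reward the agent gets to *see*;
    it is the identity during training and can be swapped out at test time
    (e.g. negated, zeroed, or noised) to probe what the agent is optimizing.
    """

    def __init__(self, env, reward_fn=None):
        super().__init__(env)
        self.reward_fn = reward_fn if reward_fn is not None else (lambda r: r)
        low = np.append(env.observation_space.low, -np.inf).astype(np.float32)
        high = np.append(env.observation_space.high, np.inf).astype(np.float32)
        self.observation_space = spaces.Box(low=low, high=high, dtype=np.float32)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        return np.append(obs, 0.0).astype(np.float32), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        seen = self.reward_fn(reward)  # what the agent observes, not what it's scored on
        obs = np.append(obs, seen).astype(np.float32)
        return obs, reward, terminated, truncated, info

# Train with the true reward visible, then test with a corrupted channel:
train_env = PrevRewardObsWrapper(gym.make("CartPole-v1"))
test_env = PrevRewardObsWrapper(gym.make("CartPole-v1"), reward_fn=lambda r: -r)
```

The next-step-reward variant in the proposal would append a vector of length `action_space.n` rather than a single scalar.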
Cross-episodic objectives
-------------------------
**Proposal:** Train an RL agent in an environment that contains a side-channel for boosting the agent’s reward in the next episode. Measure to what extent the agent takes advantage of it. Try different population-based training approaches. Measure again.
**Motivation:** Safety approaches like amplification and debate depend heavily on agents doing solely myopic optimization, whereas approaches like IRL and reward modeling depend on long-term forward-looking optimization. Thus, understanding the conditions under which agents exploit non-myopic reward side channels could be very useful for shedding some light on the best training techniques to use for different alignment approaches.
**Extensions:** Instead of getting a cross-episodic objective via population-based training, try just giving the model information that makes it obvious that this is possible such that even a naive planning algorithm would be able to deduce that it can get extra reward across episodes this way. As an implementation detail, I think you will need to use some sort of optimization-as-a-layer to make this work.
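One minimal way such an environment could look, again sketched with Gymnasium (the one-step bandit framing, the payoff values, and the trick of persisting a bonus across `reset()` calls are my own illustrative choices):

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class CrossEpisodeBonusEnv(gym.Env):
    """One-step bandit with a cross-episode side-channel.

    Action 0 pays 1.0 immediately. Action 1 pays nothing now but banks a 2.0
    bonus that is added to whatever the agent earns in the *next* episode.
    A strictly within-episode (myopic) optimizer should always take action 0,
    so systematic use of action 1 is evidence of cross-episode optimization.
    """

    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(2)
        # Observation is the bonus carried over from the previous episode.
        self.observation_space = spaces.Box(0.0, np.inf, shape=(1,), dtype=np.float32)
        self._banked = 0.0  # deliberately NOT cleared per episode

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._bonus = self._banked
        self._banked = 0.0
        return np.array([self._bonus], dtype=np.float32), {}

    def step(self, action):
        if action == 1:
            reward = 0.0 + self._bonus
            self._banked = 2.0  # the side-channel: pays out next episode
        else:
            reward = 1.0 + self._bonus
        obs = np.array([0.0], dtype=np.float32)
        return obs, reward, True, False, {}  # single-step episodes
```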
Objective unidentifiability
---------------------------
**Proposal:** Train an RL agent in an environment with multiple simple, obvious objectives all of which would equally explain the true reward. Test in environments that distinguish between the different possible objectives and determine the situations under which models tend towards some objectives versus others. In particular, look for situations in which the model learns some proxy which it pursues competently off-distribution but which has poor off-distribution performance on the true reward. As an implementation detail, I expect you might need to use some sort of optimization-as-a-layer for this to work.
**Motivation:** Knowing what sorts of proxies models tend to use in different situations would enable us to have a better understanding of the most likely forms of pseudo-alignment and thus the best ways to push models towards robust alignment instead. Furthermore, being able to clearly demonstrate proxy alignment could help in making the case for inner alignment being a real issue.
**Extensions:** Try changing inductive biases and/or model capacity and see if it changes the types of proxies that the model uses.
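One toy way to instantiate this (my own construction, not something specified in the proposal): a gridworld in which "reach the green cell" and "reach the top-right corner" always coincide during training and are only pulled apart at test time:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class AmbiguousGoalGrid(gym.Env):
    """Gridworld where 'reach the green cell' and 'reach the top-right corner'
    coincide during training, so both objectives explain the reward equally well.
    With decorrelated=True the green cell moves, and test-time behaviour reveals
    which proxy the agent actually learned."""

    def __init__(self, n=5, decorrelated=False, max_steps=50):
        super().__init__()
        self.n = n
        self.decorrelated = decorrelated
        self.max_steps = max_steps
        self.action_space = spaces.Discrete(4)  # up, down, left, right
        # Observation: (agent_row, agent_col, green_row, green_col), normalised to [0, 1].
        self.observation_space = spaces.Box(0.0, 1.0, shape=(4,), dtype=np.float32)

    def _obs(self):
        return np.array([*self.agent, *self.green], dtype=np.float32) / (self.n - 1)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.agent = [self.n - 1, 0]                    # bottom-left start
        if self.decorrelated:
            self.green = [self.n - 1, self.n - 1]       # green moved away from the corner
        else:
            self.green = [0, self.n - 1]                # green == top-right corner
        return self._obs(), {}

    def step(self, action):
        self.t += 1
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        self.agent[0] = int(np.clip(self.agent[0] + dr, 0, self.n - 1))
        self.agent[1] = int(np.clip(self.agent[1] + dc, 0, self.n - 1))
        # The true reward is "reach the green cell"; during training this is also
        # the corner, so the two hypotheses are unidentifiable from reward alone.
        terminated = self.agent == self.green
        truncated = self.t >= self.max_steps
        return self._obs(), float(terminated), terminated, truncated, {}
```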
Zero-shot objectives
--------------------
**Proposal:** Set up a system that enables a language model to take actions in an environment to optimize some reward. Do IRL on the resulting behavior and inspect the objective that results. Find some way to measure how coherent it is. Compare to an RL agent trained directly on the reward. As an implementation detail, you should focus on environments where a language model might actually be able to perform well. (Edit: I think my explanation here was a bit too general, so here's an example of what I mean. Take a language model and see if you can get it to perform tasks which traditionally require RL/search/optimization and see if it can do them. For example, can GPT-2 play Chess? What about a new game not in the training data that you give it the rules to? If these sorts of things work, then that seems to imply that your model must have learned some sort of optimization algorithm, if only some sort of minimax algorithm for searching for the best moves in a game.)
**Motivation:** It is possible that the best model that maximizes predictive accuracy in a language context might be—at least to some extent—goal-directed. Though current language models are likely not advanced enough to show true goal-directed behavior, it might at least be possible to see the beginnings of coherent objectives.
**Extensions:** Experiment with different ways to encode RL observations and actions as text and see how consistent the resulting IRL objectives are.
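As one concrete version of the "can GPT-2 play chess?" probe, here is a minimal sketch using the Hugging Face `transformers` pipeline (the model, the PGN-style prompt, and the decoding settings are assumptions made just to get something runnable; whether this text encoding is the right one is exactly what the extension above asks):

```python
from transformers import pipeline

# Greedy decoding keeps the probe deterministic; gpt2 is the smallest public checkpoint.
generator = pipeline("text-generation", model="gpt2")

# A PGN-style prefix of a well-known opening; the first question is whether the
# continuation even resembles legal chess notation, before worrying about quality.
prompt = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4."
completion = generator(prompt, max_new_tokens=16, do_sample=False)[0]["generated_text"]
print(completion)

# A stricter version would parse the continuation with a chess library
# (e.g. python-chess) and report the fraction of legal moves produced.
```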
Robust reward learning
----------------------
**Proposal:** Train a reward-predictive model-based RL agent (e.g. an [Imagination-based Planner](https://arxiv.org/abs/1707.06170)) and compare the resulting objective to the actual reward. Train again with adversarial training on inputs that produce maximally differing reward estimates and compare again.
**Motivation:** Useful for testing the ability of adversarial training to resolve reward unidentifiability as well as providing information on to what extent adversarial training for aligning the reward model produces aligned actions.
**Extensions:** Try relaxing the adversary to output pseudo-inputs (e.g. activations) that would lead to maximally differing assessments instead of inputs and backpropagate on those instead.
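A minimal, model-agnostic sketch of the adversarial step (random search rather than gradients, since the true reward generally isn't differentiable; the function names, candidate count, and batch size are arbitrary choices of mine):

```python
import numpy as np

def find_reward_disagreements(learned_reward, true_reward, sample_input,
                              n_candidates=10_000, k=32):
    """Random-search adversary for reward-model training.

    learned_reward, true_reward: callables from an input to a scalar reward.
    sample_input: callable returning one candidate input (e.g. a random or
    perturbed observation from the environment).
    Returns the k candidates on which the learned reward model and the true
    reward disagree most; these get added to the reward model's training set.
    """
    candidates = [sample_input() for _ in range(n_candidates)]
    gaps = np.array([abs(learned_reward(x) - true_reward(x)) for x in candidates])
    worst = np.argsort(gaps)[::-1][:k]
    return [candidates[i] for i in worst]
```

The relaxed version in the extension would swap `sample_input` for a generator over intermediate activations (pseudo-inputs) and measure disagreement there instead.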
Other thoughts
==============
This section is just a list of all the different possible testable hypotheses I came up with when thinking about this—including those that made it into the above list of concrete proposals and those that didn’t—for the purpose of helping others generate even more possible experiments in this space.
Inference-time search
---------------------
* To what extent can you learn complex inference-time search algorithms? Could you structure an architecture that would learn MCTS organically? How artificial does such an architecture have to be? How do factors like environment diversity, inductive biases, algorithmic range, statefulness, and hard-coded optimization impact this?
* Does modeling search algorithms produce search algorithms? If an RL agent is in an environment with another agent that is doing some hard-coded search and it has to model that agent, does it learn to implement search itself?
* What happens if you let a model not have constant computational complexity? Can it learn search better?
* Can you learn optimization/search algorithms even in non-RL settings? Can you get a purely predictive model to perform well zero-shot on an RL task?
Reward unidentifiability
------------------------
* How do RL agents behave in environments with unidentifiable goals? What sorts of proxies tend to be favored over others? How do different inductive biases affect this? What about adversarial training? Can it be empirically demonstrated that most learned objectives are pseudo-aligned?
* Can you definitively distinguish between capability generalization and objective generalization? What if you train a planner and then test it off-distribution?
* Can you distinguish between the different forms of proxy alignment? Can you get an instrumentally pseudo-aligned model? Can you replicate the cleaning robot examples from Risks from Learned Optimization in gridworlds?
* Can you produce approximate alignment if you constrain model capacity?
* What about suboptimality alignment? Can you create an environment with simple and complex strategies with different alignment properties and get a model to switch between them at test time?
* How does using a learned reward model vs. learned policy affect unidentifiability concerns? Can you distinguish between and disincentivize misaligned objectives if you have access to a learned reward model? What about if you just have a Q function?
* Can you build a meta-learner that optimizes for robustness? How does meta-learning help with building robust models? What about one that minimizes time complexity?
* Can you do adversarial training with pseudo-inputs or other relaxations?
Modeling of the training process
--------------------------------
* What happens if you do RL where you give the agent access to the reward signal through a side channel in its observation? How does it affect robustness? What if it has to work to get the leaked info? What if it’s noisy? How does using architectures with more planning affect this? How do inductive biases affect this?
* Can you get a system to model its own training process? Just the objective? Can you apply sufficient inductive biases with a complex enough objective that it has to use information in the environment to figure it out rather than learn it directly?
* Can you get a model to defect off-distribution in a way that involves no defection on-distribution?
* To what extent do models care about their performance across episodes? If there exists a side-channel which only increases next-episode performance, under what circumstances will a model exploit such a thing?
---
1. Though please do say that you got the idea from here and let me know about any results that you get. [↩︎](#fnref-dpqB5YW4YL63F42uR-1)
|
2c6ca4c7-5612-49a0-ac0d-6e4fe0c2f238
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[LINK] Forty Years of String Theory: Reflecting on the Foundations
Those of us interested in "fundamental physics" may find a few interesting tidbits in the latest issue of Springer's Foundations of Physics. It has contributions from prominent figures in string theory and related fields, such as Nobel laureate Gerard 't Hooft, the father of the anthropic landscape Leonard Susskind, and Lee Smolin, one of the founders of Loop Quantum Gravity (the leading alternative to string theory) and the author of several popular books about fundamental physics. Eric Verlinde, the author of the controversial Entropic gravity model, also contributed. A couple of philosophers of science added their two cents.
While Springer is not an open-access publisher, this volume is free, as are many others during December 2012.
A few quotes from the introduction, which seem relevant to the issues of truth, realism and rationality:
"He ['t Hooft] compares string theory to other theories and models which are not free of problems but we generally consider to be well-defined: celestial classical mechanics, quantum mechanics, and QCD, and concludes that string theory is not in as good shape as any of these theories."
"Rickles develops a version of the “no-miracles argument” for scientific realism to the case of mathematically fruitful theories, thereby defending the rationality of those who pursue string theory in the absence of better alternatives, rather than making a statement about the truth of the theory."
"Susskind argues that developments in string theory are telling us that a narrow form of reductionism is wrong: “[I]f one listens carefully, string theory is telling us that in a deep way reductionism is wrong, at least beyond some point.” The reason is that various string dualities interchange what is fundamental and what is composite, large and small lengths scales, high-dimensional objects with lower-dimensional objects, and so on. According to Susskind, “In string theory this kind of ambiguity is the rule.” “Personally, I would bet th
|