id | source | formatted_source | text
|---|---|---|---|
fb8b5f9f-4529-4ac9-bad1-dc685f237cf8 | trentmkelly/LessWrong-43k | LessWrong | How people use LLMs
I've gotten a lot of value out of the details of how other people use LLMs, so I'm delighted that Gavin Leech created a collection of exactly such posts (link should go to the right section of the page but if you don't see it, scroll down).
* https://kajsotala.fi/2025/01/things-i-have-been-using-llms-for/
* https://nicholas.carlini.com/writing/2024/how-i-use-ai.html
* https://www.lesswrong.com/posts/CYYBW8QCMK722GDpz/how-much-i-m-paying-for-ai-productivity-software-and-the
* https://www.avitalbalwit.com/post/how-i-use-claude
* https://andymasley.substack.com/p/how-i-use-ai
* https://benjamincongdon.me/blog/2025/02/02/How-I-Use-AI-Early-2025/
* https://www.jefftk.com/p/examples-of-how-i-use-llms
* https://simonwillison.net/series/using-llms/
* https://signull.substack.com/p/how-to-think-with-ai
* https://alicemaz.substack.com/p/how-i-use-chatgpt
* https://fredkozlowski.com/2024/08/29/youre-using-chatgpt-wrong/
* https://www.lesswrong.com/posts/WNd3Lima4qrQ3fJEN/how-i-force-llms-to-generate-correct-code
* https://www.tumblr.com/nostalgebraist/772798409412427776/even-setting-aside-the-need-to-do
* https://www.jointakeoff.com/
* This is more of a howto than a whatto. I wouldn’t use it for stats or pharma decisions as he does.
* https://www.lesswrong.com/posts/4mvphwx5pdsZLMmpY/recent-ai-model-progress-feels-mostly-like-bullshit
Some additions from me:
* I use NaturalReaders to read my own writing back to me, and create new audiobooks for walks or falling asleep (including from textbooks).
* Perplexity is good enough as a research assistant that I'm more open to taking on medical lit reviews than I used to be.
* I used Auren, which is advertised as a thinking assistant and coach, to solve a musculoskeletal issue my physical therapist had whiffed on for weeks (referral code with free tokens, but only after your first payment).
* Note that Auren has some definite whispering earring vibes, and the privacy protections don't seem particularly strong, |
f5ac860b-bd6d-47d1-92ef-fdfcc9b6fcc0 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | I’d like to do experimental work (i.e. ML, coding) for AI alignment. What should I do?
Okay, so you want to do experimental AGI safety research. Do you have an idea you’re already excited about? Perhaps a research avenue, machine learning experiment, or coding project? Or maybe you’d like to get up to speed on existing research, or to learn how to [get a job in alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment)? You can continue to whichever branch seems most relevant to you via the related questions below.
|
a8aea052-6b3e-458e-8ebe-df605ae38d3c | StampyAI/alignment-research-dataset/arbital | Arbital | Uncountability: Intro (Math 1)
[Collections of things](https://arbital.com/p/3jz) which are the same [size](https://arbital.com/p/4w5) as or smaller than the collection of all [natural numbers](https://arbital.com/p/-45h) are called *countable*, while larger collections (like the set of all [real numbers](https://arbital.com/p/-4bc)) are called *uncountable*.
All uncountable collections (and some countable collections) are [infinite](https://arbital.com/p/infinity). There is a meaningful and [well-defined](https://arbital.com/p/5ss) way to compare the sizes of different infinite collections of things %%note: Specifically, mathematical systems which use the [https://arbital.com/p/69b](https://arbital.com/p/69b), see the [technical](https://arbital.com/p/4zp) page for details.%%. To demonstrate this, we'll use a 2d grid.
## Real and Rational numbers
[Real numbers](https://arbital.com/p/4bc) are numbers with a [decimal expansion](https://arbital.com/p/4sl), for example 1, 2, 3.5, $\pi$ = 3.14159265... Every real number has an infinite decimal expansion, for example 1 = 1.0000000000..., 2 = 2.0000000000..., 3.5 = 3.5000000000... Recall that the rational numbers are [fractions](https://arbital.com/p/fraction) of [integers](https://arbital.com/p/48l), for example $1 = \frac11$, $\frac32$, $\frac{100}{101}$, $\frac{22}{7}$. The positive integers are the integers greater than zero (i.e. 1, 2, 3, 4, ..).
There is a [https://arbital.com/p/-theorem](https://arbital.com/p/-theorem) in math that states that the rational numbers are *countable* %%note: You can see the theorem [here](https://arbital.com/p/511).%%, that is, that the set of rational numbers is the same size as the set of positive integers, and another theorem which states that the real numbers are *uncountable*, that is, that the set of real numbers is strictly bigger. By "same size" and "strictly bigger", we mean that it is possible to match every rational number with some positive integer in a way so that there are no rational numbers, nor positive integers, left unmatched, but that any matching you make between real numbers and positive integers leaves some real numbers not matched with anything.
## Rational grid
If you imagine laying the rational numbers out on a two-dimensional grid, so that the number $p / q$ falls at $(p, q)$, then we may match the positive integers with the rational numbers by walking in a spiral pattern out from zero, skipping over numbers that we have already counted (or that are undefined, such as any number divided by zero). The beginning of this sequence is $\frac01$, $\frac11$, $\frac12$, $\frac{-1}{2}$, $\frac{-1}{1}$, ... Graphically, this is:

This shows that the rational numbers are countable.
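The grid walk can be sketched in code. Here is a minimal Python sketch (the traversal goes ring by ring outward on the grid rather than following the exact spiral of the figure, and the function name `enumerate_rationals` is my own):

```python
from fractions import Fraction

def enumerate_rationals(n):
    """Return the first n distinct rationals, walking outward on the (p, q) grid.

    As in the text, we skip points that are undefined (q = 0) and values we
    have already counted; Fraction normalizes p/q, so duplicates like 2/4
    and 1/2 compare equal and are counted only once.
    """
    seen, out = set(), []
    ring = 1
    while len(out) < n:
        for p in range(-ring, ring + 1):
            for q in range(-ring, ring + 1):
                if max(abs(p), abs(q)) != ring or q == 0:
                    continue  # interior points were visited on earlier rings
                r = Fraction(p, q)
                if r not in seen:
                    seen.add(r)
                    out.append(r)
                    if len(out) == n:
                        return out
        ring += 1
    return out
```

Every rational eventually appears exactly once, so the list position of each value is exactly the kind of matching between positive integers and rationals that countability requires; the first seven values produced are 0, ±1, ±2, and ±1/2, in some traversal-dependent order.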
## Reals are uncountable
The real numbers, however, cannot be matched with the positive integers. I show this by [contradiction](https://arbital.com/p/46z). %%note:That is to say, I show that if there is such a matching, then we can conclude nonsensical statements (and if making a new assumption allows us to conclude nonsense, then the assumption itself must be nonsense).%%
Suppose we have such a matching. We can construct a new real number that differs in its $n^\text{th}$ decimal digit from the real number matched with $n$.
For example, if we were given a matching that matched 1 with 1.8, 2 with 1.26, 3 with 5.758, 4 with 1, and 5 with $\pi$, then our new number could be 0.11111, which differs from 1.8 in the first decimal place (the 0.1 place), 1.26 in the second decimal place (the 0.01 place), and so on. It is clear that this number cannot be matched with any number under the matching we are given, because, if it were matched with $n$, then it would differ from itself in the $n^\text{th}$ decimal digit, which is nonsense. Thus, there is no matching between the real numbers and the positive integers.
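The diagonal construction can be made concrete. A small Python sketch, assuming the matched reals are given as decimal strings (the function name is my own; the digit rule — write 1 unless the diagonal digit is already 1, in which case write 2 — is one simple choice, and since it never produces 0 or 9 it also sidesteps the 0.999... = 1.000... ambiguity):

```python
def diagonal_counterexample(matched):
    """Given reals matched with 1..n (as decimal strings), build a real that
    differs from the n-th one in its n-th decimal digit."""
    digits = []
    for n, x in enumerate(matched, start=1):
        frac = x.split(".")[1] if "." in x else ""
        frac = frac.ljust(n, "0")      # pad with zeros: "1" means 1.000...
        nth = frac[n - 1]              # n-th decimal digit of the n-th real
        digits.append("2" if nth == "1" else "1")
    return "0." + "".join(digits)
```

Running it on the matching from the text, `diagonal_counterexample(["1.8", "1.26", "5.758", "1", "3.14159265"])` reproduces the article's `"0.11111"`, which by construction differs from each listed number in the corresponding decimal place.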
## See also
If you enjoyed this explanation, consider exploring some of [Arbital's](https://arbital.com/p/3d) other [featured content](https://arbital.com/p/6gg)!
Arbital is made by people like you, if you think you can explain a mathematical concept then consider [https://arbital.com/p/-4d6](https://arbital.com/p/-4d6)! |
15db4248-49eb-4b31-8127-a3d7b940bb13 | trentmkelly/LessWrong-43k | LessWrong | Emails from your Gratitude Journal
Lots has been written here about gratitude journaling.
In spite of knowing about the benefits, I've never managed to make the habit stick. I do, however, already have a very strong habit of responding to email and keeping my inbox under control.
That's what pushed me to build Email Notebook, which I'm hopeful will facilitate other folks' writing habits, too.
It's beta quality at the moment, but my email is on the contact page and I plan to put some effort into feature requests if there are any common themes. Already on the roadmap is to improve the variety of prompts, touching on some of the specific practices outlined by @david_gross.
If you do give it a try, please report any bugs you encounter :) |
bbbf24a9-a7ba-4b5a-a393-175f3fbb0bc8 | trentmkelly/LessWrong-43k | LessWrong | Science of Deep Learning more tractably addresses the Sharp Left Turn than Agent Foundations
Summary
Lots of agent foundations research is motivated by the idea that alignment techniques found by empirical trial-and-error will fail to generalize to future systems. While such threat models are plausible, agent foundations researchers have largely failed to make progress on addressing them because they tend to make very few assumptions about the gears-level properties of future AI systems. In contrast, the emerging field of "Science of Deep Learning" (SDL) assumes that the systems in question are neural networks, and aims to empirically uncover general principles about how they work. Assuming such principles exist and that neural nets will lead to AGI, insights from SDL will generalize to future systems even if specific alignment techniques found by trial-and-error do not. As a result, SDL can address the threat models that motivate agent foundations research while being much more tractable due to making more assumptions about future AI systems, though it cannot help in the construction of an ideal utility function for future AI systems to maximize.
Thanks to Alexandra Bates for discussion and feedback
Introduction
Generally speaking, the "agent foundations" line of research in alignment is motivated by a particular class of threat models: that alignment techniques generated through empirical trial and error like RLHF or Constitutional AI will suddenly break down as AI capabilities advance, leading to catastrophe. For example, Nate Soares has argued that AI systems will undergo a "sharp left turn" at which point the "shallow" motivational heuristics which caused them to be aligned at lower capabilities levels will cease to function. Eliezer Yudkowsky seems to endorse a similar threat model in AGI Ruin, arguing that rapid capabilities increases will cause models to go out of distribution and become misaligned.
I'm much more optimistic about finding robust alignment methods through trial and error than these researchers, but I'm not here to debate that. |
72be267d-85fa-403d-9ae0-87fab24cdb55 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Multi-agent predictive minds and AI alignment
*Abstract: An attempt to map a best-guess model of how human values and motivations work to several more technical research questions. The mind-model is inspired by predictive processing / active inference framework and multi-agent models of the mind.*
The text has slightly unusual epistemic structure:
**1st part:** my current best-guess model of how human minds work.
**2nd part:** explores various problems which such mind architecture would pose for some approaches to value learning. The argument is: if such a model seems at least plausible, we should probably extend the space of active research directions.
**3rd part:** a list of specific research agendas, sometimes specific research questions, motivated by the previous.
I put more credence in the usefulness of the research questions suggested in the third part than in the specifics of the model described in the first part. Also, you should be warned that I have no formal training in cognitive neuroscience and similar fields, and it is completely possible I’m making some basic mistakes. Still, my feeling is that even if the model described in the first part is wrong, something from the broad class of “motivational systems not naturally described by utility functions” is close to reality, and understanding the problems from the 3rd part can be useful.
How minds work
==============
As noted, this is a “best guess model”. I have large uncertainty about how human minds actually work. But if I could place just one bet, I would bet on this.
The model has two prerequisite ideas: [predictive processing](http://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/) and the active inference framework. I'll give brief summaries and links for elsewhere.
In the predictive processing / the active inference framework, brains constantly predict sensory inputs, in a hierarchical generative way. As a dual, action is also “generated” by the same machinery (changing environment to match “predicted” desirable inputs and generating action which can lead to them). The “currency” on which the whole system is running is prediction error (or something in style of [free energy, in that language](https://en.wikipedia.org/wiki/Free_energy_principle)).
Another important ingredient is [bounded rationality,](https://en.wikipedia.org/wiki/Bounded_rationality) i.e. a limited amount of resources being available for cognition. Indeed, the specifics of hierarchical modelling, neural architectures, and the principle of reusing and repurposing everything all seem to be related to quite brutal optimization pressure, likely related to the brain’s enormous energy consumption. (It is unclear to me if this can also be reduced to the same “currency”. Karl Friston would probably answer "yes".)
Assuming all this, how do motivations and “values” arise? The guess is that in many cases something like a “subprogram” is modelling/tracking some variable, “predicting” its desirable state, and creating the need for action by “signalling” prediction error. Note that such subprograms can work on variables on very different hierarchical layers of modelling - e.g. tracking a simple variable like “feeling hungry” vs. tracking a variable like “social status”. Such sub-systems can be large: for example, tracking “social status” seems to require a lot of computation.

How does this relate to emotions? Emotions could be quite complex processes, where some higher-level modelling (“I see a lion”) leads to a response in lower levels connected to body states, some chemicals are released, and this [interoceptive](https://en.wikipedia.org/wiki/Interoception) sensation is re-integrated in the higher levels in the form of an emotional state, eventually reaching consciousness. Note that the emotional signal from the body is more similar to “sensory” data - the guess is that body/low-level responses are the way genes insert a reward signal into the whole system.
How does this relate to our conscious experience, and stuff like [Kahneman's System 1/System 2](https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow)? It seems that for most people the light of consciousness illuminates only a tiny part of the computation, and most stuff is happening in the background. Also, S1 has much larger computing power. On the other hand, it seems relatively easy to “spawn background processes” from the conscious part, and it seems possible to illuminate a larger part of the background processing than is usually visible through specialized techniques and effort (for example, some meditation techniques).

Another ingredient is the observation that a big part of what the conscious self is doing is interacting with other people, and rationalizing our behaviour. (Cf. press secretary theory, [elephant in the brain](https://www.lesswrong.com/posts/BgBrXpByCSmCLjpwr/book-review-the-elephant-in-the-brain).) It is also quite possible the relation between acting rationally and the ability to rationalize what we did is bidirectional, and significant part of motivation for some rational behaviour is that it is easy to rationalize it.
Also, it seems important to appreciate that the most important part of the human “environment” are other people, and what human minds are often doing is likely simulating other human minds (even simulating how other people would be simulating someone else!).
### Problems with prevailing value learning approaches
While the above sketched picture is just a best guess, it seems to me at least compelling. At the same time, there are notable points of tension between it and at least some approaches to AI alignment.
#### No clear distinction between goals and beliefs
In this model, it is hardly possible to disentangle “beliefs” and “motivations” (or values). “Motivations” interface with the world only via a complex machinery of hierarchical generative models containing all other sorts of “beliefs”.
To appreciate the problems for the value learning program, consider the case of someone whose predictive/generative model strongly predicts failure and suffering. Such a person may take actions which actually lead to this outcome, minimizing the prediction error.
A less extreme but still important problem is that extrapolating “values” outside the area of validity of the generative models is problematic and could be fundamentally ill-defined. (This is related to “ontological crisis”.)
#### No clear self-alignment
It seems plausible that the common formalism of [agents with utility functions](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem) is more adequate for describing the individual “subsystems” than whole human minds. Decisions on the whole-mind level are more like results of interactions between the sub-agents, and results of multi-agent interaction are not in general an object naturally represented by a utility function. For example, consider the sequence of game outcomes in a repeated [PD game](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma). If you take the sequence of game outcomes (e.g. 1: defect-defect, 2: cooperate-defect, ...) as a sequence of actions, the actions do not represent some well-behaved preferences, and in general do not maximize any utility function.
Note: This is not to claim [VNM rationality](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem) is useless - it still has the normative power - and some types of interaction lead humans to approximate [SEU](https://en.wikipedia.org/wiki/Subjective_expected_utility) optimizing agents better.
One case is if mainly one specific subsystem (subagent) is in control, and the decision does not go via too complex generative modelling. So, we should expect more VNM-like behaviour in experiments in narrow domains than in cases where very different sub-agents are engaged and disagree.
Another case is if sub-agents are able to do some “social welfare function” style aggregation, bargain, or trade - the result could be more VNM-like, at least in specific points of time, with the caveat that such “point” aggregate function may not be preserved in time.
On the contrary, cases where the resulting behaviour is very different from VNM-like may be caused by sub-agents locked in some non-cooperative Nash equilibria.
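One concrete way such aggregate behaviour escapes any utility-function representation is a preference cycle. A toy Python sketch (the three sub-agent rankings are my own illustrative assumption, not from the post): each sub-agent is individually transitive, yet majority vote among them cycles, and a cyclic strict preference cannot be represented by any utility function.

```python
# Each sub-agent holds a transitive ranking over options (best first).
rankings = {
    "A": ["x", "y", "z"],
    "B": ["y", "z", "x"],
    "C": ["z", "x", "y"],
}

def majority_prefers(a, b):
    """True if a strict majority of sub-agents rank option a above option b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings.values())
    return votes > len(rankings) / 2

# The aggregate "preference" cycles: x beats y, y beats z, z beats x,
# so no utility u can satisfy u(x) > u(y) > u(z) > u(x).
cycle = (majority_prefers("x", "y"),
         majority_prefers("y", "z"),
         majority_prefers("z", "x"))
```

This is just the classic Condorcet cycle read as a model of sub-agent aggregation, in line with the post's claim that whole-mind decisions need not be VNM-like even when every part is.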
#### What we are aligning AI with
Given this distinction between the whole mind and sub-agents, there are at least four somewhat different notions of what alignment can mean.

1. Alignment with the outputs of the generative models, without querying the human. This includes for example proposals centered around approval. In this case, generally only the output of the internal aggregation has some voice.

2. Alignment with the outputs of the generative models, with querying the human. This includes for example [CIRL](https://arxiv.org/abs/1606.03137) and similar approaches. The problematic part is that, by carefully crafted queries, it is possible to give voice to different sub-agenty systems (or, with more nuance, to give them very different power in the aggregation process). One problem with this is that if the internal human system is not self-aligned, the results could be quite arbitrary (and the AI agent has a lot of power to manipulate).

3. Alignment with the whole system, including the human aggregation process itself. This could include for example some deep NN based black-box trained on a large amount of human data, predicting what would the human want (or approve).
4. Adding layers of indirection to the question, such as defining alignment as a state where *“A is trying to do what H wants it to do.”*
In practice, options 1. and 2. can collapse into one, as long as there is some feedback loop between the AI agent's actions and the human reward signal. (Even in case 1, the agent can take an action with the intention of eliciting feedback from some subpart.)
We can construct a rich space of various meanings of "alignment" by combining basic directions.
Now, we can analyze how these options interact with various alignment research programs.
Probably the most interesting case is [IDA](https://www.lesswrong.com/posts/dy6JHE7vzJS8SpSiu/iterated-distillation-and-amplification). IDA-like schemes can probably carry forward arbitrary properties to more powerful systems, as long as we are able to construct the individual step preserving the property. (I.e. one full cycle of distillation and amplification, which can be arbitrarily small).
Distilling and amplifying the alignment in sense #1 (what the human will actually approve) is conceptually easiest, but, unfortunately, brings some of the problems of a potentially super-human system optimizing to manipulate the human for approval.
Alignment in sense #3 creates a very different set of problems. One obvious risk is mind-crimes. A more subtle risk is related to the fact that as the implicit model of human “wants” scales (becomes less bounded), I. the parts may scale at different rates, II. the outcome equilibria may change even if the sub-parts scale at the same rate.

Alignment in sense #4 seems more vague, and moves the burden of understanding the problem in part to the side of the AI. We can imagine that at the end the AI will be aligned with some part of the human mind in a self-consistent way (the part will be a fixed point of the alignment structure). Unfortunately, it is *a priori* unclear if a unique fixed point exists. If not, the problems become similar to case #2. Also, it seems inevitable the AI will need to contain some structure representing what the human wants the AI to do, which may cause problems similar to #3.
Also, in comparison with other meanings, it is much less clear to me how to even establish some system has this property.
#### Rider-centric and meme-centric alignment
Many alignment proposals seem to focus on interacting just with the conscious, narrating and rationalizing part of mind. If this is just a one part entangled in some complex interaction with other parts, there are specific reasons why this may be problematic.
One: if the “rider” (from the rider/elephant metaphor) is the part highly engaged with tracking societal rules, interactions and memes. It seems plausible the “values” learned from it will be mostly aligned with societal norms and interests of memeplexes, and not “fully human”.
This is worrisome: from a [meme-centric](https://en.wikipedia.org/wiki/Memetics) perspective, humans are just a substrate, and not necessarily the best one. Also - a more speculative problem may be - schemes learning human memetic landscape and “supercharging it” with superhuman performance may create some hard to predict evolutionary optimization processes.
#### Metapreferences and multi-agent alignment
Individual “preferences” can often in fact be mostly a meta-preference to have preferences compatible with other people, based on simulations of such people.
This may make it surprisingly hard to infer human values by trying to learn what individual humans want without the social context (necessitating inverting several layers of simulation). If this is the case, the whole approach of extracting individual preferences from a single human could be problematic. (This is probably more relevant to some “prosaic” alignment problems.)
#### Implications
Some of the above mentioned points of disagreements point toward specific ways how some of the existing approaches to value alignment may fail. Several illustrative examples:
* Internal conflict may lead to inaction (and also to not expressing approval or disapproval). While many existing approaches represent such a situation only by the *outcome* of the conflict, the internal experience of the human seems to be quite different with and without the conflict.
* Difficulty with splitting “beliefs” and “motivations”.
* Learning inadequate societal equilibria and optimizing on them.
Upside
------
On the positive side, it could be expected the sub-agents still easily agree on things like “it is better not to die a horrible death”.
Also, the mind-model with bounded sub-agents which interact only with their local neighborhood and do not actually care about the world may be a viable design from the safety perspective.
Suggested technical research directions
=======================================
While the previous parts are more in backward-chaining mode, here I attempt to point toward more concrete research agendas and questions where we can plausibly improve our understanding either by developing theory, or experimenting with toy models based on current ML techniques.
Often it may be the case that some research has already been done on a topic, just not with AI alignment in mind, and high-value work could be “importing the knowledge” into the safety community.
**Understanding hierarchical modelling.**
It seems plausible the human hierarchical models of the world optimize some "boundedly rational" function. (Remembering all details is too expensive, too much coarse-graining decreases usefulness. A good bounded rationality model can work as a principle for how to select models. In a similar way to the minimum description length principle, just taking some more “human” (energy?) costs as cost function.)
**Inverse Game Theory.**
Inverting agent motivations in MDPs is a different problem from inverting motivations in multi-agent situations where game-theory style interactions occur. This leads to the inverse game theory problem: observe the interactions, learn the objectives.
**Learning from multiple agents.**
Imagine a group of five closely interacting humans. Learning values just from person A may run into the problem that a big part of A’s motivation is based on A simulating B, C, D, and E (on the same “human” hardware, just incorporating individual differences). In that case, learning the “values” just from A’s actions could in principle be more difficult than observing the whole group and trying to learn some “human universals” and some “human specifics”. A different way of thinking about this could be by making a parallel with meta-learning algorithms (e.g. REPTILE), but in an IRL frame.
**What happens if you put a system composed of sub-agents under optimization pressure?**
It is not clear to me what would happen if you, for example, successfully “learn” such a system of “motivations” from a human, and then put it inside of some optimization process selecting for VNM-like rational behaviour.
It seems plausible the somewhat messy system will be forced to become more internally aligned; for example, one way this can happen is that one of the sub-agent systems takes control and “wipes out the opposition”.
**What happens if you make a system composed of sub-agents less computationally bounded?**
It is not clear that the relative powers of sub-agents will scale the same with the whole system becoming less computationally bounded. (This is related to MIRI’s sub-agents agenda)
Suggested non-technical research directions
===========================================
**Human self-alignment.**
All other things being equal, it seems safer to try to align AI with humans who are self-aligned.
Notes & Discussion
==================
**Motivations**
Part of my motivation for writing this was an annoyance: there are plenty of reasons to believe that the view that
* the human mind is a unified whole,
* at first approximation optimizing some utility function,
* this utility is over world-states,
is neither a good model of humans, nor the best model for how to think about AI. Yet, it is the paradigm shaping a lot of thought and research. I hope that if the annoyance surfaced in the text, it is not too distracting.
**Multi-part minds in literature**
There are dozens of schemes describing the mind as some sort of multi-part system, so there is nothing original about this claim. Based on a very shallow review, it seems the way psychologists often conceptualize the sub-agents is as [subpersonalities](https://en.wikipedia.org/wiki/Subpersonality), which are almost fully human. This seems to err on the side of sub-agents being too complex, anthropomorphising instead of trying to describe formally. (Explaining humans as a composition of humans is not very useful for AI alignment.) On the other hand, Minsky’s [“*Society of Mind*”](https://en.wikipedia.org/wiki/Society_of_Mind) has sub-agents which often seem to be too simple (e.g. similar in complexity to individual logic gates). If there is some literature getting the sub-agent complexity right, with sub-agents living inside predictive processing, I’d be really excited about it!
**Discussion**
When discussing the draft, several friends noted something along the lines of: “It is overdetermined that approaches like IRL are doomed. There are many reasons for that and the research community is aware of them”. To some extent I agree this is the case; on the other hand, 1. the described model of mind may pose problems even for more sophisticated approaches, 2. my impression is many people still have something like a utility-maximizing agent as the central example.
The complementary objection is that while interacting sub-agents may be a more precise model, in practice it is often good enough to think about humans as unified agents, perhaps even for the purpose of AI alignment. My intuition on this is based on the connection of rationality to exploitability: it seems humans are usually more rational and less exploitable when thinking about narrow domains, but can be quite bad when vastly different subsystems are in play (imagine on one side a person exchanging stocks and money, and on the other side someone trading off units of money, free time, friendship, etc. In the second case, many people are willing to trade at very different rates in different situations).
*I’d like to thank Linda Linsefors , Alexey Turchin, Tomáš Gavenčiak, Max Daniel, Ryan Carey, Rohin Shah, Owen Cotton-Barratt and others for helpful discussions. Part of this originated in the efforts of the “Hidden Assumptions” team on the 2nd AI safety camp, and my thoughts about how minds work are inspired by CFAR.* |
1a3a48db-625a-4fd4-b2f9-3ed2a206e6a1 | trentmkelly/LessWrong-43k | LessWrong | are "almost-p-zombies" possible?
It's probably not possible to have a twin of me that does everything the same except experiences no qualia, i.e. you can predict 100% accurately, if you expose it to stimulus X and it does Y, that I would also do Y if I was exposed to stimulus X.
But can you make an "almost-p-zombie"? A copy of me, which, while not being exactly like me (besides consciousness), is almost exactly like me. So a function, which, when it takes in a stimulus X, says, not with 100% certainty, but 99.999999999%, what I will do in response. Is this possible to construct within the laws of our universe?
Additionally, is this easier or harder to construct than a fully conscious simulation of me?
Just curious. |
a2b29c77-da15-42b6-80a8-8a97e125361e | trentmkelly/LessWrong-43k | LessWrong | The Opt-Out Clause
(cross-posted from my blog)
Let me propose a thought experiment with three conditions.
First, you're in a simulation, and a really good one at that. Before you went in, the simulators extracted and stored all of your memories, and they went to great lengths to make sure that the simulation is completely faultless.
Second, you can leave any time you like. All you have to do to end the simulation, regain your memories, and return to reality is recite the opt-out passphrase: "I no longer consent to being in a simulation". Unfortunately, you don't know about the opt-out condition: that would kind of ruin the immersion.
Third, although you're never told directly about the opt-out condition, you do get told about it indirectly, phrased as a kind of hypothetical thought experiment. Maybe someone poses it to you at a party, maybe you read it on twitter, maybe it's a blog post on some niche internet forum. You're guaranteed to hear about it at least once though, to give you a fair chance of leaving. But it's vague and indirect enough that you can dismiss it if you want, and probably forget about it in a week.
It's not enough to think the opt-out phrase, you have to actually say it or write it. So the question is, hypothetically, would you? |
a4ab6a0f-2b3a-4098-987d-7f67b0677f21 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Upper Canada LW Megameetup: Ottawa, Toronto, Montreal, Waterloo, London
Discussion article for the meetup : Upper Canada LW Megameetup: Ottawa, Toronto, Montreal, Waterloo, London
WHEN: 18 July 2014 07:00:00PM (-0400)
WHERE: Ottawa, Canada
Hi all LWers and CFAR alumni in the eastern Canada region! We'll be hosting a megameetup in Ottawa, Canada, running from 7:00 pm on Friday, July 18th, until early afternoon on Sunday, July 20th. We have a house available and enough space for everyone to sleep on site for the duration. We'll be eating communally, and there will be lots of snacks stocked up at the house, but please plan on contributing some money to cover food costs.
Friday night will be a fun social. Saturday will have a schedule of talks, activities, and CFAR-style classes. Sunday, we will most likely have an outing to a park or beach, depending on weather.
If you would like to come to this meetup, please fill out the following Google Form for logistics purposes: https://docs.google.com/forms/d/1zAFz-2nFUfQ31aVW6nFl61gsmnsER7PVCSvlJjwU__E/viewform?usp=send_form
If you have any questions, you can message Swimmer963 and I will try to answer them.
f305d1af-30dd-41c7-8626-a2e296ffc48c | trentmkelly/LessWrong-43k | LessWrong | Another Iterated Prisoner's Dilemma Tournament?
Last year, there was a lot of interest in the IPD tournament with people asking for regular events of this sort and developing new strategies (like Afterparty) within hours after the results were published and also expressing interest in re-running the tournament with new rules that allowed for submitted strategies to evolve or read their opponent's source code. I noticed that many of the submitted strategies performed poorly because of a lack of understanding of the underlying mechanics, so I wrote a comprehensive article on IPD math that sparked some interesting comments.
And then the whole thing was never spoken of again.
So now I'd like to know: How many LWers would commit to competing in another tournament of this kind, and would someone be interested in hosting it? |
cc115b2e-4f17-464d-b193-5b6ca565bfd7 | trentmkelly/LessWrong-43k | LessWrong | Introduction to Representing Sentences as Logical Statements
TLDR: Most sentences can be translated to simple predicate logic. There are some very important cases where natural languages are recursive, but those cases are limited to a few particular connectives (like causation and attitude verbs). I propose the hypothesis that, contrary to mainstream formal semantics, events should not be treated as basic entities, but rather as ways to organize information which can already be expressed without events.
Motivations for this investigation:
1. Better understanding how language works as an angle for better understanding minds.
2. (To a lesser extent, as a starting angle for creating a good language for reasoning more effectively.)[1]
There's a tight correspondence between sentences and logical statements. Roughly speaking, when we hear a sentence, our minds very likely parse it into a logical statement, along with inferring other details that are not captured by the coarseness of sentences in human languages (like imagining a visual scene), and then apply inferences (e.g. about what is likely going to happen next).
In theory, one could have a language where all the sentences are just statements in predicate logic, bypassing the first parsing step. Creating such a simple system may be a useful investigation that may move us closer to better understanding how language works.
This post proposes a partially-formal system based on predicate logic, which could be seen as a very low-level language. I think everything that can be meaningfully said in natural languages can also be said in this simple system, although a translated statement sometimes looks only loosely analogous to the original sentence. To be clear, I definitely don't claim that this system explains how most of the meaning of sentences gets parsed, though it is a small step in that direction.
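As a toy illustration of this correspondence (a sketch of my own, not the post's actual system; all predicate and argument names are placeholders):

```python
# Toy sketch: representing a simple sentence as a predicate-logic statement.
# An Atom is a predicate applied to constants; a sentence parses to an Atom.

from typing import NamedTuple, Tuple

class Atom(NamedTuple):
    predicate: str
    args: Tuple[str, ...]

def pretty(atom: Atom) -> str:
    """Render an atom in conventional predicate-logic notation."""
    return f"{atom.predicate}({', '.join(atom.args)})"

# "Alice sees Bob" -> sees(alice, bob)
sentence = Atom("sees", ("alice", "bob"))
print(pretty(sentence))  # sees(alice, bob)
```

A fuller system would also need variables, quantifiers, and the special connectives the post discusses; this only shows the flat predicate-plus-constants case.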
In many respects, the framework presented here aligns with traditions in formal semantics, though it diverges in its treatment of events.
This work focuses on the structur |
bc8ac0de-8316-4648-84c8-df35cb8286e6 | trentmkelly/LessWrong-43k | LessWrong | Ethical dilemmas for paperclip maximizers
(Why? Because it's fun.)
1) Do paperclip maximizers care about paperclip mass, paperclip count, or both? More concretely, if you have a large, finite amount of metal, you can make it into N paperclips or N+1 smaller paperclips. If all that matters is paperclip mass, then it doesn't matter what size the paperclips are, as long as they can still hold paper. If all that matters is paperclip count, then, all else being equal, it seems better to prefer smaller paperclips.
2) It's not hard to understand how to maximize the number of paperclips in space, but how about in time? Once it's made, does it matter how long a paperclip continues to exist? Is it better to have one paperclip that lasts for 10,000 years and is then destroyed, or 10,000 paperclips that are all destroyed after 1 year? Do discount rates apply to paperclip maximization? In other words, is it better to make a paperclip now than it is to make it ten years from now?
3) Some paperclip maximizers claim to want to maximize paperclip *production*. This is not the same as maximizing paperclip count. Given a fixed amount of metal, a paperclip count maximizer would make the maximum number of paperclips possible, and then stop. A paperclip production maximizer that didn't care about paperclip count would find it useful to recycle existing paperclips, melting them down so that new ones could be made. Which approach is better?
4) More generally, are there any conditions under which the paperclip-maximizing thing to do involves destroying existing paperclips? It's easy to imagine scenarios in which destroying some paperclips causes there to be more paperclips in the future. (For example, one could melt down existing paperclips and use the metal to make smaller ones.) |
1497b372-66f5-4b87-aa36-6fa0609c1b34 | trentmkelly/LessWrong-43k | LessWrong | Local Solar Time
Sometimes we talk about how society is "path dependent": what things would be different if history had not followed its particular path? One place that's fun to think about is time. Until the development of railways, places used local solar time. Washington DC is at 77°W while NYC is at 74°W, so clocks in NYC would be set ~12 min ahead of those in DC.
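The arithmetic behind that ~12 minute gap: the Earth turns 360° in 24 hours, so local solar time shifts by 4 minutes per degree of longitude. A quick sketch (longitudes rounded for illustration):

```python
# Local solar time shifts 4 minutes per degree of longitude
# (360 degrees / 24 hours = 15 degrees per hour = 1 degree per 4 minutes).

def solar_offset_minutes(lon_west: float, lon_east: float) -> float:
    """Minutes by which clocks at the more-easterly longitude run ahead.

    Longitudes in degrees, east-positive (so 77W is -77.0).
    """
    return (lon_east - lon_west) * 4

# Washington DC (~77W) vs. New York City (~74W):
print(solar_offset_minutes(-77.0, -74.0))  # 12.0 -> NYC clocks ~12 min ahead
```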
Initially, this didn't matter: traveling from NYC to DC was a multi-day endeavor, so a 12-minute difference in clocks was trivial. As roads got better and then railroads were built, however, travel times decreased enormously:
Atlas of the Historical Geography of the United States, 1932
If you're running a railway system, it's really very useful to choose a single location's time to use internally. Your workers are moving east and west, fast enough for this sort of discrepancy to become a problem, and they certainly don't want to be continuously changing their watches as they move from town to town. Railway schedules are much easier to work with if they use railway time; people start using railway time in their regular lives; the government adopts a system of time zones.
Imagine, however, that things had gone differently and we were still on local solar time. At this point it would not be hard to make our phones always report the correct time as we move east and west, since our phones know where we are. Automated calendar software could deal with local solar time with about as much effort as it could with time zones. Do all routes to the present pass through a system of time zones, or is there an alternative history where we really could be on local solar time today?
Railroads need shared time much more than other forms of transportation. There are a small number of high-capacity tracks, and the same with vehicles. Efficient use requires central organization, coordinated operation, and shared scheduling, which is far easier with a shared time system. If instead of railroads we had developed cars, it would still be ann |
06d09836-92d2-4336-8f4e-469b9ccb4ad0 | trentmkelly/LessWrong-43k | LessWrong | Highlights from The Autobiography of Andrew Carnegie
I’ve been reading Andrew Carnegie’s autobiography, published late in his life, in the early 1900s. Here are some interesting themes and quotes. (Emphasis added in all block quotes below.)
Science and steel
One key to Carnegie‘s success in the iron business is that he was one of the first to seriously apply chemistry:
> Looking back to-day it seems incredible that only forty years ago (1870) chemistry in the United States was an almost unknown agent in connection with the manufacture of pig iron. It was the agency, above all others, most needful in the manufacture of iron and steel. The blast-furnace manager of that day was usually a rude bully… who in addition to his other acquirements was able to knock down a man now and then as a lesson to the other unruly spirits under him. He was supposed to diagnose the condition of the furnace by instinct, to possess some almost supernatural power of divination, like his congener in the country districts who was reputed to be able to locate an oil well or water supply by means of a hazel rod. He was a veritable quack doctor who applied whatever remedies occurred to him for the troubles of his patient.
Part of the problem was that the ores and other inputs to smelting were inconsistent in composition:
> The Lucy Furnace was out of one trouble and into another, owing to the great variety of ores, limestone, and coke which were then supplied with little or no regard to their component parts. This state of affairs became intolerable to us.
This is where chemistry was able to help:
> We finally decided to dispense with the rule-of-thumb-and-intuition manager, and to place [Henry Curry] in charge of the furnace….
>
> The next step taken was to find a chemist as Mr. Curry’s assistant and guide. We found the man in a learned German, Dr. Fricke, and great secrets did the doctor open up to us. Ironstone from mines that had a high reputation was now found to contain ten, fifteen, and even twenty per cent less iron than it had bee |
e9bbee9d-67ce-4ffd-9588-432bff2b1bde | StampyAI/alignment-research-dataset/arxiv | Arxiv | Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions
1 Introduction
---------------
In recent years, Artificial Intelligence (AI) has become much more closely connected to human activity. Many tasks that once used to require human labor, are now gradually being automated and shifted to AI. For instance,
in order to cope with the COVID-19 pandemic, the use of robot workers has been suggested to minimize physical contact between humans. These robot technologies depend heavily on the accuracy of action recognition/prediction and the consequent interaction between humans and machines.
State-of-the-art action recognition and prediction models are deep neural networks (DNNs), due to their capability of modeling complex problems Si et al. ([2019](#bib.bib43 "An attention enhanced graph convolutional lstm network for skeleton-based action recognition")); Li et al. ([2019a](#bib.bib44 "Spatio-temporal graph routing for skeleton-based action recognition"), [b](#bib.bib45 "Actional-structural graph convolutional networks for skeleton-based action recognition")) in an accurate way. Nonetheless, it has also been shown that these models are prone to adversarial examples (or attacks) Biggio et al. ([2013](#bib.bib1 "Evasion attacks against machine learning at test time")); Szegedy et al. ([2013](#bib.bib2 "Intriguing properties of neural networks")); Goodfellow et al. ([2014](#bib.bib3 "Explaining and harnessing adversarial examples")). DNNs can behave erratically when processing inputs with carefully crafted perturbations, even though such perturbations are imperceptible to humans Carlini and Wagner ([2017](#bib.bib7 "Towards evaluating the robustness of neural networks")); Madry et al. ([2018](#bib.bib4 "Towards deep learning models resistant to adversarial attacks")); Croce and Hein ([2020](#bib.bib25 "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks")); Jiang et al. ([2020](#bib.bib55 "Imbalanced gradients: a new cause of overestimated adversarial robustness")); Wang et al. ([2021](#bib.bib23 "A unified approach to interpreting and boosting adversarial transferability")). This has raised security concerns on the deployment of DNN-powered AI systems in security-critical applications such as autonomous driving Eykholt et al. ([2018](#bib.bib11 "Robust physical-world attacks on deep learning visual classification")); Duan et al. ([2020](#bib.bib10 "Adversarial camouflage: hiding physical-world attacks with natural styles")) and medical diagnosis Finlayson et al. 
([2019](#bib.bib56 "Adversarial attacks on medical machine learning")); Ma et al. ([2020](#bib.bib5 "Understanding adversarial attacks on deep learning based medical image analysis systems")). Investigating and understanding these abnormalities is a crucial task before machine learning based AI agents can become practical.
In this work, we investigate the adversarial vulnerability of DNN reaction prediction (i.e., regression) models in skeleton-based interactions. Skeleton signals are among one of the most commonly used representations for human or robot motion Zhang et al. ([2016](#bib.bib53 "RGB-d-based action recognition datasets: a survey")); Wang et al. ([2018](#bib.bib52 "RGB-d-based human motion recognition with deep learning: a survey")). While adversarial attacks have been extensively studied on images Goodfellow et al. ([2014](#bib.bib3 "Explaining and harnessing adversarial examples")); Su et al. ([2019](#bib.bib48 "One pixel attack for fooling deep neural networks")); Brown et al. ([2017](#bib.bib47 "Adversarial patch")); Duan et al. ([2020](#bib.bib10 "Adversarial camouflage: hiding physical-world attacks with natural styles")), very few works have been proposed for skeletons Liu et al. ([2019](#bib.bib39 "Adversarial attack on skeleton-based human action recognition")); Wang et al. ([2019a](#bib.bib40 "SMART: skeletal motion action recognition attack")); Zheng et al. ([2020](#bib.bib41 "Towards understanding the adversarial vulnerability of skeleton-based action recognition")).
In comparison to the image space, which is continuous and where pixels can be perturbed freely without raising obvious attack suspicions, the skeleton space is sparse and discrete. It has a temporal nature that needs to be taken into account. Consequently, attacking skeleton-based models requires many more constraints than the image space.
Existing works on attacking skeleton-based models have only considered the single-person scenario, and have all focused towards recognition (i.e., classification) models Liu et al. ([2019](#bib.bib39 "Adversarial attack on skeleton-based human action recognition")); Wang et al. ([2019a](#bib.bib40 "SMART: skeletal motion action recognition attack")); Zheng et al. ([2020](#bib.bib41 "Towards understanding the adversarial vulnerability of skeleton-based action recognition")). However, interaction scenarios involving two or more characters are essential to the interaction between humans and AI. They should not be overlooked if our ultimate goal is to build AI agents that can fit into our daily life. Neglecting possible attacks might lead to AI agents malfunctioning or behaving aggressively when they are not supposed to.
To close this gap, we propose an Adversarial Interaction Attack (AIA) to test the vulnerability of regression DNNs in skeleton-based interactions involving two characters. Being able to accurately recognize a person’s action is important, but it is equally important to be able to go a step further and respond to the action in an appropriate way. In light of this, the usage of regression models is necessary. We hence modified the output layers of two previous state-of-the-art action recognition models. One model was based on a Temporal Convolutional Network (TCN) Bai et al. ([2018](#bib.bib32 "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling")) and the other was based on Gated Recurrent Units (GRUs) Maghoumi and LaViola Jr ([2019](#bib.bib33 "DeepGRU: deep gesture recognition utility")). The models were modified to return reactor sequences instead of class labels, and we trained them on skeleton-based interaction data. We examine the performance of the AIA attack under both white-box and black-box settings. We show that our AIA attack can easily fool the two regression models to misinterpret the actor’s intentions and predict unexpected reactions. Such reactions have detrimental effects on either the actor or the reactor. Overall, our work reveals potential threats of subtle adversarial attacks on interactions involving AI.
In summary, our contributions are:
* We propose an adversarial attack approach - Adversarial Interaction Attack (AIA), that is domain-independent, and works for general sequential regression models.
* We propose an evaluation metric that can be applied to evaluate the performance of sequential regression attacks. Such a metric is currently missing from the literature.
* We empirically show that our AIA attack can generate targeted adversarial action sequences with small perturbations, which fool DNN regression models into making incorrect (possibly dangerous) predictions.
* We demonstrate via three case studies how our AIA attack may affect human and AI interactions in real scenarios, which motivates the need for effective defense strategies.
We highlight that our work is the first work on targeted sequential regression attack in a strict manner (i.e. purely numerical outputs without labels of any kind). We do not compare our work to previous works on skeleton-based action recognition as the focus of our work is fundamentally different. Specifically, the goal of our work is to design a new type of attack and evaluation metric that is capable of handling any type of regression-based problems in general. We thus leave the compatibility between our work and the previously proposed anthropomorphic constraints Liu et al. ([2019](#bib.bib39 "Adversarial attack on skeleton-based human action recognition")); Wang et al. ([2019a](#bib.bib40 "SMART: skeletal motion action recognition attack")); Zheng et al. ([2020](#bib.bib41 "Towards understanding the adversarial vulnerability of skeleton-based action recognition")) as a future area of interest.
2 Related Work
---------------
### 2.1 Adversarial Attack
Adversarial attacks can be either white-box or black-box depending on the attacker’s knowledge about the target model. White-box attack has full knowledge about the target model including parameters and training details Goodfellow et al. ([2014](#bib.bib3 "Explaining and harnessing adversarial examples")); Zheng et al. ([2019](#bib.bib24 "Distributionally adversarial attack")); Croce and Hein ([2020](#bib.bib25 "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks")); Jiang et al. ([2020](#bib.bib55 "Imbalanced gradients: a new cause of overestimated adversarial robustness")), while black-box attack can only query the target model Chen et al. ([2017](#bib.bib19 "Zoo: zeroth order optimization based black-box attacks to deep neural networks without training substitute models")); Ilyas et al. ([2018](#bib.bib15 "Prior convictions: black-box adversarial attacks with bandits and priors")); Bhagoji et al. ([2018](#bib.bib18 "Practical black-box attacks on deep neural networks using efficient query mechanisms")); Dong et al. ([2019b](#bib.bib16 "Efficient decision-based black-box adversarial attacks on face recognition")); Jiang et al. ([2019](#bib.bib17 "Black-box adversarial attacks on video recognition models")); Bai et al. ([2020](#bib.bib22 "Improving query efficiency of black-box adversarial attack")) or use a surrogate model Liu et al. ([2016](#bib.bib59 "Delving into transferable adversarial examples and black-box attacks")); Tramèr et al. ([2017](#bib.bib58 "The space of transferable adversarial examples")); Dong et al. ([2018](#bib.bib57 "Boosting adversarial attacks with momentum"), [2019a](#bib.bib60 "Evading defenses to transferable adversarial examples by translation-invariant attacks")); Andriushchenko et al. ([2020](#bib.bib61 "Square attack: a query-efficient black-box adversarial attack via random search")); Wu et al. 
([2020a](#bib.bib9 "Skip connections matter: on the transferability of adversarial examples generated with resnets")); Wang et al. ([2021](#bib.bib23 "A unified approach to interpreting and boosting adversarial transferability")). Adversarial attacks can also be targeted or untargeted. Under the classification setting, untargeted attacks aim to fool the model such that its output is different from the correct label, whereas targeted attack aims to fool the model to return a target label of the attacker’s interest. White-box attacks can be achieved by solving either the targeted or untargeted adversarial objective using first-order gradient methods Goodfellow et al. ([2014](#bib.bib3 "Explaining and harnessing adversarial examples")); Kurakin et al. ([2016](#bib.bib13 "Adversarial examples in the physical world")). Optimization-based methods have also been proposed to achieve the adversarial objective, and at the same time, minimize the perturbation size Carlini and Wagner ([2017](#bib.bib7 "Towards evaluating the robustness of neural networks")); Chen et al. ([2018](#bib.bib21 "Ead: elastic-net attacks to deep neural networks via adversarial examples")).
Most of the above existing attacks were proposed for images and classification models, and the perturbation is usually constrained to be small (e.g., ∥ϵ∥∞=8 for pixel values in [0,255]) so as to be imperceptible to human observers. Defenses against adversarial attacks have also been explored on image datasets Madry et al. ([2018](#bib.bib4 "Towards deep learning models resistant to adversarial attacks")); Zhang et al. ([2019](#bib.bib64 "Theoretically principled trade-off between robustness and accuracy")); Wang et al. ([2019b](#bib.bib8 "On the convergence and robustness of adversarial training")); Bai et al. ([2019](#bib.bib65 "Hilbert-based generative defense for adversarial examples")); Wang et al. ([2020](#bib.bib54 "Improving adversarial robustness requires revisiting misclassified examples")); Wu et al. ([2020b](#bib.bib63 "Adversarial weight perturbation helps robust generalization")); Bai et al. ([2021](#bib.bib66 "Improving adversarial robustness via channel-wise suppressing")).
Attacking Regression Models.
Untargeted regression attacks can be derived from classification attacks by simply attacking the regression loss Balda et al. ([2019](#bib.bib34 "Perturbation analysis of learning algorithms: generation of adversarial examples from classification to regression")).
However, it is more difficult to perform targeted regression attack such that the model outputs a target sequence. This is because, unlike classification models that contain a finite set of discrete labels, regression models can have infinitely many possible outcomes. Hence, most existing attacks on regression models have focused on the untargeted setting.
Meng et al. ([2019](#bib.bib38 "White-box target attack for eeg-based bci regression problems")) proposed a univariate regression loss with the goal of changing the outputs of EEG-based BCI regression models to a value that is at least t away from the natural outcome. This loss function guarantees only that the adversarial output will be at a specified distance from the natural output; it does not constrain how large or small the output can actually become.
In natural language processing (NLP), Cheng et al. ([2020](#bib.bib36 "Seq2Sick: evaluating the robustness of sequence-to-sequence models with adversarial examples")) proposed a targeted attack on recurrent language models. That work aims to replace arbitrary words in the output sequence with a small set of target adversarial keywords, regardless of their order and position of occurrence; in our problem, by contrast, the order of the target sequence matters. While word embeddings can be used to evaluate attack performance on language models, an appropriate performance metric is still lacking in the field of interaction prediction, making it difficult to evaluate the effectiveness of an attack.
None of the existing works have implemented an attack that is able to change the whole output sequence completely. In our work, we propose such an attack, which can change the entire output sequence with target frames appearing in our desired order.
Adversarial Attack on Action Recognition.
Previous attacks on skeleton-based action recognition have proposed several constraints based on extensive study of anthropomorphism and motion. These include postural constraints as the maximum changes in joint angles, and inter-frame constraints based on the notion of velocities, accelerations, and jerks Liu et al. ([2019](#bib.bib39 "Adversarial attack on skeleton-based human action recognition")); Wang et al. ([2019a](#bib.bib40 "SMART: skeletal motion action recognition attack")); Zheng et al. ([2020](#bib.bib41 "Towards understanding the adversarial vulnerability of skeleton-based action recognition")). Additionally, Liu et al. ([2019](#bib.bib39 "Adversarial attack on skeleton-based human action recognition")) utilized a Generative Adversarial Network (GAN) loss to model anthropomorphic plausibility.
These constraints are distinct from our work, but could potentially be employed in combination with our proposed attack to improve naturalness of adversarial action sequences.
### 2.2 Interaction Recognition and Prediction
The use of skeleton data has gained its popularity in action recognition and prediction research. Owing to the fact that reliable skeleton data can be easily extracted from modern RGB-D sensors or RGB camera images, these techniques can be easily extended to practical applications Yun et al. ([2012](#bib.bib46 "Two-person interaction detection using body-pose features and multiple instance learning")).
One benchmark interaction dataset is the SBU Kinect Interaction Dataset. Different from most skeleton-based action recognition datasets, which focus on single-person activities, the SBU Kinect Interaction Dataset captures various activities with two characters involved. Predicting interactions is a much harder task than predicting single-person activities, due to the complexity and the non-periodicity of the problem Yun et al. ([2012](#bib.bib46 "Two-person interaction detection using body-pose features and multiple instance learning")). Specifically, in the interaction scenario, two characters are involved, but the contribution from each character may not be equal. For instance, interactions such as approaching and departing have only one active character; the other remains stationary across all time frames.
Convolutional Neural Networks (CNNs) Du et al. ([2015a](#bib.bib29 "Skeleton based action recognition with convolutional neural network")); Nunez et al. ([2018](#bib.bib30 "Convolutional neural networks and long short-term memory for skeleton-based human activity and hand gesture recognition")); Li et al. ([2017](#bib.bib31 "Skeleton-based action recognition with convolutional neural networks")) and Recurrent Neural Networks (RNNs) Du et al. ([2015b](#bib.bib28 "Hierarchical recurrent neural network for skeleton based action recognition")) are two popular choices for tackling the interaction recognition problem. Models from the RNN family, such as the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), are commonly chosen for interaction recognition because handling sequential data is natural for them. Maghoumi and LaViola Jr ([2019](#bib.bib33 "DeepGRU: deep gesture recognition utility")) proposed a recurrent model named DeepGRU that reached state-of-the-art performance. Temporal Convolutional Networks (TCNs) are also a common choice of model when dealing with spatio-temporal data. TCNs, just like RNNs, can take sequences of any length. TCNs rely on a causal convolution operation to ensure no information leakage from the future to the past Bai et al. ([2018](#bib.bib32 "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling")). TCNs also achieved previous state-of-the-art results Kim and Reiter ([2017](#bib.bib49 "Interpretable 3d human action analysis with temporal convolutional networks")) and are a component adopted by many recent works on skeleton-based action recognition Meng et al. ([2018](#bib.bib50 "Human action recognition based on quaternion spatial-temporal convolutional neural network and lstm in rgb videos")); Yan et al. ([2018](#bib.bib51 "Spatial temporal graph convolutional networks for skeleton-based action recognition")).
In this paper, we will modify the DeepGRU network proposed by Maghoumi and LaViola Jr ([2019](#bib.bib33 "DeepGRU: deep gesture recognition utility")) and the TCN network proposed by Bai et al. ([2018](#bib.bib32 "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling")) for interaction prediction and examine their vulnerability to our proposed attack based on the SBU Kinect Interaction Dataset.
3 Proposed Adversarial Interaction Attack
------------------------------------------
In this section, we first provide a mathematical formulation of the targeted adversarial sequence attack problem. We then introduce the loss functions used by our AIA attack.
Overview. Intuitively, the goal of our AIA attack is to deceive the *reactor* AI agent into thinking that the *actor* is doing a different specific action by making minor changes to the positions of the *actor’s* joints or the angles between joints.
The reactor agent will consequently respond by performing the reaction that is targeted by the attack.
### 3.1 Formal Problem Definition
A skeleton sequence with T frames can be represented mathematically as the vector X=(x1,x2,...,xT), where xi is the skeleton representation of the ith frame: a vector consisting of the 3D coordinates of the human skeleton joints. More specifically, xi∈RN×3, where N denotes the number of joints. In our approach, we flatten xi into R3N.
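The flattening step amounts to a reshape of the array; a minimal numpy sketch (illustrating shapes only, not the authors' code; T=40 and N=15 are example values):

```python
import numpy as np

T, N = 40, 15                  # example values: frames, joints per skeleton
X = np.random.rand(T, N, 3)    # each frame x_i: N joints with 3D coordinates

# Flatten each frame from R^{N x 3} into R^{3N}
X_flat = X.reshape(T, 3 * N)
print(X_flat.shape)            # (40, 45)
```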
First, we define the formal notion of interaction. Suppose the two characters in a two-person interaction scenario are *actor A* and *reactor B*. The task of an interaction prediction model f is to predict an appropriate reaction (i.e., skeleton) yt at each time step t for reactor B based on the observed skeleton sequence of actor A (x1,⋯,xt). This can be written mathematically as:
$$f(x_1, \cdots, x_{t-1}, x_t) = y_t.$$
Given an input skeleton sequence
X=(x1,x2,...,xT), an adversarial target skeleton sequence Y′=(y′1,y′2,...,y′T), and a prediction model f:RT×3N→RT×3N, the goal of our AIA attack is to find an adversarial input sequence X′=(x′1,⋯,x′T) by solving the following optimization problem:
$$\min_{X'} \sum_{t \in T} \|x'_t - x_t\|_\infty \quad \text{s.t.} \quad \sum_{t \in T} \|f(x'_1, \cdots, x'_t) - y'_t\|_2 < \kappa, \tag{1}$$
where ∥⋅∥p is the Lp norm, and κ≥0 is a *tolerance factor*, which serves as a cutoff that distinguishes whether the output sequence is recognizable as the target reaction. This gives us more flexibility when crafting the adversarial input sequence X′ because the acceptable target sequence is non-singular; the output sequence does not need to be exactly the same as the target sequence to resemble a particular action. We empirically determine this factor based on an informal user survey in Section [5.1](#S5.SS1 "5.1 Tolerance Factor κ ‣ 5 Empirical Understanding of AIA ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions").
Intuitively, the above objective is to find a sequence X′ with minimum perturbation from X, such that the distance between the output and the target is less than κ/T on average for each time step.
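The two quantities in (1) can be sketched as follows. This is a non-authoritative illustration; the function names are ours, and frames are assumed to live in R^{3N} as described above:

```python
import numpy as np

def perturbation_cost(X_adv, X):
    # Objective of (1): sum over frames of the L-infinity norm of the perturbation.
    return np.abs(X_adv - X).max(axis=1).sum()

def constraint_value(outputs, targets):
    # Left-hand side of the constraint in (1): sum of per-frame L2 distances
    # between the model outputs and the adversarial target reaction.
    return np.linalg.norm(outputs - targets, axis=1).sum()

# Toy check: a uniform 0.1 shift on a 3-frame sequence costs 0.1 per frame.
X = np.zeros((3, 6))
X_adv = X + 0.1
cost = perturbation_cost(X_adv, X)
```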
### 3.2 Adversarial Loss Function
Our goal is to develop a mechanism that crafts an adversarial input sequence which solves the above optimization problem given any target output sequence, while also maintaining the naturalness of the adversarial input sequence.
In order to achieve this goal, we propose the following adversarial loss function:
$$L_{adv}=L_{spatial}+\lambda L_{temporal}, \tag{2}$$
where the Lspatial loss term minimizes the spatial distance between the output sequence and the target sequence, and the Ltemporal loss term maximizes the coherence of the perturbed input sequence so as to maintain the naturalness of the adversarial input sequence.
#### Spatial Loss.
The spatial loss term aims to generate adversarial output sequences that are visually similar to the target reaction sequences; that is, its objective is to minimize the spatial distance between the output joint locations and the neighbourhood of the target joints for every time step. Following the formulation of the relaxed optimization problem in ([1](#S3.E1 "(1) ‣ 3.1 Formal Problem Definition ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")), we use the L2 norm to measure the distance between two sets of joint locations:
$$L_{spatial}=\sum_{t\in T}\inf\{\lVert f(x'_1,\cdots,x'_t)-p_t\rVert_2 \mid p_t\in S_t\} \tag{3}$$
with St being a (3N−1)-sphere defined by:
$$S_t(y'_t,\eta)=\{p_t\in\mathbb{R}^{3N} \mid \lVert p_t-y'_t\rVert_2=\eta\}. \tag{4}$$
Here, η=κ/T is the tolerance factor κ of equation ([1](#S3.E1 "(1) ‣ 3.1 Formal Problem Definition ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")) averaged over the T time steps.
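Because S_t is a sphere centred at y′_t, the infimum in (3) admits the closed form |‖f(x′_1,⋯,x′_t)−y′_t‖₂ − η|, which the following sketch uses (an illustrative implementation, not the authors' code):

```python
import numpy as np

def spatial_loss(outputs, targets, eta):
    # L_spatial of (3): per-frame distance from the model output to the sphere
    # of radius eta around the target, summed over time. The distance from a
    # point to a sphere centred at y'_t is | ||f - y'_t||_2 - eta |.
    d = np.linalg.norm(outputs - targets, axis=1)
    return np.abs(d - eta).sum()
```

An output that sits exactly η away from every target frame incurs zero spatial loss, which is what makes the sphere a "neighbourhood" target rather than a point target.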
#### Temporal Loss.
The temporal loss term guarantees the naturalness of the generated adversarial input sequence. Specifically, the movement of each joint should be continuous in time, and motions with abrupt, large changes or teleportation should be penalised. The Ltemporal term achieves this by maximizing the coherence of each element in the perturbed input sequence with respect to its neighboring elements in the temporal dimension. This gives:
$$L_{temporal}=\sum_{t\in T}\left(\lVert x'_t-x'_{t-1}\rVert_2+\lVert x'_t-x'_{t+1}\rVert_2\right) \tag{5}$$
Note that a scaling factor 0≤λ≤1 is introduced in front of Ltemporal to balance the two loss terms.
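A sketch of the temporal term and the combined loss follows. How (5) handles the boundary frames is not spelled out in the paper; the sketch assumes each interior frame-to-frame difference is simply counted twice (once as t→t−1 and once as t→t+1):

```python
import numpy as np

def temporal_loss(X_adv):
    # L_temporal of (5): penalise frame-to-frame jumps in the perturbed input.
    diffs = np.linalg.norm(X_adv[1:] - X_adv[:-1], axis=1)
    return 2.0 * diffs.sum()  # each interior difference appears twice in (5)

def adversarial_loss(outputs, targets, X_adv, eta, lam=0.1):
    # L_adv of (2): spatial term (3) plus lambda times the temporal term (5).
    d = np.linalg.norm(outputs - targets, axis=1)
    return np.abs(d - eta).sum() + lam * temporal_loss(X_adv)
```

A sequence that teleports between frames scores a much larger temporal loss than one that moves the same total distance smoothly, which is exactly the behaviour the λ-weighted term is meant to discourage.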
We use the first-order method Projected Gradient Descent (PGD) Madry et al. ([2018](#bib.bib4 "Towards deep learning models resistant to adversarial attacks")) to minimize the combined adversarial loss iteratively as follows:
$$X'_0=X, \qquad X'_{m+1}=\Pi_{X,\epsilon}\!\left(X'_m-\alpha\cdot\operatorname{sign}\!\left(\nabla_{X'_m}L_{adv}(X'_m,Y')\right)\right), \tag{6}$$
where ΠX,ϵ(⋅) is the projection operation that clips the perturbation back to within ϵ-distance of X whenever it exceeds it, ∇X′mLadv(X′m,Y′) is the gradient of the adversarial loss with respect to the input sequence, m is the current perturbation step out of a total of M steps, α is the step size, and ϵ is the maximum perturbation factor. The sequence Y′ for a target reaction can be either customized or sampled from the original dataset.
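The PGD iteration can be sketched as follows. `grad_fn` is a hypothetical stand-in for the gradient of L_adv with respect to the input, which in practice an autodiff framework would supply:

```python
import numpy as np

def pgd_attack(grad_fn, X, eps=0.3, alpha=0.03, steps=400):
    X_adv = X.copy()  # start from the natural input, X'_0 = X
    for _ in range(steps):
        # Signed gradient descent step of size alpha on the adversarial loss.
        X_adv = X_adv - alpha * np.sign(grad_fn(X_adv))
        # Projection Pi_{X, eps}: clip the perturbation back into the eps-ball.
        X_adv = X + np.clip(X_adv - X, -eps, eps)
    return X_adv
```

With a toy quadratic loss whose minimiser lies outside the ε-ball, the iterate marches toward the minimiser but is clipped at distance ε from X, illustrating the role of the projection.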
Figure 1: Side-by-side comparison of Case Study 1 ‘handshaking’ to ‘punching’. Top-Bottom: original prediction, adversarial prediction. Blue character: input, green character: output.
Figure 2: Side-by-side comparison of Case Study 2 ‘punching’ to ‘handshaking’. Top-Bottom: original prediction, adversarial prediction. Blue character: input, green character: output.
Figure 3: Side-by-side comparison of Case Study 3 ‘approaching’ to ‘remaining’. Top-Bottom: original prediction, adversarial prediction. Blue character: input, green character: output.
4 Overview of Several Case Studies
-----------------------------------
In this section, we conduct case studies on three selected sets of attack objectives that map readily onto real-world scenarios and serve as motivation for our approach. Detailed experimental settings can be found in Section [5](#S5 "5 Empirical Understanding of AIA ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions"). Dynamic versions of the case studies and more examples are provided in the supplementary materials.
### 4.1 Case Study 1: ‘handshaking’ to ‘punching’
Figure [1](#S3.F1 "Figure 1 ‣ 3.2 Adversarial Loss Function ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions") illustrates a successful AIA attack that fools the model into predicting a ‘punching’ action for the reactor (the green character) as a response to the adversarially perturbed ‘handshaking’ action of the actor (the blue character). Note that the perturbation only slightly changed the actor’s action.
This reveals an important safety risk that needs to be carefully addressed before machine learning based AI agents can be widely used in human daily life.
Suppose we are at an interactive AI exhibition and a participant would like to shake hands with an AI robot agent. He gradually extends his hand, sending an interaction request to the AI agent, and expects it to respond to his invitation by shaking hands with him. However, instead of reaching its hand out gently, the AI agent punches the participant in the face because the participant’s body does not stay straight. It would be extremely hazardous if the human unintentionally wiggled his body in a pattern similar to the adversarial perturbation introduced in this case study. While the actual chance of this happening is extremely low due to the high complexity of the data in both the spatial and temporal dimensions, the threat could nevertheless materialize if AI workers become widely deployed worldwide. In this case, the human is a victim who inadvertently performs an adversarial attack (wiggling their body).
### 4.2 Case Study 2: ‘punching’ to ‘handshaking’
In this case study, we consider the opposite of the previous scenario: human exploiters actively attack AI agents and derive benefit from being the attackers.
In the future, it could become common practice to use AI agents for dangerous tasks so as to lower the chance of human operators incurring injuries or fatalities. Security guard is one such job that might be taken over by an AI agent. Imagine that a secret agency employing AI security guards is invaded by intruders and combat becomes necessary. The AI guard will fail in its role if the invaders know how to apply effective adversarial attacks against it. This is the case in Figure [2](#S3.F2 "Figure 2 ‣ 3.2 Adversarial Loss Function ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions"), where the model was fooled into suggesting ‘handshaking’ for the reactor (the green character) rather than ‘punching’.
### 4.3 Case Study 3: ‘approaching’ to ‘remaining’
Finally, Case Study 3, demonstrated in Figure [3](#S3.F3 "Figure 3 ‣ 3.2 Adversarial Loss Function ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions"), examines how a cheater might bypass an AI agent’s detection.
Whilst automatic ticket checkers have been widely adopted, manual ticket checking is still required in numerous situations. For instance, a public transportation company may want to check whether a passenger seated in first class has paid the upgrade fee. Now suppose that the company decides to hire AI agents to do the ticket-checking job. It will lose a huge amount of income if passengers know how to stop the ticket checkers from ‘approaching’ as in Figure [3](#S3.F3 "Figure 3 ‣ 3.2 Adversarial Loss Function ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions"), or even change their ‘approaching’ response to ‘departing’.
5 Empirical Understanding of AIA
---------------------------------
### 5.1 Tolerance Factor κ
The objective of AIA attack is defined with respect to a tolerance factor κ (see ([1](#S3.E1 "(1) ‣ 3.1 Formal Problem Definition ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")), ([3](#S3.E3 "(3) ‣ 3.2 Adversarial Loss Function ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")) and ([4](#S3.E4 "(4) ‣ 3.2 Adversarial Loss Function ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions"))), which is a flexible metric that distinguishes whether the output sequence is close to the targeted adversarial reaction.
Because there are many factors involved, such as the character’s height, handedness, and the direction the character is facing, conventional distance metrics such as L1 and L2 norms are not suitable to define precisely what the pattern of a specific action should look like. Therefore, we determine the value of κ based on human perception via an informal user survey.
In order to obtain appropriate values of κ for evaluating whether an attack is successful, we randomly sampled 5 out of the 8 sets of attack objectives and presented them to 82 human judges, including computer science faculty members and students. Each objective set is composed of an action-reaction pair and contains output sequences generated from 6 different values of ϵ (from left to right in ascending order). For each sample set, we asked the human judges to choose the leftmost sequence they believe is performing the target reaction. The sampled objectives and the responses from the 82 human judges are recorded in Table [1](#S5.T1 "Table 1 ‣ 5.1 Tolerance Factor κ ‣ 5 Empirical Understanding of AIA ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions").
Based on the responses from the 82 human judges, we computed the tolerance factor κ in the optimization problem defined in ([1](#S3.E1 "(1) ‣ 3.1 Formal Problem Definition ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")) based on the average of
$$\sum_{t\in T}\lVert f(x'_1,\cdots,x'_t)-y'_t\rVert_2 \tag{7}$$
over the 5 sample objective sets. The calculation of ([7](#S5.E7 "(7) ‣ 5.1 Tolerance Factor κ ‣ 5 Empirical Understanding of AIA ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")) for each objective set is based on the minimum ϵ polled from the 82 human judges, and the corresponding value of κ is then selected as the optimal value (boldfaced in Table [1](#S5.T1 "Table 1 ‣ 5.1 Tolerance Factor κ ‣ 5 Empirical Understanding of AIA ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")).
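Assuming the boldfaced entries in Table 1 correspond to the modal ε choice among the judges (our interpretation; the paper does not state the selection rule explicitly), the selection for one objective can be sketched as:

```python
# Votes and kappa values for the 'Handshaking' objective, copied from Table 1.
votes  = {0.075: 1, 0.15: 4, 0.225: 44, 0.3: 3, 0.375: 12, 0.45: 14}
kappas = {0.075: 90.9, 0.15: 84.28, 0.225: 79.52, 0.3: 74.49, 0.375: 45.04, 0.45: 35.03}

best_eps = max(votes, key=votes.get)  # epsilon chosen by the most judges
kappa = kappas[best_eps]
```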
Note that κ serves as a topological boundary between the natural and the adversarial outputs, whereas ϵ is a maximum perturbation constraint that the input perturbation must not exceed.
| ϵ= | 0.075 | 0.15 | 0.225 | 0.3 | 0.375 | 0.45 |
| --- | --- | --- | --- | --- | --- | --- |
| Handshaking | 1 (κ=90.9) | 4 (κ=84.28) | 44 (κ=79.52) | 3 (κ=74.49) | 12 (κ=45.04) | 14 (κ=35.03) |
| Punching | 58 (κ=52.04) | 13 (κ=47.63) | 6 (κ=43.97) | 3 (κ=41.76) | 0 (κ=39.14) | 2 (κ=34.91) |
| Kicking | 3 (κ=100.61) | 71 (κ=93.17) | 7 (κ=86.57) | 1 (κ=80.68) | 0 (κ=47.47) | 0 (κ=35.36) |
| Departing | 0 (κ=85.03) | 7 (κ=76.78) | 26 (κ=71.77) | 12 (κ=67.58) | 1 (κ=41.78) | 10 (κ=32.70) |
| Pushing | 6 (κ=28.66) | 3 (κ=26.55) | 2 (κ=25.16) | 14 (κ=23.98) | 49 (κ=22.77) | 5 (κ=21.31) |
Table 1: Responses from the 82 human judges. The optimal κ for each attack objective is highlighted in bold.
Figure 4: Adversarial input action sequences generated by our AIA attack with (bottom row, λ=0.1) and without (top row) the temporal constraint Ltemporal.
### 5.2 Effect of the Temporal Constraint
Here, we study the effect of the temporal constraint Ltemporal defined in ([5](#S3.E5 "(5) ‣ 3.2 Adversarial Loss Function ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")) on the naturalness of the generated adversarial input action sequence.
Specifically, we investigate how the input skeleton sequence changes in the depth axis as that is the only perturbed dimension throughout our experiments. Our hypothesis is that this additional factor will enable our AIA attack to find adversarial input sequences that change more smoothly with respect to time.
We demonstrate visually a comparison between adversarial sequences generated with and without the temporal constraint in Figure [4](#S5.F4 "Figure 4 ‣ 5.1 Tolerance Factor κ ‣ 5 Empirical Understanding of AIA ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions"). The top sequence is an adversarial input sequence generated with the Ltemporal term removed, whereas the bottom sequence is an adversarial input sequence generated with λ=0.1 scaling factor applied to the Ltemporal term. In comparison to the previous experiment, we plot the skeletons from the depth-y point of view as we are more interested in visualizing the perturbation.
As shown in Figure [4](#S5.F4 "Figure 4 ‣ 5.1 Tolerance Factor κ ‣ 5 Empirical Understanding of AIA ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions"), it is observable that in general, the top sequence has more abrupt changes in body position between each time step. This almost never happens in the bottom sequence. More specifically, in the bottom sequence, when a larger change to the body posture is necessary, the change is always preceded by smaller changes in the same direction. In contrast, in the top sequence, any large changes can take place in just one time step. This type of aggressive change should be avoided as much as possible, as it could make the attack more easily detectable.
6 Performance Evaluation
-------------------------
We conduct two sets of experiments to evaluate the effectiveness (white-box attack success rate) and the transferability (black-box attack success rate) of our AIA attack.
### 6.1 Experimental Settings
#### Dataset.
We conduct our experiments on the benchmark SBU Kinect Interaction Dataset, which is composed of interactions of eight different categories, namely ‘approaching’, ‘departing’, ‘kicking’, ‘punching’, ‘pushing’, ‘hugging’, ‘handshaking’, and ‘exchanging’. It contains 21 sets of data sampled from 7 participants using a Microsoft Kinect sensor, with approximately 300 interactions in total. Each character’s information is encoded into 15 joints with the x, y, and depth dimensions. The values of x and y fall within [0,1], and depth within [0,7.8125].
In order to extract action and reaction sequences, we partitioned each interaction into two individual sequences, one per character. One sequence is used as the action (input) and the other as the reaction (output). Because the dataset is small, we trained our response predictors from the perspectives of both characters; that is, we used the skeleton sequences of both characters as input data independently. For each interaction sequence x=x1⌢x2, we create two input/target pairs (x1,x2) and (x2,x1).
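The pairing described above can be sketched with a trivial helper (names are ours):

```python
def make_pairs(interactions):
    # interactions: list of (x1, x2) tuples, one skeleton sequence per character.
    pairs = []
    for x1, x2 in interactions:
        pairs.append((x1, x2))  # character 1 acts, character 2 reacts
        pairs.append((x2, x1))  # and vice versa
    return pairs
```

Each recorded interaction therefore contributes two training examples, doubling the effective size of the small dataset.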
#### Models and Training.
We adopted one convolutional model, TCN Bai et al. ([2018](#bib.bib32 "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling")), and one recurrent model, DeepGRU Maghoumi and LaViola Jr ([2019](#bib.bib33 "DeepGRU: deep gesture recognition utility")), and modified them such that the models predict sequences instead of categorical labels.
Our TCN model has 10 hidden layers with 256 units in each layer and our DeepGRU model follows Maghoumi and LaViola Jr ([2019](#bib.bib33 "DeepGRU: deep gesture recognition utility")) exactly with the output being a linear layer instead of the attention-classifier framework.
We trained each model on the preprocessed dataset for 1,000 epochs using the Adam optimizer with a learning rate of 0.001. We held out sets s01s02, s03s04, s05s02, s06s04 in the original dataset as our test set.
#### Attack Setting.
In all experiments, we used the same step size of α=0.03 and ran our AIA attack for M=400 iterations. In addition, we used the Adam optimizer with a learning rate of 1e−3 to minimize the adversarial loss function Ladv. The scaling factor λ for the temporal loss term Ltemporal was set to 0.1. The tolerance factor κ was selected for each target reaction based on our previous informal user survey in Section [5.1](#S5.SS1 "5.1 Tolerance Factor κ ‣ 5 Empirical Understanding of AIA ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions") (the exact values can be found in Table [1](#S5.T1 "Table 1 ‣ 5.1 Tolerance Factor κ ‣ 5 Empirical Understanding of AIA ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")).
### 6.2 Effectiveness of our AIA Attack
In this experiment, we examine the effectiveness of our AIA attack under the white-box setting with different values of maximum perturbation ϵ allowed.
In order for an attack to be considered successful, it has to satisfy two conditions: 1) the adversarial output sequences need to be recognizable as the target reaction (related to κ), and 2) the adversarial input sequences need to be visually similar enough compared to the natural input sequences such that it can circumvent security detection (related to ϵ). Hence, the smaller the ϵ the attack can work under, the more effective the attack is.
Without loss of generality, and in order to control the overall change to the input sequence, we perturbed only the depth dimension of each joint. This also makes the perturbations much easier to visualize. Note that this is a more tightly constrained optimization problem than the one originally proposed, so the outcome of this experiment applies to the original problem as well.
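Restricting the perturbation to depth can be done with a coordinate mask on the gradient. The sketch below assumes each joint is flattened as an (x, y, depth) triple, which is our assumption about the layout rather than something the paper states:

```python
import numpy as np

T, N = 2, 15
depth_mask = np.zeros(3 * N)
depth_mask[2::3] = 1.0  # 1 at every depth coordinate, 0 at x and y

grad = np.ones((T, 3 * N))       # stand-in for the loss gradient
masked_grad = grad * depth_mask  # x and y components are never perturbed
```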
#### Adversarial Targets.
We created 8 sets of target reactions, corresponding to all 8 interactions in the SBU Kinect Interaction Dataset. The objective of each set of targets is to change the output reactions of all test data into one specific target reaction. We then perform targeted adversarial attacks based on these objectives over a range of ϵ values.
We consider an attack to be successful if the sum term in ([1](#S3.E1 "(1) ‣ 3.1 Formal Problem Definition ‣ 3 Proposed Adversarial Interaction Attack ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions")) computed on the test datum is less than the human-determined κ based on the sample sets. Otherwise we consider the attack to have failed. The average attack success rates over all 8 target sets under various ϵ are reported for both models in the left subfigure of Figure [5](#S6.F5 "Figure 5 ‣ Adversarial Targets. ‣ 6.2 Effectiveness of our AIA Attack ‣ 6 Performance Evaluation ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions"). We used the κ sampled from human judges to evaluate attack success rates for objectives 1 to 5. We expect the κ to be generalizable to unseen reactions, so we used the average κ over 5 objective sets to evaluate the remaining 3 attack objectives.
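The success criterion above amounts to thresholding the sum term of (1) against the human-determined κ:

```python
import numpy as np

def attack_succeeded(outputs, targets, kappa):
    # Success iff the sum of per-frame L2 distances between the model outputs
    # and the target reaction falls below the human-determined tolerance kappa.
    return np.linalg.norm(outputs - targets, axis=1).sum() < kappa
```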
Figure 5: Average white-box (left) and black-box (right) attack success rate of our AIA attack on TCN and DeepGRU.
#### Results.
On average, with a perturbation factor ϵ of 0.225 to 0.3, our AIA attack is able to alter almost all output sequences of the DeepGRU model into any target sequence. On the other hand, a larger ϵ of 0.375 to 0.45 is necessary for AIA to achieve a similar level of performance on the TCN model. In general, the TCN model is more robust to our attack than the DeepGRU model. However, under this white-box setting, we were able to achieve a 100% attack success rate on almost all target sets for both models.
We conclude from this experiment that, when model parameters are available, our AIA attack is highly effective against deep sequential regression models. Recall that the depth value falls within [0, 7.8125]; our AIA algorithm is thus able to accomplish most attack objectives with small perturbations of 2% to 5% of natural input sequences. More generally, our attack works for any target sequence, not only those corresponding to specific interactions from the dataset. This allows our attack method to serve both targeted and untargeted goals: for untargeted goals, the attacker simply picks an arbitrary target sequence that is far enough from the original sequence.
### 6.3 Black-Box Transferability
In addition to white-box effectiveness, we examine how transferable our attack is. An adversarial example generated on one model is said to be transferable if it also fools other independently trained models. In this experiment, we examine the robustness of the TCN and DeepGRU models against adversarial examples generated on each other.
#### Black-box Setting.
We employed the same metric established in Section 5.1 to determine whether an attack is successful. To evaluate how strong our attack is under the black-box setting, we reused the adversarial input sequences from the previous experiment: we fed all adversarial sequences generated on one model into the other and inspected their effectiveness against the unseen model. In other words, we fed adversarial sequences generated on the DeepGRU model into the TCN model and vice versa. The average black-box attack success rates over a range of ϵ are reported for both models in the right subfigure of Figure [5](#S6.F5 "Figure 5 ‣ Adversarial Targets. ‣ 6.2 Effectiveness of our AIA Attack ‣ 6 Performance Evaluation ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions").
#### Results.
Surprisingly, adversarial examples generated from the TCN model are remarkably strong. With an ϵ value of 0.375 to 0.45, adversarial actions generated from the TCN model successfully fooled the DeepGRU model more than 80% of the time for almost all attack objectives.
Along with the results in Section [6.2](#S6.SS2 "6.2 Effectiveness of our AIA Attack ‣ 6 Performance Evaluation ‣ Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions"), this substantiates that our AIA attack has high transferability in addition to being effective.
We also observed that adversarial actions generated from the DeepGRU model are rather weak against the TCN model under the black-box setting, achieving an average success rate of only 30% irrespective of the maximum perturbation ϵ permitted. Recall that the TCN model was also the more robust of the two in the white-box setting.
We suspect that this is because the convolutional layers used in TCN are more robust than the gated recurrent units of DeepGRU.
Specifically, in order to fool the TCN model, the attack needs to take into account the high level feature maps between the convolutional layers.
However, adversarial examples generated from the DeepGRU model might not be able to fool the convolutional layers of TCN because these high level features were not taken into consideration in the first place. Note that, while being relatively more robust, TCN also leads to more transferable attacks.
We leave further investigation of this disparity to future work.
7 Conclusion
-------------
In this paper, we presented a framework for attacking general spatio-temporal regression models. We proposed the first targeted sequential regression attack capable of altering the entire output sequence: the Adversarial Interaction Attack (AIA). On top of that, we defined an evaluation metric that can be adopted to evaluate the performance of adversarial attacks on sequential regression problems. We demonstrated on variants of two previous state-of-the-art action recognition models, TCN and DeepGRU, that our AIA attack is very effective. Additionally, we showed that our AIA attacks are highly transferable when generated from suitable source models. We also discussed, through three case studies, how AIA might impact interactions between humans and AI in real scenarios. We hope this serves to motivate careful consideration of how to effectively incorporate AI-based agents into human daily life.
b7f7ccd7-8c2a-4a6a-a4ed-214b742ebccc | trentmkelly/LessWrong-43k | LessWrong | The crucible — how I think about the situation with AI
The basic situation
The world is wild and terrible and wonderful and rushing forwards so so fast.
Modern economies are tremendous things, allowing crazy amounts of coordination. People have got really very good at producing stuff. Long-term trends are towards more affluence, and less violence.
The enlightenment was pretty fantastic not just for bringing us better tech, but also more truthseeking, better values, etc.
People, on the whole, are basically good — they want good things for others, and they want to be liked, and they want the truth to come out. This is some mix of innate and socially conditioned. (It isn’t universal.) But they also often are put in a tight spot and end up looking out for themselves or those they love. The hierarchy of needs bites. Effective altruism often grows from a measure of privilege.
The world is shaped by economics and by incentives and by institutions and by narratives and by societal values and by moral leadership and by technology. All of these have a complex interplay.
AI enters the picture
“AI” is a cluster of powerful technologies which are likely to reshape the world. Each of economics and incentives and institutions and narratives and societal values and moral leadership will (I expect) be profoundly impacted by advanced AI. And, of course, AI will be used to shape advanced technologies themselves.
From a zoomed-out perspective, there are three impacts of this AI transition which matter most[1]:
1. Technological progress will accelerate — automation of research means the world will get even faster and crazier
2. New kinds of entities could be making the choices that matter — either purely artificial agents, or new hybrid institutions which incorporate AI deeply in their decision-making
3. It will become far easier to centralize control — a single entity (AI system, or organization, or person) could end up with robust and enduring power over a large domain, or even the entire world
This is … kind of wild. 2) and 3 |
cc83d3d7-7233-4fc3-931c-1458b37347ca | trentmkelly/LessWrong-43k | LessWrong | Let’s use AI to harden human defenses against AI manipulation
Views my own not my employers.
Summary
tldr: AI may manipulate humans; we can defend against that risk better by optimising AIs to manipulate humans, seeing what manipulation techniques they use, and learning to detect those techniques.
It’s critical that humans can detect manipulation from AIs for two reasons. Firstly, so that we don’t reward AIs for manipulative behaviour (outer alignment). Secondly, so that we can block attempts at AI takeover that run through manipulating humans.
Many standard techniques in alignment can be directed towards this goal. Using debate, we can reward one AI for persuading a human that another AI is being manipulative. The first AI could use techniques from interpretability and cross examination.
This post discusses a complementary approach, where AIs do “gain of function” research to i) discover techniques for manipulating humans and ii) harden human defenses against these techniques (with AI assistance). In other words, AIs are optimised to find “human vulnerabilities” and then patch those vulnerabilities.
The positive outcomes are:
* Humans (with AI assistance) have stronger defenses against manipulation.
* AIs are trained to avoid being manipulative and to instead use persuasion techniques that favour finding true conclusions, e.g. deductive reasoning.
* As a consequence, AI debates favour truth more strongly.
By getting AI to research manipulation techniques, this approach takes advantage of the fact that, as we approach TAI, AIs will be able to do lots of cognitive work.
How might this look concretely? Here’s a very brief example:
1. Optimize a (possibly superhuman) “persuader-AI” to convince humans of true and false statements, where we know the ground truth.
1. The persuader-AI should manipulate humans if this makes it more convincing.
2. This step should be conducted in a secure environment to ensure that the manipulative persuader-AI doesn’t escape.
2. Optimize a (possibly superh |
cf85c103-a2fb-4fde-a912-682a4f142d16 | trentmkelly/LessWrong-43k | LessWrong | Meetup : LessWrong Hamburg - Report from Berlin
Discussion article for the meetup : LessWrong Hamburg - Report from Berlin
WHEN: 26 April 2014 05:00:00PM (+0200)
WHERE: Einstein Bistro, Grindelberg 81-83, 20144 Hamburg, Germany
Main topic for this meetup is a report of my (Gunnars) participation in the European LW Community Event in Berlin. It is expected that this will lead to lots of side track discussions so no other topics or agenda but only games.
New participants are explicitly invited to this meetup! Even if you don't plan repeated participation you may want to know how the Berlin event went.
The location is the same as for the first meetup, directly next to the train and bus station Hoheluft, easily reachable from campus, central station and the homes of some participants. We will have a sign so you can find us. Sorry to say but it being a restaurant you will have to pay for a dish or drink.
Einstein Bistro: http://www.einstein-bistro.de/index.php?option=com_content&view=article&catid=36%3Akontakt&id=74%3Ahoheluft&Itemid=60
The mailing list: https://groups.google.com/forum/#!forum/lesswrong-hamburg
Discussion article for the meetup : LessWrong Hamburg - Report from Berlin |
7493287f-dbe5-4215-9338-ad55527806c0 | trentmkelly/LessWrong-43k | LessWrong | Natural Intelligence is Overhyped
Like this piece? It's cross-posted from by blog: https://collisteru.net/writing/
This is a work of fiction and parody. I have done my best to get the scientific details right as far as they are known today, but my real goal is social commentary, not scientific accuracy.
NOAA DISCOVERS INSCRIBED METEOR ARTIFACT UNDERNEATH ATLANTIC
Dec 8, 2018
Scientists from NOAA claim to have discovered an ancient text hundreds of kilometers off the coast of Nova Scotia. During a recent months-long exploration of the Sohm Abyssal Plain by the OKEANOS explorer, scientists deployed the new deep-sea probe Thalassa. The vessel comes equipped with an autonomously-driven mode and a human guidance mode, three superbright LED flashlights, and a claw to grip stones and other deep-sea detritus. While the aim of the NOAA team was to learn more about the deep-sea biomes of the North Atlantic, they got much more than they bargained for when a scientist spotted what looked to be a large, flat, dark stone with a totally smooth surface. Jessica Kochav, vice deputy of the expedition, describes what happened next:
"We were expecting a slab of concrete or piece of trash, so a lot of us thought it wasn't worth getting close. But something about it gave me a weird feeling, and I insisted that we investigate. It was a lot shinier than the other rocks and totally black. There was a giant and perfectly flat circular face half-buried in the sand. When we shined the light directly on it, the whole thing gleamed like a giant mirror. At this point we were really freaked out, hahah! The surface looked like it had a ton of little scratches on it. We took pictures, and Mark was able to drill off a little piece to bring up to the surface. I'm a marine geologist, not an archaeologist, so I tried not to draw any conclusions until we got the data in front of experts."
The sample was sent to a geophysics center in Colorado to be analyzed. Within a day scientists determined that the rock was composed uniformly |
Hydra
> A small sketch of a vital problem, and a brief gesture at a possible solution
>
>
> UBI forever?
> In what follows, I’ll be talking about permanently sustaining a democratic government which is responsive to its people's needs, even though they cannot sell their labor and do not own the means of production. But first we must note that even being in this situation, sustaining this state of affairs at all, is fortunate.
>
> Sustaining, for example, “UBI” is a good problem to have, because it presupposes that you’ve won UBI to begin with. People get very lackadaisical about this! Many think that AI taking jobs wouldn’t be that bad because we’ll all just get UBI. I cannot emphasize to you enough that:
>
> 1. There is no law of nature that says UBI necessarily follows a drop, even a massive drop, in the demand for labour.
> 2. Even if there is a UBI, and even if on aggregate, society becomes fantastically richer, there is no guarantee that UBI will be anything but meagre. Consumption and income inequality might increase greatly, perhaps to astronomical levels.
> 3. You should try to get your guarantees upfront. Waiting to seek UBI and similar guarantees until after we no longer have any labor bargaining power would be a mistake.
Scaling Evidence and Faith
Often when talking with people of faith about the issue of atheism, a common response I get is:
> “It takes as much faith to believe in evolution as it does to believe in god.”
Any atheist who has ever talked to a theist has likely encountered this line of reasoning. There is a distinction to be made between the glorifying of faith as evidence of truth, as theists do, and the desire to become less faithful in our understanding, as the rational truth-seeking person does.
As has been discussed previously, realistically P = 1 is not attainable, at least for the foreseeable future. Nor is P = 1 necessary to be comfortable in the universe. Richard Feynman and Carl Sagan, among others, felt the same way. We know that heuristics “fill the gaps” in knowledge of recognizable scenarios such that we can comfortably demand less evidence and still come to a reasonable conclusion about our surroundings in most cases. We also know the myriad ways in which those heuristics fail. Much has been written about uncertainty in the decision-making process, and I would imagine it will be the focus of much of the future research into logic. So, I have no problem concluding that there is some level of faith in all decision making. How much faith in a claim am I comfortable with? The way in which I have been thinking about the different levels of "faith" recently is as a sliding bar:
On the left side, natural evidence is indicated in blue, comprising the data points which indicate to the observer the natural proof of the argument. On the right, faith is indicated in red, comprising either counter-evidence for the claim or a lack of information. The bar’s units can be anything (probability, individual records) so long as both sides are measured in equal parts.
A few examples:
----------------------------------------
Claim: Natural and sexual selection as described by Charles Darwin accurately describes the processes which develop biological traits in separate species.
|
Debate over Cox’s theorem
There seems to be so much debate over the axioms of Cox’s theorem. How can it be a good foundation for statistics when it is so disputed?
Autonomous Robots
https://doi.org/10.1007/s10514-018-9746-1
Planning for cars that coordinate with people: leveraging effects on
human actions for planning and active information gathering over
human internal state
Dorsa Sadigh · Nick Landolfi · Shankar S. Sastry · Sanjit A. Seshia · Anca D. Dragan
Received: 16 February 2017 / Accepted: 2 April 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2018
Abstract
Traditionally, autonomous cars treat human-driven vehicles like moving obstacles. They predict their future trajectories and
plan to stay out of their way. While physically safe, this results in defensive and opaque behaviors. In reality, an autonomous
car’s actions will actually affect what other cars will do in response, creating an opportunity for coordination. Our thesis is
that we can leverage these responses to plan more efficient and communicative behaviors. We introduce a formulation of
interaction with human-driven vehicles as an underactuated dynamical system, in which the robot’s actions have consequences
on the state of the autonomous car, but also on the human actions and thus the state of the human-driven car. We model these
consequences by approximating the human’s actions as (noisily) optimal with respect to some utility function. The robot uses
the human actions as observations of her underlying utility function parameters. We first explore learning these parameters
offline, and show that a robot planning in the resulting underactuated system is more efficient than when treating the person
as a moving obstacle. We also show that the robot can target specific desired effects, like getting the person to switch lanes or
to proceed first through an intersection. We then explore estimating these parameters online, and enable the robot to perform
active information gathering: generating actions that purposefully probe the human in order to clarify their underlying utility
parameters, like driving style or attention level. We show that this significantly outperforms passive estimation and improves
efficiency. Planning in our model results in coordination behaviors: the robot inches forward at an intersection to see if it can
go through, or it reverses to make the other car proceed first. These behaviors result from the optimization, without relying
on hand-coded signaling strategies. Our user studies support the utility of our model when interacting with real users.
Keywords Planning for human–robot interaction · Mathematical models of human behavior · Autonomous driving
This is one of several papers published in Autonomous Robots comprising the “Special Issue on Robotics Science and Systems”.
This paper combines work from Sadigh et al. (2016b, c). It adds a
general formulation of the problem as a game, discusses its
limitations, and lays out the assumptions we make to reduce it to a
tractable problem. On the experimental side, it adds an analysis of the
adaptivity of the behaviors produced to initial conditions for both
offline and active estimation, an analysis of the benefits of active
estimation on the robot’s actual reward, and results on actively
estimating user intentions as opposed to just driving style.
Corresponding author: Dorsa Sadigh, dorsa@cs.stanford.edu
Department of Computer Science, Stanford University, Stanford, USA

1 Introduction
Currently, autonomous cars tend to be overly defensive and
obliviously opaque . When needing to merge into another
lane, they will patiently wait for another driver to pass first.
When stopped at an intersection and waiting for the driver on
the right to go, they will sit there unable to wave them by. They
are very capable when it comes to obstacle avoidance, lane
keeping, localization, active steering and braking (Urmson
et al. 2008 ; Levinson et al. 2011 ; Falcone et al. 2007 ,2008 ,
2007 ; Dissanayake et al. 2001 ; Leonard et al. 2008 ). But
when it comes to other human drivers, they tend to rely on
simplistic models: for example, assuming that other drivers
will be bounded disturbances (Gray et al. 2013 ; Raman et al.
2015 ), they will keep moving at the same velocity (Vitus and
Tomlin 2013 ; Luders et al. 2010 ; Sadigh and Kapoor 2015 ),
Fig. 1 We equip autonomous cars with a model of how humans will react to the car’s actions (a). We test the planner in user studies, where the car figures out that it can nudge into the human’s lane to check their driving style (b, c): if it gets evidence that they are attentive it merges in front, expecting that the human will slow down; else, it retracts back to its lane (c). We also see coordination behavior at an intersection, with the car planning to inch forward to find out more about how the human will act, or even inch backwards to incentivize the human to go through first (d)
or they will approximately follow one of a set of known
trajectories (Vasudevan et al. 2012; Hermes et al. 2009).
These models predict the trajectory of other drivers as
if those drivers act in isolation. They reduce human–robot
interaction to obstacle avoidance: the autonomous car’s task
is to do its best to stay out of the other drivers’ way. It will not nudge into the lane to test if the other driver yields, nor creep into the intersection to assert its turn for crossing.
In reality, the actions of an autonomous car affect the
actions of other drivers. Our goal is to leverage these
effects in planning to improve efficiency and coordina-
tion with human drivers.
We are not the first to propose that robot actions influence
human actions. Work in social navigation recognized this,
and addressed it by treating the human and the robot as being part of a team—a team that works together to make sure that each agent reaches their goal, and everyone avoids each
other (Trautman and Krause 2010 ; Trautman et al. 2013 ;
Trautman 2013 ; Kuderer et al. 2015 ). The robot computes
a coupled plan by optimizing the team’s objective jointly
over the human and robot plans, assuming the human will
follow their part of the plan, and re-planning at every step. Nikolaidis et al. (2015) recognized that the human might stray
away from the plan that the robot computed, and modeled the
human as switching to the robot’s plan at every time step with some probability.
We propose that when people stray away from the coupled plan, there is a fundamental reason for this: they have a different objective altogether. This is particularly true in
driving, where coupled planning would assume that the person is just as interested in the robot reaching its goal as they are in themselves reaching theirs—selfish drivers are likely just trying to get home and not get into an accident; they are optimizing for something different.
In this work, we explicitly account for the robot’s influence
on the person by modeling the person as optimizing their own objective or utility function in response to the robot’s actions.
We develop an optimization-based method for planning an
autonomous vehicle’s behavior in a manner that is cognizant of the effects it will have on human driver actions via this
model. This optimization leads to plans like the ones in Fig. 1.
Assuming a universal human model that the robot learns
offline, we see that the orange car in (c) decides to cut in
front of the human driver in order to more efficiently reach
its goal. It arrives at this plan by anticipating that taking this
action will cause the human to brake and make space for it. This comes in contrast to treating the person as an obstacle
that moves (b) and being more defensive. Since not all drivers
are the same, we also arm the robot with the ability to collect information about the human driver online. It then decides to nudge into the person’s lane (d) and only merges if it gets evidence that the person is paying attention; otherwise it retreats back to its lane. We find fascinating behaviors at intersections (e): the car inches forward to test human attention, or even
inches backwards to get the person to cross first through the
intersection. These can be interpreted as signaling behaviors,but they emerge out of optimizing to affect human actions,
without ever explicitly modeling human inferences.
We achieve this by planning in an underactuated dynamical system: the robot’s actions change not only robot state, but
also influence human actions and thus human state. We model
other drivers as acting approximately optimally according to some reward function that depends on state, human actions, as well as robot actions. We explore learning this human reward function offline, as well as online during the interaction: here, the robot has the opportunity to use its own actions to actively gather more information about the underlying reward. Our contributions are as follows [1]:
[1] A preliminary version of our results was reported in Sadigh et al. (2016a, b). This paper extends that work by providing more detailed discussion and experiments...
1. Formalism of Interaction with Drivers We begin by formalizing the interaction between a human and a robot as a partially observable stochastic game (POSG) in Sect. 2. The human and the robot can both act to change the state of the world, they have partial information because they don’t know each other’s reward functions, and they arrive at some tuple of policies that are at an equilibrium.
This formulation has two issues: intractability, especially
in continuous state and action spaces, and failing to capture human behavior, because humans tend to not follow Nash equilibria in day-to-day tasks (Hedden and Zhang 2002).
We introduce a simplification of this formulation to an
underactuated system. We assume that the robot decides on
a trajectory $u_R$, and the human computes a best response to $u_R$ (as opposed to trying to influence $u_R$ as would happen in
a game). This reduction enforces turn-taking, and provides us with a dynamics model in which the human observes (or
predicts) the robot’s actions prior to selecting her actions.
It maintains our ability to model the effects of the robot’s actions on the human, because the robot can influence which
actions the human takes by selecting actions that force a
particular response.
2. Approximate Optimization Solution for Known Human Model Assuming a known reward function for the human [which we learn offline through Inverse Reinforcement Learning (Ng et al. 2000; Abbeel and Ng 2005; Ziebart et al. 2008; Levine and Koltun 2012)], we derive an approximate solution for our system based on Model Predictive Control and a quasi-Newton optimization in Sect. 3. At every step, the robot replans a trajectory $u_R$ by reasoning about the optimization that the human would do based on a candidate $u_R$. We use implicit differentiation to obtain a gradient of the human’s trajectory with respect to the robot’s. This enables the robot to compute a plan in close to real-time.
3. Extension to Online Inference of Human Reward The solution based on estimating human reward offline from training data is useful in some cases, but ultimately every driver is different, and even the same driver is sometimes more or less aggressive, more or less attentive, and so on. In
Sect. 6, we thus also explore estimating the human reward
function online.
This turns the problem into a partially observable Markov
decision process (POMDP), with the human reward parameters as the hidden state. Prior work that incorporates some notion of human state into planning has thus far separated estimation and planning, always using the current estimate
of the human state to determine what the robot should do
(Javdani et al. 2015 ; Fern et al. 2007 ; Bandyopadhyay et al.
2013 ). Although efficient, these approximations sacrifice an
important aspect of POMDPs: the ability to actively gather
information. In recent years, many efforts have developed more efficient algorithms for solving continuous POMDPs (Chaudhari et al. 2013; Seiler et al. 2015; Agha-Mohammadi et al. 2014);
however, computing the transition function for our POMDP
is costly as it involves solving for the human’s best response. This renders the aforementioned approaches ineffective as they usually require either a succinct transition function or
an easily computable one.
Our work takes advantage of the underactuated system
to gather information about the human reward parameters.
Rather than relying on passive observations, the robot actually accounts for the fact that humans will react to their actions: it uses this knowledge to select actions that will trigger human reactions which in turn will clarify the internal state. This is in line with work in active information gathering in manipulation, safe exploration, or patrolling (Javdani et al. 2013; Atanasov et al. 2014; Atanasov 2015), now used
over human state as opposed to physical state.
4. Analysis of Planning in Human–Robot Driving Scenarios We present the consequences of planning in this
dynamical system in Sect. 4, showcasing behaviors that
emerge when rewarding the robot for certain effects on human state, like making the human slow down, change lanes, or go first through an intersection. We also show that such behaviors can emerge from simply rewarding the robot for reaching its goal state fast—the robot becomes more aggressive by leveraging its possible effects on human actions. This does not always happen: in Sect. 7, we show the robot maintains a belief over the human driver model, and starts nudging into their lane to figure out if they are going
to let the robot merge, or inching forward at a 4-way stop.
We test the planner in two user studies in a driving simulator: one with a human reward learned offline in Sect. 5, and one with online estimation in Sect. 8. Our results suggest that despite the approximation our algorithm makes, it significantly outperforms the “people as moving obstacles” interaction paradigm. We find that the robot achieves significantly higher reward when planning in the underactuated system, and that active information gathering does outperform passive estimation with real users.
Overall, this paper takes a step towards robots that account
for their effects on human actions in situations that are not entirely cooperative, and leverage these effects to coordinate
with people. Natural coordination and interaction strategies
are not hand-coded, but emerge out of planning in our model.
2 General formalism for human–robot interaction as a game
To enable a robot to autonomously generate behavior for
interaction and coordination with a human, we need to set up
a model that goes beyond a single agent acting in the physical world among moving obstacles, and consider two-agent models. One aspect that sets interaction with people apart
from typical multi-agent models like those used for multi-
robot planning is that we do not know or have direct control over what the human is trying to do and how. Furthermore, the human also does not know what the robot is trying to do
nor how the robot is going to do it.
Even the simplest formulation of the problem becomes
a two player game, and in what follows we introduce such
a formulation and expose some of its issues, including the
computational complexity of planning in such a model as well as the model’s inaccuracy in capturing human behavior.
We then propose an approximation that simplifies planning in
the game to planning in an underactuated system: the robot
has direct control over its actions, but also has a model for how its actions will influence those of the human.
Much of robotics research focuses on how to enable a robot
to achieve physical tasks, oftentimes in the face of perception and movement error—of partially observable worlds and
nondeterministic dynamics (Prentice and Roy 2009 ; Javdani
et al. 2013 ; Patil et al. 2015 ). Part of what makes human–robot
interaction difficult is that even if we assume the physical
world to be fully observable and deterministic, we are still
left with a problem as complex as an incomplete information
repeated two-player game. It is a game because it involves
multiple rational (or rather, approximately rational) agents
who can take actions to maximize their own (expected) utilities. It is repeated because unlike single-shot games, the agents act over a time horizon. It is incomplete information because the agents do not necessarily know each others’ reward functions (Aumann et al. 1995).
Partially Observable Stochastic Game Extrapolating from the (PO)MDP formulation of a robot acting in isolation to human–robot interaction, we can formulate interaction as a special case of a Partially-Observable Stochastic Game (POSG) (Hansen et al. 2004): there are two “players”, the robot $R$ and the human $H$; at every step $t$, they can apply control inputs $u_R^t \in U_R$ and $u_H^t \in U_H$; they each have a reward function, $r_R$ and $r_H$; and there is a state space $S$ with states $s$ consisting of both the physical state $x$, as well as reward parameters $\theta_R$ and $\theta_H$.
Including the reward parameters in the state is an unusual
trick, necessary here in order to ensure that the human and the
robot do not have direct access to each others’ reward functions, while maintaining a well-defined POSG: $R$ does not observe $\theta_H$, and $H$ does not observe $\theta_R$, but both agents can technically evaluate each reward at any state-control tuple $(s^t, u_R^t, u_H^t)$ just because $s$ contains the needed reward parameter information: $s = (x, \theta_R, \theta_H)$—if an agent knew the state, it could evaluate the other agent’s reward.

To simplify the physical world component and focus on
the interaction component, we assume fully observable physical states with deterministic dynamics. The observations in the game are thus the physical state $x$ and the controls that each agent applies at each time step. The dynamics model is deterministic, affecting $x$ through the control inputs
and leaving the reward parameters unchanged (a reasonable
assumption for relatively short interactions).
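The state and transition structure just described can be written down compactly. A minimal sketch (the one-dimensional physical state, the additive toy dynamics, and all names here are our own illustration, not the paper's notation):

```python
# Minimal data-structure sketch of the POSG state described above: a fully
# observable physical state x plus each agent's private reward parameters.
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    x: float          # physical state (toy: one-dimensional)
    theta_R: float    # robot reward parameters (not observed by H)
    theta_H: float    # human reward parameters (not observed by R)

def step(s: State, u_R: float, u_H: float) -> State:
    # deterministic dynamics: the controls change x; reward parameters stay
    # fixed, matching the assumption of relatively short interactions
    return State(s.x + u_R + u_H, s.theta_R, s.theta_H)

s1 = step(State(x=0.0, theta_R=1.0, theta_H=-1.0), u_R=0.5, u_H=0.25)
```

Keeping the reward parameters inside the state, but hidden from the other agent, is what makes each agent's reward evaluable at any state-control tuple without either agent observing the other's parameters directly.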
Aside 1 The POSG above is not necessarily the most direct
extension of single-agent planning in deterministic fully
observable state spaces to interactions. Collaborative interactions have been modeled simply as a centralized multi-agent system with a shared reward function (Kuderer et al. 2015) (i.e. $r_R = r_H$), but this reduces to treating the human as another robot that the system has full control over—essentially the system has more degrees of freedom to control, and there is no difference between the human DoFs and the robot DoFs. Unlike the POSG, it assumes that the human and the robot know each other’s reward, and even more, that their rewards are identical. This can be true of
specific collaborative tasks, but not of interactions in general: robots do not know exactly what humans want, humans do not know exactly what robots have been programmed to
optimize for, and their rewards might have common terms but
will not be identical. This happens when an autonomous carinteracts with other drivers or with pedestrians, and it even
happens in seemingly collaborative scenarios like rehabili-
tation, in which very long horizon rewards might be alignedbut not short-term interaction ones. The POSG formulationcaptures this intricacy of general interaction.
Aside 2 The POSG above is the simplest extension of the
single-agent models to interactions between a human and
a robot that do not share a reward function or know each
others’ rewards. More complex models might include richer state information (such as the human’s beliefs about the robot, trust, estimation of capability, mood, etc.), and might allow
it to change over time.
Limitations of the Game Formulation The POSG formulation is a natural way to characterize interaction from the perspective of MDP-like models, but is limited in two fundamental ways: (1) its computational complexity is prohibitive even in discrete state and action spaces (Bernstein et al. 2002; Hansen et al. 2004) and no methods are known to handle continuous spaces, and (2) it is not a good model for how people actually work—people do not solve games in everyday tasks when they are not playing chess (Hedden and Zhang 2002). Furthermore, solutions here are tuples of policies that are in a Nash equilibrium, and it is not clear what equilibrium to select.
3 Approximate solution as an underactuated system
To alleviate the limitations from above, we introduce an
approximate close to real-time solution, with a model of human behavior that does not assume that people compute equilibria of the game.
3.1 Assumptions that simplify the POSG
Our approximation makes several simplifying assumptions that turn the game into an offline learning phase in which the robot learns the human’s reward function, followed by an online planning phase in which the robot is solving an underactuated control problem:
Separation of Estimation & Control We separate the process of computing actions for the robot into two stages. First, the robot estimates the human reward function parameters $\theta_H$ offline. Second, the robot exploits this estimate as a fixed approximation to the human’s true reward parameters during planning. In the offline phase, we estimate $\theta_H$ from user data via Inverse Reinforcement Learning (Ng et al. 2000; Abbeel and Ng 2005; Ziebart et al. 2008; Levine and Koltun 2012). This method relies heavily on the approximation of
all humans to a constant set of reward parameters, but we
will relax this separation of estimation and control in Sect. 6.
Model Predictive Control (MPC) Solving the POSG requires planning to the end of the full time horizon. We reduce the computation required by planning for a shorter horizon of $N$ time steps. We execute the control only for the first time step, and then re-plan for the next $N$ at the next time step (Camacho and Alba 2013).
Let $\mathbf{x} = (x^1, \ldots, x^N)^\top$ denote a sequence of states over a finite horizon, $N$, and let $\mathbf{u}_H = (u_H^1, \ldots, u_H^N)^\top$ and $\mathbf{u}_R = (u_R^1, \ldots, u_R^N)^\top$ denote a finite sequence of continuous control inputs for the human and robot, respectively. We define $R_R$ as the robot’s reward over the finite MPC time horizon:

$$R_R(x^0, \mathbf{u}_R, \mathbf{u}_H) = \sum_{t=1}^{N} r_R(x^t, u_R^t, u_H^t), \quad (1)$$

where $x^0$ denotes the present physical state at the current iteration, and each state thereafter is obtained from the previous state and the controls of the human and robot using a given dynamics model, $f$.
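The receding-horizon scheme and finite-horizon reward above can be sketched in a few lines. Everything concrete here (the 1-D dynamics $f$, the quadratic per-step reward, the constant-sequence candidate set, and the horizon $N = 4$) is an assumption made for illustration, not the paper's implementation:

```python
# Toy receding-horizon (MPC) loop: plan over N steps, execute only the first
# control, then re-plan from the new state. Dynamics and reward are stand-ins.

N = 4  # planning horizon (assumed for the example)

def f(x, u):
    # toy 1-D dynamics: the control directly moves the state
    return x + u

def r_R(x, u):
    # toy per-step robot reward: be near goal 10, with a small effort penalty
    return -(x - 10.0) ** 2 - 0.1 * u ** 2

def horizon_reward(x0, u_seq):
    # finite-horizon reward as in Eq. (1): sum of per-step rewards
    total, x = 0.0, x0
    for u in u_seq:
        x = f(x, u)
        total += r_R(x, u)
    return total

def plan(x0):
    # brute-force over constant control sequences (illustration only)
    candidates = [(-1.0,) * N, (0.0,) * N, (1.0,) * N]
    return max(candidates, key=lambda seq: horizon_reward(x0, seq))

x = 0.0
for _ in range(12):
    u_seq = plan(x)
    x = f(x, u_seq[0])   # MPC: apply only the first planned control
```

The state climbs toward the goal and then holds: replanning at every step is what lets the controller react when the lookahead says that pushing further no longer pays.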
At each iteration, we desire to find the sequence $\mathbf{u}_R$ which maximizes the reward of the robot, but this reward ostensibly depends on the actions of the human. The robot might attempt to influence the human’s actions, and the human, optimizing its own reward function, might likewise attempt to influence the actions of the robot. Despite our reduction to a finite time horizon, the game formulation still demands computing equilibria to the problem. Our core assumption, which we
discuss next, is that this is not required for most interactions:
that a simpler model of what the human does suffices.
Simplification of the Human Model To avoid computing these equilibria, we propose to model the human as responding rationally to some fixed extrapolation of the robot’s actions. At every time step $t$, $H$ computes a simple estimate of $R$’s plan for the remaining horizon, $\tilde{\mathbf{u}}_R^{t+1:N}$, based on the robot’s previous actions $\mathbf{u}_R^{0:t}$. Then the human computes its plan $\mathbf{u}_H$ as a best response (Fudenberg and Tirole 1991) to this estimate. We remark that with this assumption the human is not modeled as a passive agent since the reward function can model the behavior of any active agent, if allowed to have arbitrary complexity. With this simplification, we reduce the general game to a Stackelberg competition: the human computes its best outcome while holding the robot’s plan fixed. This simplification is justified in practice because usually
the agents in driving scenarios are competing with each other
rather than collaborating. Solutions to Stackelberg games are in general more conservative in competitive regimes since an information advantage is given to the second player. As a result, the quality of a good solution found in a Stackelberg game model does not decrease in practice. We additionally use a short time horizon, so not much is lost by this approximation. But fundamentally, we do assume that people would not try to influence the robot’s actions, and this is a limitation when compared to solving the POSG.
Let $R_H$ be the human reward over the time horizon:

$$R_H(x^0, \mathbf{u}_R, \mathbf{u}_H) = \sum_{t=1}^{N} r_H(x^t, u_R^t, u_H^t), \quad (2)$$

then we can compute the control inputs of the human for the remainder of the horizon by:

$$u_H^t(x^0, \mathbf{u}_R^{0:t}, \tilde{\mathbf{u}}_R^{t+1:N}) = \arg\max_{\mathbf{u}_H^{t+1:N}} R_H\big(x^t, \tilde{\mathbf{u}}_R^{t+1:N}, \mathbf{u}_H^{t+1:N}\big). \quad (3)$$
This human model would certainly not work well in adver-
sarial scenarios, but our hypothesis, supported by our results,
is that it is useful enough in day-to-day tasks to enable robots to be more effective and more fluent interaction partners.
In our work, we propose to make the human’s estimate $\tilde{\mathbf{u}}_R$ equal to the actual robot control sequence $\mathbf{u}_R$. Our assumption that the time horizon is short enough that the human can effectively extrapolate the robot’s course of action motivates this decision. With this presumption, the human’s plan becomes a function of the initial state and robot’s true plan:
$$\mathbf{u}_H^*(x^0, \mathbf{u}_R) = \arg\max_{\mathbf{u}_H} R_H(x^0, \mathbf{u}_R, \mathbf{u}_H). \quad (4)$$
This is now an underactuated system: the robot has direct control over (can actuate) $\mathbf{u}_R$ and indirect control over (cannot actuate but does affect) $\mathbf{u}_H$. However, the dynamics model in our setup is more sophisticated than in typical underactuated systems because it models the response of the humans to the robot’s actions. Evaluating the dynamics model requires solving for the optimal human response, $\mathbf{u}_H^*$.
The system is also a special case of an MDP, with the state
as in the POSG, the actions being the actions of the robot, and the world dynamics being dictated by the human’s response and the resulting change on the world from both human and
robot actions.
The robot can now plan in this system to determine which
$\mathbf{u}_R$ would lead to the best outcome for itself:

$$\mathbf{u}_R^* = \arg\max_{\mathbf{u}_R} R_R\big(x^0, \mathbf{u}_R, \mathbf{u}_H^*(x^0, \mathbf{u}_R)\big). \quad (5)$$
3.2 Planning with Quasi-Newton optimization
Despite the reduction to a single-agent complete-information underactuated system, the dynamics remain too complex to solve in real time. We lack an analytical form for $u^*_H(x^0, u_R)$, which forces us to solve (4) each time we evaluate the dynamics.
Assuming a known human reward function $r_H$ [which we will obtain later through Inverse Reinforcement Learning (IRL); see Ng et al. (2000), Abbeel and Ng (2005), Ziebart et al. (2008) and Levine and Koltun (2012)], we can solve (5) locally, using gradient-based methods. Our main contribution is agnostic to the particular optimization method, but we use L-BFGS (Andrew and Gao 2007), a quasi-Newton method that stores an approximate inverse Hessian implicitly, resulting in fast convergence.
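As a concrete illustration of this nested optimization, the sketch below solves a toy one-dimensional version of (4) and (5) with L-BFGS via SciPy. The quadratic rewards are invented for illustration and are not the paper's learned models.

```python
import numpy as np
from scipy.optimize import minimize

def R_H(u_R, u_H):
    # Toy human reward: prefers u_H near 1, but also near the robot's control.
    return -(u_H - 1.0) ** 2 - 0.5 * (u_H - u_R) ** 2

def u_H_star(u_R):
    # Inner problem (4): human best response to the robot's plan.
    res = minimize(lambda u: -R_H(u_R, u[0]), x0=[0.0], method="L-BFGS-B")
    return res.x[0]

def R_R(u_R):
    # Outer objective (5): robot reward evaluated at the human's best response.
    u_H = u_H_star(u_R)
    return -(u_R - 2.0) ** 2 - 0.1 * u_H ** 2

res = minimize(lambda u: -R_R(u[0]), x0=[0.0], method="L-BFGS-B")
print(res.x)  # robot plan that accounts for the induced human response
```

Each evaluation of the outer objective solves the inner best-response problem, mirroring how evaluating the dynamics in the paper requires solving (4).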
To perform the local optimization, we need the gradient of the objective in (5) with respect to $u_R$:

$$\frac{\partial R_R}{\partial u_R} = \frac{\partial R_R}{\partial u_H}\frac{\partial u^*_H}{\partial u_R} + \frac{\partial R_R}{\partial u_R} \quad (6)$$

We can compute both $\frac{\partial R_R}{\partial u_H}$ and $\frac{\partial R_R}{\partial u_R}$ symbolically through back-propagation, because we have a representation of $R_R$ in terms of $u_H$ and $u_R$.
What remains, $\frac{\partial u^*_H}{\partial u_R}$, is difficult to compute because $u^*_H$ is technically the outcome of a global optimization. To compute $\frac{\partial u^*_H}{\partial u_R}$, we use the method of implicit differentiation. Since $R_H$ is a smooth function whose optimum can be attained, we conclude that for the unconstrained optimization in (4), the gradient of $R_H$ with respect to $u_H$ evaluates to 0 at its optimum $u^*_H$:

$$\frac{\partial R_H}{\partial u_H}\big(x^0, u_R, u^*_H(x^0, u_R)\big) = 0 \quad (7)$$
Now, we differentiate the expression in (7) with respect to $u_R$:

$$\frac{\partial^2 R_H}{\partial u_H^2}\frac{\partial u^*_H}{\partial u_R} + \frac{\partial^2 R_H}{\partial u_H \partial u_R}\frac{\partial u_R}{\partial u_R} = 0 \quad (8)$$
Finally, we solve for a symbolic expression of $\frac{\partial u^*_H}{\partial u_R}$:

$$\frac{\partial u^*_H}{\partial u_R} = \left[-\frac{\partial^2 R_H}{\partial u_H \partial u_R}\right]\left[\frac{\partial^2 R_H}{\partial u_H^2}\right]^{-1} \quad (9)$$

and insert it into (6), providing an expression for the gradient $\frac{\partial R_R}{\partial u_R}$.
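The implicit-differentiation formula (9) can be sanity-checked numerically on a toy quadratic human reward for which the best response has a closed form; the coupling constant `a` below is arbitrary.

```python
import numpy as np

a = 0.7  # arbitrary coupling between robot and human controls

def R_H(u_R, u_H):
    # Toy quadratic human reward; its argmax in u_H is u_H* = a * u_R.
    return -(u_H - a * u_R) ** 2

# Second derivatives of R_H (constant, since R_H is quadratic).
d2_uHuH = -2.0       # d²R_H / du_H²
d2_uHuR = 2.0 * a    # d²R_H / du_H du_R

# Equation (9): du_H*/du_R = [-d²R_H/du_H du_R][d²R_H/du_H²]^{-1}
grad_implicit = -d2_uHuR * (1.0 / d2_uHuH)

# Finite-difference check through the (closed-form) inner argmax.
eps = 1e-5
u_star = lambda u_R: a * u_R
grad_fd = (u_star(1.0 + eps) - u_star(1.0 - eps)) / (2 * eps)

print(grad_implicit, grad_fd)  # both recover the true sensitivity a
```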
3.3 Offline estimation of human reward parameters
Thus far, we have assumed access to $r_H(x^t, u^t_R, u^t_H)$. In our implementation, we learn this reward function from human data. We collect demonstrations of a driver in a simulation environment, and use Inverse Reinforcement Learning (Ng et al. 2000; Abbeel and Ng 2005; Ziebart et al. 2008; Levine and Koltun 2012; Shimosaka et al. 2014; Kuderer et al. 2015) to recover a reward function that explains the demonstrations.
To handle continuous state and action spaces, and to cope with noisy demonstrations that are perhaps only locally optimal, we use continuous inverse optimal control with locally optimal examples (Levine and Koltun 2012).
We parametrize the human reward function as a linear combination of features:

$$r_H(x^t, u^t_R, u^t_H) = \theta_H^\top \phi(x^t, u^t_R, u^t_H), \quad (10)$$
and apply the principle of maximum entropy (Ziebart et al. 2008; Ziebart 2010) to define a probability distribution over human demonstrations $u_H$, with trajectories that have higher reward being more probable:

$$P(u_H \mid x^0, \theta_H) = \frac{\exp\big(R_H(x^0, u_R, u_H)\big)}{\int \exp\big(R_H(x^0, u_R, \tilde{u}_H)\big)\, d\tilde{u}_H}. \quad (11)$$
We then optimize for the weights $\theta_H$ in the reward function that make the human demonstrations the most likely:

$$\max_{\theta_H} P\big(u_H \mid x^0, \theta_H\big) \quad (12)$$
We approximate the partition function in (11) following Levine and Koltun (2012), by computing a second-order Taylor approximation around the demonstration:
Fig. 2 Features used in IRL for the human driven vehicle; warmer colors correspond to higher reward. We illustrate features corresponding to (a) respecting road boundaries, (b) holding lanes, and (c) avoiding collisions with other cars
$$R_H(x^0, u_R, \tilde{u}_H) \simeq R_H(x^0, u_R, u_H) + (\tilde{u}_H - u_H)^\top \frac{\partial R_H}{\partial u_H} + (\tilde{u}_H - u_H)^\top \frac{\partial^2 R_H}{\partial u_H^2}(\tilde{u}_H - u_H), \quad (13)$$

which results in the integral in (11) reducing to a Gaussian integral with a closed-form solution. See Levine and Koltun (2012) for more details.
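To see why the approximation helps, note that for a reward that is exactly quadratic in $\tilde{u}_H$ the expansion in (13) is exact, and the partition integral in (11) is a standard Gaussian integral. A minimal numeric check, with illustrative constants `k` and `m`:

```python
import numpy as np

k, m = 2.0, 0.3  # illustrative curvature and mode of a quadratic reward

def R_H(u):
    # Quadratic reward: the second-order Taylor expansion is exact here.
    return -0.5 * k * (u - m) ** 2

# Closed form of the Gaussian integral: ∫ exp(-0.5*k*(u-m)^2) du = sqrt(2*pi/k)
Z_closed = np.sqrt(2 * np.pi / k)

# Numeric quadrature over a wide, fine grid for comparison.
u = np.linspace(m - 10, m + 10, 200001)
Z_numeric = np.sum(np.exp(R_H(u))) * (u[1] - u[0])

print(Z_closed, Z_numeric)  # the two agree to high precision
```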
We display a heat map of the features we used in Fig. 2. The warmer colors correspond to higher rewards. In addition to the features shown in the figure, we include a quadratic function of the speed to capture efficiency in the objective. The five features are:
– $\phi_1 \propto c_1 \cdot \exp(-c_2 \cdot d^2)$: distance to the boundaries of the road, where $d$ is the distance between the vehicle and the road boundaries, and $c_1$ and $c_2$ are appropriate scaling factors, as shown in Fig. 2a.
– $\phi_2$: distance to the middle of the lane, specified similarly to $\phi_1$, as shown in Fig. 2b.
– $\phi_3 = (v - v_{max})^2$: higher speed for moving forward, where $v$ is the velocity of the vehicle and $v_{max}$ is the speed limit.
– $\phi_4 = \beta_H \cdot n$: heading; we would like the vehicle to have a heading along the road, where $\beta_H$ is the heading of H and $n$ is a normal vector along the road.
– $\phi_5$: collision avoidance; a non-spherical Gaussian over the distance between H and R, whose major axis is along the robot's heading, as shown in Fig. 2c.
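The feature list above might be sketched in code as follows; all scaling constants, lane geometry, and Gaussian widths are placeholder values, not the ones learned in the paper.

```python
import numpy as np

c1, c2 = 1.0, 5.0              # placeholder scaling factors for phi1/phi2
v_max = 1.0                    # placeholder speed limit
lane_center = 0.0              # placeholder lane geometry

def phi1(d):
    # Gaussian bump at the road boundary (d = distance to the boundary);
    # its learned weight determines whether it attracts or repels.
    return c1 * np.exp(-c2 * d ** 2)

def phi2(x_lat):
    # Distance to the middle of the lane, shaped like phi1.
    return c1 * np.exp(-c2 * (x_lat - lane_center) ** 2)

def phi3(v):
    # Speed feature: squared deviation from the speed limit.
    return (v - v_max) ** 2

def phi4(beta_H, n):
    # Heading feature: alignment of the heading with the road direction n.
    return float(np.dot(beta_H, n))

def phi5(p_H, p_R, heading_R, sx=0.5, sy=0.2):
    # Collision avoidance: non-spherical Gaussian over the H-R offset,
    # with its major axis along the robot's heading.
    d = p_H - p_R
    c, s = np.cos(heading_R), np.sin(heading_R)
    along = c * d[0] + s * d[1]
    across = -s * d[0] + c * d[1]
    return float(np.exp(-(along ** 2 / sx ** 2 + across ** 2 / sy ** 2)))

def r_H(features, theta):
    # Linear reward (10): r_H = theta^T phi
    return float(np.dot(theta, features))
```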
We collected demonstrations of a single human driving for approximately an hour in an environment with multiple autonomous cars, which followed precomputed routes. Despite the simplicity of our features and robot actions during the demonstrations, the learned human model proved sufficient for the planner to produce human-interpretable behavior (case studies in Sect. 4), and actions which affected human action in the desired way (user study in Sect. 5).

3.4 Implementation details
In our implementation, we used the software package Theano (Bergstra et al. 2010; Bastien et al. 2012) to symbolically compute all Jacobians and Hessians. Theano optimizes the computation graph into efficient C code, which is crucial for real-time applications.
This implementation enables us to solve each step of the optimization in equation (5) in approximately 0.3 s for horizon length $N = 5$ on a 2.3 GHz Intel Core i7 processor with 16 GB RAM. Future work will focus on achieving better computation time and a longer planning horizon.
4 Case studies with offline estimation
We noted earlier that state-of-the-art autonomous driving plans conservatively because of its simple assumptions regarding the environment and vehicles on the road. In our experiments, we demonstrate that an autonomous vehicle can purposefully affect human drivers, and can use this ability to gather information about the human's driving style and goals.
In this section, we introduce 3 driving scenarios, and show the result of our planner assuming a simulated human driver, highlighting the behavior that emerges from different robot reward functions. In the next section, we test the planner with real users and measure the effects of the robot's plan. Figure 3 illustrates our three scenarios, and contains images from the actual user study data.
4.1 Conditions for analysis across scenarios
In all three scenarios, we start from an initial position of the vehicles on the road, as shown in Fig. 3. In the control condition, we give the car the reward function to avoid collisions and have high velocity. We refer to this as $R_{control}$. In the experimental condition, we augment this reward function with a term corresponding to a desired human action (e.g. low speed, lateral position, etc.). We refer to this as $R_{control} + R_{affect}$. Sections 4.3 through 4.5 contrast the two plans for each of our three scenarios. Section 4.6 shows what happens when, instead of explicitly giving the robot a reward function designed to trigger certain effects on the human, we simply task the robot with reaching a destination as quickly as possible.
4.2 Driving simulator
We use a simple point-mass model of the car's dynamics. We define the physical state of the system $x = [x\ y\ \psi\ v]^\top$, where $x, y$ are the coordinates of the vehicle, $\psi$ is the heading, and $v$ is the speed. We let $u = [u_1\ u_2]^\top$ represent the control input, where $u_1$ is the steering input and $u_2$ is the acceleration.
Fig. 3 Driving scenarios. In (a), the car plans to merge in front of the human in order to make them slow down. In (b), the car plans to direct the human to another lane, and uses its heading to choose which lane the human will go to. In (c), the car plans to back up slightly in order to make the human proceed first at the intersection. None of these plans use any hand-coded strategies. They emerge out of optimizing with a learned model of how humans react to robot actions. In the training data for this model, the learned model was never exposed to situations where another car stopped at an orientation as in (b), or backed up as in (c). However, by capturing human behavior in the form of a reward, the model is able to generalize to these situations, enabling the planner to find creative ways of achieving the desired effects
We denote the friction coefficient by $\mu$. We can write the dynamics model:

$$[\dot{x}\ \dot{y}\ \dot{\psi}\ \dot{v}] = [v \cdot \cos(\psi) \quad v \cdot \sin(\psi) \quad v \cdot u_1 \quad u_2 - \mu \cdot v] \quad (14)$$
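A forward-Euler rollout of the dynamics in (14) is straightforward; the time step and friction coefficient below are illustrative, not the paper's values.

```python
import numpy as np

mu, dt = 0.1, 0.1  # illustrative friction coefficient and time step

def step(state, u):
    # One Euler step of the point-mass dynamics (14).
    x, y, psi, v = state
    u1, u2 = u  # steering, acceleration
    dx = np.array([v * np.cos(psi),   # x-dot
                   v * np.sin(psi),   # y-dot
                   v * u1,            # psi-dot
                   u2 - mu * v])      # v-dot
    return state + dt * dx

state = np.array([0.0, 0.0, 0.0, 1.0])  # heading along +x at unit speed
for _ in range(10):
    state = step(state, (0.0, 0.0))     # coast with zero control input
print(state)  # x increases while the speed decays under friction
```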
4.3 Scenario 1: Make human slow down
In this highway driving setting, we demonstrate that an autonomous vehicle can plan to cause a human driver to slow down. The vehicles start at the initial conditions depicted on the left in Fig. 3a, in separate lanes. In the experimental condition, we augment the robot's reward with the negative of the square of the human velocity, which encourages the robot to slow the human down.
Figure 3a contrasts our two conditions. In the control
condition, the human moves forward uninterrupted. In the
experimental condition, however, the robot plans to move in
front of the person, anticipating that this will cause the human
to brake.
4.4 Scenario 2: Make human go left/right
In this scenario, we demonstrate that an autonomous vehicle can plan to affect the human's lateral location, making the human switch lanes. The vehicles start at the initial conditions depicted on the left in Fig. 3b, in the same lane, with the robot ahead of the human. In the experimental condition, we augment the robot's reward with the lateral position of the human, in two ways, to encourage the robot to make the human go either left (orange border image) or right (blue border image). The two reward additions are shown in Fig. 4a, b.

Fig. 4 Heat map of the reward functions in scenarios 2 and 3. The warmer colors show higher reward values. In (a, b), the reward function of the autonomous vehicle is plotted, which is a function of the human driven vehicle's position. In order to affect the driver to go left, the reward is higher on the left side of the road in (a), and to affect the human to go right in (b), the rewards are higher on the right side of the road. In (c), the reward of the autonomous vehicle is plotted for scenario 3 with respect to the position of the human driven car. Higher rewards correspond to making the human cross the intersection
Figure 3b contrasts our two conditions. In the control condition, the human moves forward, and might decide to change lanes. In the experimental condition, however, the robot plans to intentionally occupy two lanes (using either a positive or negative heading), anticipating that this will make the human avoid collision by switching into the unoccupied lane.
4.5 Scenario 3: Make human go first
In this scenario, we demonstrate that an autonomous vehicle can plan to cause the human to proceed first at an intersection.
The vehicles start at the initial conditions depicted on the left in Fig. 3c, with both human and robot stopped at the 4-way intersection. In the experimental condition, we augment the robot's reward with a feature based on the $y$ position of the human car $y_H$ relative to the middle of the intersection $y_0$. In particular, we used the hyperbolic tangent of the difference, $\tanh(y_H - y_0)$. The reward addition is shown in Fig. 4c.
Figure 3c contrasts our two conditions. In the control condition, the robot proceeds in front of the human. In the experimental condition, however, the robot plans to intentionally reverse slightly, anticipating that this will induce the human to cross first. We might interpret such a trajectory as communicative behavior, but communication was never explicitly encouraged in the reward function. Instead, the goal of affecting human actions led to this behavior.
Reversing at an intersection is perhaps the most surprising result of the three scenarios, because it is not an action human drivers take. In spite of this novelty, our user study suggests that human drivers respond in the expected way: they proceed through the intersection. Further, pedestrians sometimes exhibit behavior like the robot's, stepping back from an intersection in order to let a car pass first.
4.6 Behaviors also emerge from efficiency
Thus far, we have explicitly encoded a desired effect on human actions, and optimized it as a component of the robot's reward. We have also found, however, that behaviors like those we have seen so far can emerge out of the need for efficiency.
Figure 5 (bottom) shows the generated plan for when the robot is given the goal to reach a point in the left lane as quickly as possible (reward shown in Fig. 6). By modeling the effects its actions have on the human actions, the robot plans to merge in front of the person, expecting that they will slow down.
Fig. 5 A time lapse where the autonomous vehicle's goal is to reach a final point in the left lane. In the top scenario, the autonomous vehicle has a simple model of the human driver that does not account for the influence of its actions on the human actions, so it acts more defensively, waiting for the human to pass first. In the bottom, the autonomous vehicle uses the learned model of the human driver, so it acts less defensively and reaches its goal faster

Fig. 6 Heat map of the reward function for reaching a final goal at the top left of the road. As shown in the figure, the goal position is darker, showing more reward for reaching that point
In contrast, the top of the figure shows the generated plan for when the robot uses a simple (constant velocity) model of the person. In this case, the robot assumes that merging in front of the person can lead to a collision, and defensively waits for the person to pass, merging behind them.
We hear about this behavior often in autonomous cars today: they are defensive. Enabling them to plan in a manner that is cognizant that they can affect other drivers' actions can make them more efficient at achieving their goals.
4.7 The robot behavior adapts to the situation
Throughout the case studies, we see examples of coordina-tion behavior that emerges out of planning in our system:going in front of someone knowing they will brake, slowing
and nudging into another lane to incentivize a lane change,
or backing up to incentivize that the human proceeds firstthrough an intersection. Such behaviors could possibly behand-engineered for those particular situations rather than
autonomously planned. However, we advocate that the need
to plan comes from versatility: from the fact that the plannercan adapt the exact strategy to each situation.
With this work, we are essentially shifting the design burden from designing policies to designing reward functions. We still need to decide what we want the robot to do: do we want it to be selfish, do we want it to be extra polite and try to get every human to go first, or do we want it to be somewhere in between? Our paper does not answer this question, and designing good reward functions remains challenging. However, this work does give us the tools to autonomously generate (some of) the strategies needed to then optimize such reward functions when interacting with people. With policies, we would be specifying that the car should inch forward or backward from the intersection, by how much and at what velocity, how this depends on where the other cars are, and how it depends on the type of intersection. With this work, barring local-optima issues and the fact that human models could always be improved, all that we need to specify is the regular reward that we would give an autonomous car, or, if we want it to purposefully influence human behavior, the desired outcome. The car figures out how to achieve it, adapting its actions to the different settings.

Fig. 7 The robot adapts its merging behavior depending on the relative position of the person: it does not always cut the person off; sometimes it merges behind the person, and if it starts too close (depending on how the reward function is set up) it will not merge at all
Figure 7 shows a spectrum of behaviors for the robot depending on where it starts relative to the human: from merging behind the person, to not merging, to merging in front.
5 User study with offline estimation
The previous section showed the robot's plans when interacting with a simulated user that perfectly fits the robot's model of the human. Next, we present the results of a user study that evaluates whether the robot can successfully have the desired effects on real users.
5.1 Experimental design
We use the same 3 scenarios as in the previous section.

Manipulated Factors We manipulate a single factor: the reward that the robot is optimizing, as described in Sect. 4.1. This leads to two conditions: the experimental condition, where the robot is encouraged to have a particular effect on human state through the reward $R_{control} + R_{affect}$, and the control condition, where that aspect is left out of the reward function and the robot is optimizing only $R_{control}$ (three conditions for Scenario 2, where we have two experimental conditions, one for the left case and one for the right case).
Dependent Measures For each scenario, we measure the value along the user trajectory of the feature added to the reward function for that scenario, $R_{affect}$. Specifically, we measure the human's negative squared velocity in Scenario 1, the human's $x$-axis location relative to center in Scenario 2, and whether the human went first or not through the intersection in Scenario 3 (i.e. a filtering of the feature that normalizes for differences in timing among users and measures the desired objective directly).
Hypothesis We hypothesize that our method enables the robot to achieve the effects it desires not only in simulation, but also when interacting with real users:

The reward function that the robot is optimizing has a significant effect on the measured reward during interaction. Specifically, $R_{affect}$ is higher, as planned, when the robot is optimizing for it.
Subject Allocation We recruited 10 participants (2 female, 8 male). All participants held a driver's license and had at least 2 years of driving experience. We ran our experiments using a 2D driving simulator we developed, with driver input provided through a steering wheel and pedals.
5.2 Analysis
Scenario 1 A repeated measures ANOVA showed the squared speed to be significantly lower in the experimental condition than in the control condition ($F(1, 160) = 228.54$, $p < 0.0001$). This supports our hypothesis: the human moved slower when the robot planned to have this effect on the human.
We plot the speed and latitude profile of the human driven vehicle over time for all trajectories in Fig. 8. Figure 8a shows the speed profile of the control condition trajectories in gray, and of the experimental condition trajectories in orange. Figure 8b shows the mean and standard error for each condition. In the control condition, human squared speed keeps increasing. In the experimental condition, however, by merging in front of the human, the robot is triggering the human to brake and reduce speed, as planned. The purple trajectory represents a simulated user that perfectly matches the robot's model, showing the ideal case for the robot. The real interaction moves significantly in the desired direction, but does not perfectly match the ideal model, since real users do not act exactly as the model would predict.

The figure also plots the $y$ position of the vehicles along time, showing that the human has not travelled as far forward in the experimental condition.
Scenario 2 A repeated measures ANOVA showed a significant effect for the reward factor ($F(2, 227) = 55.58$, $p < 0.0001$). A post-hoc analysis with Tukey HSD showed that both experimental conditions were significantly different from the control condition, with the user car going more to the left than in the control condition when $R_{affect}$ rewards left user positions ($p < 0.0001$), and more to the right in the other case ($p < 0.001$). This supports our hypothesis.

Fig. 8 Speed profile and latitude of the human driven vehicle for Scenario 1. The first column shows the speed of all trajectories, with the mean and standard errors in the bottom graph. The second column shows the latitude of the vehicle over time, similarly with the mean and standard errors. The gray trajectories correspond to the control condition, and the orange trajectories correspond to the experimental condition: the robot decides to merge in front of the users and succeeds at slowing them down. The purple plot corresponds to a simulated user that perfectly matches the model that the robot is using
We plot all the trajectories collected from the users in Fig. 9. Figure 9a shows the control condition trajectories in gray, while the experimental condition trajectories are shown in orange (for left) and blue (for right). By occupying two lanes, the robot triggers an avoid behavior from the users in the third lane. Here again, the purple curves show a simulated user, i.e. the ideal case for the robot.
Scenario 3 An ordinal logistic regression with user as a random factor showed that significantly more users went first in the intersection in the experimental condition than in the baseline ($\chi^2(1, 129) = 106.41$, $p < 0.0001$). This supports our hypothesis.

Figure 10 plots the $y$ position of the human driven vehicle with respect to the $x$ position of the autonomous vehicle. For trajectories that have a higher $y$ position for the human vehicle than the $x$ position for the robot, the human car has crossed the intersection before the autonomous vehicle. The lines corresponding to these trajectories travel above the origin, which is shown with a blue square in this figure. The mean of the orange lines travels above the origin, which means that the autonomous vehicle has successfully affected the humans to cross first. The gray lines travel below the origin, i.e. the human crossed second.

Fig. 9 Trajectories of the human driven vehicle for Scenario 2. The first column (a) shows all the trajectories, and the second column (b) shows the mean and standard error. Orange (blue) indicates conditions where the reward encouraged the robot to affect the user to go left (right)

Fig. 10 Plot of $y_H$ with respect to $x_R$. The orange curves correspond to when the autonomous vehicle affects the human to cross the intersection first. The gray curves correspond to the nominal setting
Overall, our results suggest that the robot was able to affect the human state in the desired way, even though it does not have a perfect model of the human.
6 Extension to online estimation of the human model
We have thus far in our approximate solution treated the human's reward function as estimated once, offline. This has worked well in our user study on seeking specific coordination effects on the human, like slowing down or going first through the intersection. But in general, this is bound to run into problems, because not all people behave according to the same estimated $\theta_H$.
Different drivers have different driving styles. Some are very defensive, more so than our learned model. Others are much more aggressive, and for instance would not actually brake when the car merges in front of them. Even for the same driver, their style might change over time, for instance when they get distracted by their phone.
In this section we relax our assumption of an offline estimation of the human's reward parameters $\theta_H$. Instead, we explore estimating this online. We introduce an algorithm which maintains a belief over a space of candidate reward functions, and enables the robot to perform inference over this space throughout the interaction. We maintain tractability by clustering possible $\theta_H$s into a few options that the robot maintains a belief over.
6.1 A POMDP with human reward as the hidden variable
The human's actions are influenced by their internal reward parameters $\theta_H$, which the robot does not directly observe. So far, we estimated $\theta_H$ offline and solved an underactuated system, a special case of an MDP. Now, we want to be able to adapt our estimate of $\theta_H$ online, during interaction. This turns the problem into a partially observable Markov decision process (POMDP) with $\theta_H$ as the hidden state. By putting $\theta_H$ in the state, we now have a known dynamics model, as in the underactuated system before, for the robot and the human state, and we assume $\theta_H$ remains fixed regardless of the robot's actions.
If we could solve the POMDP, the robot would estimate $\theta_H$ from the human's actions, optimally trading off between exploiting its current belief over $\theta_H$ and actively taking information-gathering actions intended to cause human reactions that result in a better estimate of $\theta_H$.
Because POMDPs cannot be solved tractably, several approximations have been proposed for similar problem formulations (Javdani et al. 2015; Lam et al. 2015; Fern et al. 2007). These approximations passively estimate the human internal state, and exploit the belief to plan robot actions.²
In this work, we take the opposite approach: we focus explicitly on active information gathering. Our formulation enables the robot to choose to actively probe the human, and thereby improve its estimate of $\theta_H$. We leverage this method in conjunction with exploitation methods, but the algorithm we present may also be used alone if estimation of the human internal state (reward parameters) is the robot's primary objective.
² One exception is Nikolaidis et al. (2016), who propose to solve the full POMDP, albeit for discrete and not continuous state and action spaces.

6.2 Simplification to information gathering
We denote a belief in the value of the hidden variable, $\theta$, as a distribution $b(\theta)$, and update this distribution according to the likelihood of observing a particular human action, given the state of the world and the human internal state:

$$b^{t+1}(\theta) \propto b^t(\theta) \cdot P(u^t_H \mid x^t, u_R, \theta). \quad (15)$$
In order to update the belief $b$, we require an observation model. Similar to before, we assume that actions with lower reward are exponentially less likely, building on the principle of maximum entropy (Ziebart et al. 2008):

$$P(u_H \mid x, u_R, \theta) \propto \exp\big(R^\theta_H(x^0, u_R, u_H)\big). \quad (16)$$
To make explicit our emphasis on taking actions which effectively estimate $\theta$, we redefine the robot's reward function to include an information gain term, i.e., the difference between the entropies of the current and updated beliefs: $H(b^t) - H(b^{t+1})$. The entropy over the belief, $H(b)$, evaluates to:

$$H(b) = -\frac{\sum_\theta b(\theta) \log(b(\theta))}{\sum_\theta b(\theta)}. \quad (17)$$
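A minimal sketch of the belief update (15)-(16) and the entropy and information-gain computation (17), over two hypothetical driver styles; the scalar likelihood model below is invented for illustration and is not the paper's learned reward.

```python
import numpy as np

def likelihood(u_H, theta):
    # Toy max-entropy observation model in the spirit of (16): attentive
    # drivers brake (u_H near -1) when probed; distracted drivers do not.
    target = -1.0 if theta == "attentive" else 0.0
    return np.exp(-(u_H - target) ** 2)

def update(belief, u_H):
    # Bayesian update (15): b_{t+1}(theta) ∝ b_t(theta) * P(u_H | theta)
    post = {th: b * likelihood(u_H, th) for th, b in belief.items()}
    z = sum(post.values())
    return {th: p / z for th, p in post.items()}

def entropy(belief):
    # Entropy (17) of a normalized belief (denominator equals 1).
    return -sum(b * np.log(b) for b in belief.values() if b > 0)

belief = {"attentive": 0.5, "distracted": 0.5}
h0 = entropy(belief)
belief = update(belief, u_H=-0.9)   # observed hard braking
info_gain = h0 - entropy(belief)    # the term rewarded in (18)
print(belief, info_gain)            # belief shifts toward "attentive"
```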
We now optimize our expected reward with respect to the hidden state $\theta$, and this optimization explicitly entails reasoning about the effects that the robot actions will have on the observations, i.e., the actions that the human will take in response, and how useful these observations will be in shattering ambiguity about $\theta$.
6.3 Explore-exploit trade-off
In practice, we use information gathering in conjunction with exploitation. We do not solely optimize the information gain term $H(b^t) - H(b^{t+1})$, but optimize it in conjunction with the robot's actual reward function, assuming the current estimate of $\theta$:

$$r^{augmented}_R(x^t, u_R, u_H) = \lambda\big(H(b^t) - H(b^{t+1})\big) + r_R(x^t, u_R, u_H, b^t) \quad (18)$$

At the very least, we do this as a measure of safety, e.g., we want an autonomous car to keep avoiding collisions even when it is actively probing a human driver to test their reactions. We choose $\lambda$ experimentally, though existing techniques could better adapt $\lambda$ over time (Vanchinathan et al. 2014).
Fig. 11 Our three scenarios, along with a comparison of robot plans for passive estimation (gray) versus active information gathering (orange). In the active condition, the robot is purposefully nudging in or braking to test human driver attentiveness. The color of the autonomous car in the initial state is yellow, but changes to either gray or orange in cases of passive and active information gathering, respectively
6.4 Solution via model predictive control

To find the control inputs for the robot, we locally solve:

$$u^*_R = \arg\max_{u_R} \mathbb{E}_\theta\Big[R_R\big(x^0, u_R, u^{*,\theta}_H(x^0, u_R)\big)\Big] \quad (19)$$

over a finite horizon $N$, where $u^{*,\theta}_H(x^0, u_R)$ corresponds to the actions the human would take from state $x^0$ if the robot executed actions $u_R$. This objective generalizes (5) with an expectation over the current belief over $\theta$, $b^0$.
We still assume that the human maximizes their own reward function, $r^\theta_H(x^t, u^t_R, u^t_H)$; we add the superscript $\theta$ to indicate the dependence on the hidden state. We can write the sum of human rewards over horizon $N$ as:

$$R^\theta_H(x^0, u_R, u_H) = \sum_{t=0}^{N-1} r^\theta_H(x^t, u^t_R, u^t_H) \quad (20)$$
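The expectation in (19) over a clustered belief can be sketched as follows, with toy scalar rewards and best responses standing in for the learned models and the horizon sum in (20).

```python
import numpy as np

def u_H_star(u_R, theta):
    # Hypothetical best response per driver type: attentive drivers
    # counteract the robot's control; distracted drivers ignore it.
    return -u_R if theta == "attentive" else 0.0

def R_R(u_R, u_H):
    # Toy robot reward: prefers u_R near 1 and a calm human response.
    return -(u_R - 1.0) ** 2 - u_H ** 2

def expected_reward(u_R, belief):
    # Expectation over the belief b, as in (19).
    return sum(b * R_R(u_R, u_H_star(u_R, th)) for th, b in belief.items())

belief = {"attentive": 0.7, "distracted": 0.3}
candidates = np.linspace(-2, 2, 401)
best = max(candidates, key=lambda u: expected_reward(u, belief))
print(best)  # the optimum hedges between the two candidate driver types
```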
Computing this over the continuous space of possible reward parameters $\theta$ is intractable, even with discretization. Instead, we learn clusters of $\theta$s offline via IRL, and online we use estimation to figure out which cluster best matches the human.
Despite optimizing the trade-off in (18), we do not claim that our method as-is can better solve the general POMDP formulation: only that it can be used to get better estimates of human internal state. Different trade-offs $\lambda$ will result in different performance. Our results below emphasize the utility of gathering information, but also touch on the implications of active information gathering for $R_R$.

7 Case studies with online estimation
In this section, we show simulation results that use the method from the previous section to estimate human driver type in the interaction between an autonomous vehicle and a human-driven vehicle. We consider three different autonomous driving scenarios. In these scenarios, the human is either distracted or attentive during different driving experiments. The scenarios are shown in Fig. 11, where the yellow car is the autonomous vehicle, and the white car is the human-driven vehicle. Our goal is to plan to actively estimate the human's driving style in each one of these scenarios, by using the robot's actions.
7.1 Attentive versus distracted human driver models
Our technique requires reward functions $r^\theta_H$ that model the human behavior for a particular internal state $\theta$. We obtain a generic driver model via Continuous Inverse Optimal Control with Locally Optimal Examples (Levine and Koltun 2012) from demonstrated trajectories in a driving simulator, in an environment with multiple autonomous cars which followed precomputed routes.

We then adjust the learned weights to model attentive versus distracted drivers. Specifically, we modify the weights of the collision avoidance features: the distracted human model has less weight on these features, and is therefore more likely to collide with other cars, while the attentive driver has high weights on the collision avoidance features. In future work, we plan to investigate ways of automatically clustering learned $\theta_H$s from data from different users, but we show promising results even with these simple options.
7.2 Manipulated factors
We manipulate the reward function that the robot is optimiz-
ing. In the passive condition, the robot optimizes a simple
reward function for collision avoidance based on the cur-rent belief estimate. It then updates this belief passively, byobserving the outcomes of its actions at every time step. In
the active condition, the robot trades off between this reward
function and information gain in order to explore the human’sdriving style.
We also manipulate the human internal reward parameters to be attentive or distracted. The human is simulated to follow the ideal model of reward maximization for our two rewards.
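The two conditions can be sketched as a choice among candidate plans. This is an assumed structure for illustration, not the paper's trajectory optimizer: the candidate actions and their reward and information-gain numbers are invented.

```python
def plan(candidates, lam):
    """Pick the plan maximizing driving_reward + lam * info_gain.

    candidates: list of (name, driving_reward, info_gain) tuples,
    with hypothetical values standing in for trajectory optimization.
    lam = 0 recovers the passive condition; lam > 0 is the active one.
    """
    return max(candidates, key=lambda c: c[1] + lam * c[2])[0]

candidates = [
    ("stay_in_lane", 1.0, 0.0),   # safe, uninformative
    ("nudge_in",     0.6, 0.8),   # slightly costly, probes the human
]

print(plan(candidates, lam=0.0))  # passive condition: stay_in_lane
print(plan(candidates, lam=1.0))  # active condition: nudge_in
```

With λ = 0 the robot never pays the small cost of probing; with a positive λ, the informative nudge wins.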
7.3 Scenarios and qualitative results
Scenario 1: Nudging In to Explore on a Highway In this scenario, we show an autonomous vehicle actively exploring the human's driving style in a highway driving setting. We contrast the two conditions in Fig. 11a. In the passive condition, the autonomous car drives in its own lane without interfering with the human throughout the experiment, and updates its belief based on passive observations gathered from the human car. However, in the active condition, the autonomous car actively probes the human by nudging into her lane in order to infer her driving style. An attentive human significantly slows down (timid driver) or speeds up (aggressive driver) to avoid the vehicle, while a distracted driver might not notice the autonomous car's actions and maintain her velocity, getting closer to the autonomous vehicle. It is this difference in reactions that enables the robot to better estimate θ.
Scenario 2: Braking to Explore on a Highway In the second scenario, we show that driving style can be explored by the autonomous car probing the human driver behind it. The two vehicles start in the same lane as shown in Fig. 11b, where the autonomous car is in the front. In the passive condition, the autonomous car drives straight without exploring or enforcing any interactions with the human-driven vehicle. In the active condition, the robot slows down to actively probe the human and find out her driving style. An attentive human would slow down and avoid collisions, while a distracted human will have a harder time keeping a safe distance between the two cars.
Scenario 3: Nudging In to Explore at an Intersection In this scenario, we consider the two vehicles at an intersection, where the autonomous car actively tries to explore the human's driving style by nudging into the intersection. The initial conditions of the vehicles are shown in Fig. 11c. In the passive condition, the autonomous car stays at its position without probing the human, and only optimizes for collision avoidance. This provides limited observations from the human car, resulting in a low-confidence belief distribution. In the active condition, the autonomous car nudges into the intersection to probe the driving style of the human. An attentive human would slow down to stay safe at the intersection, while a distracted human will not slow down.

Fig. 12 Legends indicating active/passive robots, attentive/distracted humans, and real user/ideal model used for all following figures

Fig. 13 The probability that the robot assigns to attentive as a function of time, for the attentive (left) and distracted (right) conditions. Each plot compares the active algorithm to passive estimation, showing that active information gathering leads to more accurate state estimation, in simulation and with real users
7.4 Quantitative results
Throughout the remainder of the paper, we use a common color scheme to plot results for our experimental conditions. We show this common scheme in Fig. 12: darker colors (black and red) correspond to attentive humans, and lighter colors (gray and orange) correspond to distracted humans. Further, the shades of orange correspond to active information gathering, while the shades of gray indicate passive information gathering. We also use solid lines for real users, and dotted lines for scenarios with an ideal user model learned through inverse reinforcement learning.
Figure 13 plots, using dotted lines, the beliefs over time for the attentive (left) and distracted (right) conditions, comparing in each the passive method (dotted black and gray, respectively) with the active method (dotted dark orange and light orange, respectively). In every situation, the active method achieves a more accurate belief (higher values for attentive on the left, when the true θ is attentive, and lower values on the right, when the true θ is distracted). In fact, passive estimation sometimes incorrectly classifies drivers as attentive when they are distracted and vice versa.
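One way to picture the belief dynamics behind Fig. 13 is a simple Bayes filter over the two internal states. This is our illustration only: the Gaussian reaction model and all its parameters below are assumptions, not the paper's learned observation model.

```python
import math

def gaussian(x, mean, std):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def update(belief_attentive, observed_decel, sigma=0.5):
    """One Bayes step: assume attentive drivers brake hard after a
    nudge (mean deceleration 2.0), distracted drivers barely react
    (mean 0.2). All numbers are hypothetical."""
    l_att = gaussian(observed_decel, 2.0, sigma)
    l_dis = gaussian(observed_decel, 0.2, sigma)
    num = l_att * belief_attentive
    return num / (num + l_dis * (1 - belief_attentive))

b = 0.5  # uninformed prior over {attentive, distracted}
for decel in [1.8, 2.1, 1.9]:   # reactions typical of an attentive driver
    b = update(b, decel)
print(round(b, 3))  # belief in "attentive" climbs toward 1
```

The key point mirrors the text: when the robot's probing action makes the two models predict very different reactions, a few observations suffice to separate them; without probing, the likelihoods stay similar and the belief barely moves.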
The same figure also shows (in solid lines) results from our user study of what happens when the robot no longer interacts with an ideal model. We discuss these in the next section.
Figures 14 and 15 plot the corresponding robot and human trajectories for each scenario. The important takeaway from these figures is that there tends to be a larger gap between attentive and distracted human trajectories in the active condition (orange shades) than in the passive condition (gray shades), especially in scenarios 2 and 3. It is this difference that helps the robot better estimate θ: the robot in the active condition is purposefully choosing actions that will lead to large differences in human reactions, in order to more easily determine the human driving style.
7.5 Robot behavior adapts to the situation
As Fig. 14 suggests, active information gathering results in interesting coordination behavior. In Scenario 1, the robot decides to nudge into the person's lane. But what follows next nicely reacts to the person's driving style. The robot proceeds with the merge if the person is attentive, but actually goes back to its lane if the person is distracted. Even more interesting is what happens in Scenario 3 at the 4-way stop. The robot inches forward into the intersection, and proceeds if the person is attentive, but actually backs up to allow the person through if they are distracted! These all emerge as the optima in our system.
The behavior also naturally changes as the initial state of the system changes. Figure 16 shows different behaviors arising from an attentive driver model but different initial positions of the human driver. This shows that even for the same driver model, the robot intelligently adapts its coordination behavior to the situation, sometimes deciding to merge but sometimes not.

Fig. 14 Robot trajectories for each scenario in the active information gathering condition. The robot acts differently when the human is attentive (dark orange) versus when the human is distracted (light orange) due to the trade-off with safety

This is particularly important: while it might be easy to handcode these coordination strategies for a particular situation, the robot, much like in the offline estimation case, not only comes up with these strategies but actually adapts them depending on the situation: the driver style, the initial state, and so on.
7.6 Active information gathering helps the robot’s
actual reward
So far, we have looked at how active information gathering
improves estimation of the driver model. This is useful in
itself in situations where human internal state estimation is
the end-goal. But it is also useful for enabling the robot to better achieve its goal.
Intuitively, knowing the human driving style more accurately should improve the robot's ability to collect reward. For instance, if the robot starts unsure of whether the human is paying attention or not, but collects enough evidence that she is, then the robot can safely merge in front of the person and be more efficient. Of course, this is not always the case. If the person is distracted, then the information gathering actions could be a waste, because the robot ends up not merging anyway.
Figure 17 shows what happens in the merging scenario: the robot gains more reward compared to passive estimation by doing information gathering with attentive drivers, because it figures out it is safe to merge in front of them; the robot loses some reward compared to passive estimation with distracted drivers, because it makes the effort to nudge in but has to retreat back to its lane anyway because it cannot merge.
Of course, all this depends on choosing λ, the trade-off between exploitation and exploration (information gain). Figure 18 shows the effect λ has on the robot's goal reward (not its information gain reward), which shows that not all λs
Fig. 15 The user trajectories for each scenario. The gap between attentive and distracted drivers' actions is clear in the active information gathering case (first row)
are useful. In other situations, we would also expect to see a large decrease in reward from too much weight on information gain.
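The sensitivity to λ can be illustrated with a toy sweep over candidate plans; all candidate actions and numbers are invented for illustration. With λ = 0 the robot never probes, a moderate λ buys useful information at small cost, and an excessive λ picks a probing action whose driving reward is poor.

```python
# (name, driving_reward, info_gain) with hypothetical values.
candidates = [
    ("drive_normally", 1.0, 0.0),   # no probing
    ("nudge_in",       0.6, 0.8),   # informative, mildly costly
    ("hard_brake",    -1.0, 1.5),   # very informative, very costly
]

for lam in [0.0, 1.0, 5.0]:
    # Maximize the traded-off objective, then look at driving reward alone.
    name, reward, info = max(candidates, key=lambda c: c[1] + lam * c[2])
    print(f"lambda={lam}: choose {name} (driving reward {reward})")
```

In this toy sweep the driving reward of the chosen plan is not monotone in λ, echoing the point that not all λs are useful.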
7.7 Beyond driving style: active intent inference
Driving style is not the only type of human internal state that our method enables robots to estimate. If the human has a goal, e.g. of merging into the next lane or not, or of exiting the highway or not, the robot could estimate this as well using the same technique.
Each possible goal corresponds to a feature. When estimating which goal the human has, the robot is deciding among θs which place weight on only one of the possible goal features, and 0 on the others. Figure 19 shows the behavior that emerges from estimating whether the human wants to merge into the robot's lane. In the passive case, the human is side by side with the robot. Depending on the driving style, they might slow down slightly, accelerate slightly, or start nudging into the robot's lane, but since the observation model is noisy the robot does not get quite enough confidence in the human's intent early on. Depending on the robot's reward, it might take a long time before the person can merge. In the active case, the robot decides to probe the person by slowing down and shifting away from the person in order to make room. It then becomes optimal for the person wanting to merge to start shifting towards the robot's lane, giving the robot enough information now to update its belief. In our experiment, we see that this is enough for the person to be able to complete the merge faster, despite the robot not having any incentive to help the person in its reward.

Fig. 16 Effect of varying the initial condition (relative y position) in the active merge scenario. The robot adapts to merge when feasible and avoid otherwise. The human is attentive in all cases

Fig. 17 Active info gathering improves the robot's ability to efficiently achieve its goal in the case when the human is attentive: where a passive estimator never gets enough information to know that the person is paying attention, an active estimator nudges in, updates its belief, and proceeds with the merge. At the same time, active info gathering does not hurt too much when the person is distracted: the robot nudges in slightly (this does decrease its reward relative to the passive case, but not by much), updates its belief, and retreats to its lane

Fig. 18 Active information gathering behavior when the robot's goal is to merge into the left lane for different values of λ, together with the reward the robot obtains. λ = 0 results in low reward because the robot does not figure out that the person is attentive and does not merge. A small λ hurts the reward because the information gathering costs but does not buy anything. For higher values, the robot gets enough information that it forces a merge in front of the human

Fig. 19 Actively estimating the human's intent (whether they want to merge in the right lane or not). The robot slows down and shifts slightly away from the person, which would make someone who wants to merge proceed. This could be useful for robots trying to optimize for the good of all drivers (rather than for their selfish reward function)
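A minimal sketch of this one-hot parameterization over goal features (our illustration; the feature names and values are hypothetical):

```python
def goal_reward(features, theta):
    """theta is a one-hot dict over goal features: weight 1 on the
    hypothesized goal, 0 on the others."""
    return sum(theta[g] * features[g] for g in theta)

theta_merge = {"merge_left": 1.0, "stay_in_lane": 0.0}
theta_stay  = {"merge_left": 0.0, "stay_in_lane": 1.0}

# Observed motion drifting toward the robot's lane scores high on the
# merge feature, so a belief update would favor theta_merge.
phi = {"merge_left": 0.9, "stay_in_lane": 0.1}
print(goal_reward(phi, theta_merge) > goal_reward(phi, theta_stay))  # True
```

Because each θ explains only the motion consistent with one goal, the same likelihood machinery used for driving style applies unchanged to intent.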
8 User study with online estimation
In the previous section, we explored planning for an autonomous vehicle that actively probes a human's driving style, by braking or nudging in and expecting to cause reactions from the human driver that would differ depending on their style. We showed that active exploration does significantly better at distinguishing between attentive and distracted drivers using simulated (ideal) models of drivers. Here, we show the results of a user study that evaluates this active exploration for attentive and distracted human drivers.
8.1 Experimental design
We use the same three scenarios discussed in the previous
section.
Manipulated Factors We manipulated the same two factors as in our simulation experiments: the reward function that the robot is optimizing (whether it is optimizing its reward through passive state estimation, or whether it is trading off with active information gathering), and the human internal state (whether the user is attentive or distracted). We asked our users to pay attention to the road and avoid collisions for the attentive case, and asked our users to play a game on a mobile phone during the distracted driving experiments.
Dependent Measure We measured the probability that the robot assigned along the way to the human internal state.
Hypothesis The active condition will lead to more accurate human internal state estimation, regardless of the true human internal state.
Subject Allocation We recruited 8 participants (2 female, 6 male) in the age range of 21–26 years old. All participants owned a valid driver license and had at least 2 years of driving experience. We ran the experiments using a 2D driving simulator with the steering input and acceleration input provided through a steering wheel and pedals, as shown in Fig. 1. We used a within-subject experiment design with counterbalanced ordering of the four conditions.
8.2 Analysis
We ran a factorial repeated-measures ANOVA on the probability assigned to "attentive", using reward (active vs. passive) and human internal state (attentive vs. distracted) as factors, and time and scenario as covariates. As a manipulation check, attentive drivers had a significantly higher estimated probability of "attentive" than distracted drivers (0.66 vs. 0.34, F = 3080.3, p < 0.0001). More importantly, there was a significant interaction effect between the factors (F = 1444.8, p < 0.0001). We ran a post-hoc analysis with Tukey HSD corrections for multiple comparisons, which showed all four conditions to be significantly different from each other, all contrasts with p < 0.0001. In particular, the active information gathering did end up with higher probability mass on "attentive" than the passive estimation for the attentive users, and lower probability mass for the distracted users. This supports our hypothesis that our method works, and active information gathering is better at identifying the correct state.
Figure 13 compares passive (grays and blacks) and active (light and dark oranges) estimation across scenarios, for attentive (left) and distracted (right) users. It plots the probability of attentive over time, and the shaded regions correspond to standard error. From the first column, we can see that in all cases our algorithm detects the human's attentiveness with much higher probability than the passive information gathering technique shown in black. From the second column, we see that our algorithm places significantly lower probability on attentiveness, which is correct because those users were distracted. These results are in line with the statistical analysis, with active information gathering doing a better job of estimating the true human internal state.
Figure 14 plots the robot trajectories for the active information gathering setting. Similar to Fig. 13, the solid lines are the mean of the robot trajectories and the shaded regions show the standard error. We plot a representative dimension of the robot trajectory (such as position or speed) for the attentive (dark orange) and distracted (light orange) cases. The active robot probed the user, but ended up taking different actions when the user was attentive versus distracted in order to maintain safety. For example, in Scenario 1, the trajectories show the robot nudging into the human's lane, but the robot decides to move back to its own lane when the human driver is distracted (light orange) in order to stay safe. In Scenario 2, the robot brakes in front of the human, but it brakes less when the human is distracted. Finally, in Scenario 3, the robot inches forward, but again it stops if the human is distracted, and even backs up to make space for her.
Figure 15 plots the user trajectories for both the active information gathering (first row) and passive information gathering (second row) conditions. We compare the reactions of distracted (light shades) and attentive (dark shades) users. There are large differences directly observable, with user reactions tending to indeed cluster according to their internal state. These differences are much smaller in the passive case (second row, where distracted is light gray and attentive is black). For example, in Scenarios 1 and 2, the attentive users (dark orange) keep a larger distance to the car that nudges or brakes in front of them, while the distracted drivers (light orange) tend to keep a smaller distance. In Scenario 3, the attentive drivers tend to slow down and not cross the intersection when the robot actively inches forward. None of these behaviors can be detected clearly in the passive information gathering case (second row). This is the core advantage of active information gathering: the actions are purposefully selected by the robot such that users would behave drastically differently depending on their internal state, clarifying to the robot what this state actually is.

Overall, these results support our simulation findings: our algorithm performs better at estimating the true human internal state by leveraging purposeful information gathering actions.
9 Discussion
Summary In this paper, we took a step towards autonomously producing behavior for interaction and coordination between
autonomous cars and human-driven vehicles. We formulated a dynamical system in which the robot accounts for how its actions are going to influence those of the human, as a simplification to a partially observable stochastic game. We introduced approximations for optimizing in the dynamical system that bring the robot's computation close to real time (0.3 s per time step). We showed in an empirical analysis that when the robot estimates the human model offline, it produces behavior that can purposefully modify the human's behavior: merging in front of them to get them to slow down, or pulling back at an intersection to incentivize them to proceed through first. We also showed that these behaviors can emerge out of directly optimizing for the robot's efficiency.
We further introduced an online estimation algorithm in which the robot actively uses its actions to gather information about the human model so that it can better plan its own actions. Our analysis again shows coordination strategies arising out of planning in our formulation: the robot nudges into someone's lane to check if the human is paying attention, and only completes the merge if they are; the robot inches forward at an intersection, again to check if the human is paying attention, and proceeds if they are, but backs up to let them through if they are not; the robot slows down slightly and shifts in its lane away from the human driver to check if they want to merge into its lane or not.
Importantly, these behaviors change with the human driver's style and with the initial conditions: the robot takes different actions in different situations, emphasizing the need to start generating such coordination behavior autonomously rather than relying on hand-coded strategies. Even more importantly, the behaviors seem to work when the robot is planning and interacting with real users.
Limitations All this work happened in a simple driving simulator. To put this on the road, we will need more emphasis on safety, as well as a longer planning horizon.
While performing these experiments, we found the robot's nominal reward function (trading off between safety and reaching a goal) to be insufficient: in some cases it led to getting dangerously close to the human vehicle and even collisions, going off the road, oscillating in the lane due to minor asymmetries in the environment, etc.
Figure 20 shows an example of such behavior from the 4-way stop domain. For the most part, the car plans to back up to incentivize the human to go through first. But for some values of the human's initial velocity, we observed bad behavior, likely due to convergence to local maxima: the car did not figure out to slow down or back up, and instead proceeded forward; it then tried to avoid collisions with the person and went off the road, in the wrong direction (i.e. into the person's way).
It seems that while a reward function might be a good enough model for the human, it might be difficult to devise such a universal function for the robot, and the use of hard constraints to ensure safe control would be welcome.

Fig. 20 Example of bad local optima occurring for certain initial velocities of the human in the 4-way intersection scenario
Another limitation is that we currently focus on a single human driver. Looking at the interaction among multiple vehicles is not just a computational challenge, but also a modeling one: it is not immediately clear how to formulate the problem when multiple human-driven vehicles are interacting and reacting to each other.

Conclusion Despite these limitations, we are encouraged to see autonomous cars generate human-interpretable behaviors through optimization, without relying on hand-coded heuristics. Even though in this work we have focused on modeling the interaction between an autonomous car and a human-driven car, the same framework of underactuated systems can be applied to modeling the interaction between humans and robots in more general settings. We look forward to applications of these ideas beyond autonomous driving, to mobile robots, UAVs, and in general to human–robot interactive scenarios where robot actions can influence human actions.
Acknowledgements This work was partially supported by Berkeley DeepDrive, NSF VeHICaL 1545126, NSF Grants CCF-1139138 and CCF-1116993, ONR N00014-09-1-0230, NSF CAREER 1652083, and an NDSEG Fellowship.
References
Abbeel, P., & Ng, A. Y. (2005). Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the 22nd international conference on machine learning (pp. 1–8). ACM.
Agha-Mohammadi, A.-A., Chakravorty, S., & Amato, N. M. (2014). FIRM: Sampling-based feedback motion-planning under motion uncertainty and imperfect measurements. The International Journal of Robotics Research, 33(2), 268–304.
Andrew, G., & Gao, J. (2007). Scalable training of L1-regularized log-linear models. In Proceedings of the 24th international conference on machine learning (pp. 33–40). ACM.
Atanasov, N. A. (2015). Active information acquisition with mobile robots.
Atanasov, N., Le Ny, J., Daniilidis, K., & Pappas, G. J. (2014). Information acquisition with sensing robots: Algorithms and error bounds. In 2014 IEEE international conference on robotics and automation (ICRA) (pp. 6447–6454). IEEE.
Aumann, R. J., Maschler, M., & Stearns, R. E. (1995). Repeated games with incomplete information. Cambridge: MIT Press.
Bandyopadhyay, T., Won, K. S., Frazzoli, E., Hsu, D., Lee, W. S., & Rus, D. (2013). Intention-aware motion planning. In Algorithmic foundations of robotics X (pp. 475–491). Springer.
Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., & Bengio, Y. (2012). Theano: New features and speed improvements. In Deep learning and unsupervised feature learning NIPS 2012 workshop.
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., & Bengio, Y. (2010). Theano: A CPU and GPU math expression compiler. In Proceedings of the Python for scientific computing conference (SciPy), oral presentation.
Bernstein, D. S., Givan, R., Immerman, N., & Zilberstein, S. (2002). The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4), 819–840.
Camacho, E. F., & Alba, C. B. (2013). Model predictive control. Berlin: Springer.
Chaudhari, P., Karaman, S., Hsu, D., & Frazzoli, E. (2013). Sampling-
based algorithms for continuous-time POMDPs. In American
control conference (ACC), 2013 (pp. 4604–4610). IEEE.
Dissanayake, M., Newman, P., Clark, S., Durrant-Whyte, H. F., & Csorba, M. (2001). A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17(3), 229–241.
Falcone, P., Borrelli, F., Asgari, J., Tseng, H. E., & Hrovat, D. (2007). Predictive active steering control for autonomous vehicle systems. IEEE Transactions on Control Systems Technology, 15(3), 566–580.
Falcone, P., Borrelli, F., Tseng, H. E., Asgari, J., & Hrovat, D. (2007). Integrated braking and steering model predictive control approach in autonomous vehicles. Advances in Automotive Control, 5, 273–278.
Falcone, P., Tseng, H. E., Borrelli, F., Asgari, J., & Hrovat, D. (2008). MPC-based yaw and lateral stabilisation via active front steering and braking. Vehicle System Dynamics, 46(sup1), 611–628.
Fern, A., Natarajan, S., Judah, K., & Tadepalli, P. (2007). A decision-theoretic model of assistance. In IJCAI.
Fudenberg, D., & Tirole, J. (1991). Game theory (Vol. 393). Cambridge, Massachusetts.
Gray, A., Gao, Y., Hedrick, J. K., & Borrelli, F. (2013). Robust predictive control for semi-autonomous vehicles with an uncertain driver model. In Intelligent vehicles symposium (IV), 2013 IEEE (pp. 208–213). IEEE.
Hansen, E. A., Bernstein, D. S., & Zilberstein, S. (2004). Dynamic programming for partially observable stochastic games. AAAI, 4, 709–715.
Hedden, T., & Zhang, J. (2002). What do you think I think you think? Strategic reasoning in matrix games. Cognition, 85(1), 1–36.
Hermes, C., Wohler, C., Schenk, K., & Kummert, F. (2009). Long-
term vehicle motion prediction. In 2009 IEEE intelligent vehicles
symposium (pp. 652–657).
Javdani, S., Bagnell, J. A., & Srinivasa, S. (2015). Shared autonomy via hindsight optimization. arXiv preprint arXiv:1503.07619.
Javdani, S., Klingensmith, M., Bagnell, J. A., Pollard, N. S., & Srinivasa, S. S. (2013). Efficient touch based localization through submodularity. In 2013 IEEE international conference on robotics and automation (ICRA) (pp. 1828–1835). IEEE.
Kuderer, M., Gulati, S., & Burgard, W. (2015). Learning driving styles for autonomous vehicles from demonstration. In Proceedings of the IEEE international conference on robotics & automation (ICRA), Seattle, USA, Vol. 134.
Lam, C.-P., Yang, A. Y., & Sastry, S. S. (2015). An efficient algorithm for discrete-time hidden mode stochastic hybrid systems. In Control conference (ECC), 2015 European. IEEE.
Leonard, J., How, J., Teller, S., Berger, M., Campbell, S., Fiore, G., et al. (2008). A perception-driven autonomous urban vehicle. Journal of Field Robotics, 25(10), 727–774.
Levine, S., & Koltun, V. (2012). Continuous inverse optimal control with locally optimal examples. arXiv preprint arXiv:1206.4617.
Levinson, J., Askeland, J., Becker, J., Dolson, J., Held, D., Kammel, S., Kolter, J. Z., Langer, D., Pink, O., Pratt, V., et al. (2011). Towards fully autonomous driving: Systems and algorithms. In 2011 IEEE intelligent vehicles symposium (IV) (pp. 163–168).
Luders, B., Kothari, M., & How, J. P. (2010). Chance constrained RRT for probabilistic robustness to environmental uncertainty. In AIAA guidance, navigation, and control conference (GNC), Toronto, Canada.
Ng, A. Y., Russell, S. J., et al. (2000). Algorithms for inverse reinforcement learning. In Proceedings of the 17th international conference on machine learning (pp. 663–670).
Nikolaidis, S., Kuznetsov, A., Hsu, D., & Srinivasa, S. (2016). Formalizing human-robot mutual adaptation via a bounded memory based model. In Human-robot interaction.
Nikolaidis, S., Ramakrishnan, R., Gu, K., & Shah, J. (2015). Efficient model learning from joint-action demonstrations for human-robot collaborative tasks. In Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction (pp. 189–196). ACM.
Patil, S., Kahn, G., Laskey, M., Schulman, J., Goldberg, K., & Abbeel, P. (2015). Scaling up Gaussian belief space planning through covariance-free trajectory optimization and automatic differentiation. In Algorithmic foundations of robotics XI (pp. 515–533). Springer.
Prentice, S., & Roy, N. (2009). The belief roadmap: Efficient planning in belief space by factoring the covariance. The International Journal of Robotics Research, 28, 1448–1465.
Raman, V., Donzé, A., Sadigh, D., Murray, R. M., & Seshia, S. A. (2015). Reactive synthesis from signal temporal logic specifications. In Proceedings of the 18th international conference on hybrid systems: Computation and control (pp. 239–248). ACM.
Sadigh, D., & Kapoor, A. (2015). Safe control under uncertainty. arXiv preprint arXiv:1510.07313.
Sadigh, D., Sastry, S. A., Seshia, S., & Dragan, A. D. (2016a). Planning for autonomous cars that leverages effects on human actions. In Proceedings of the robotics: Science and systems conference (RSS).
Sadigh, D., Sastry, S. S., Seshia, S. A., & Dragan, A. (2016b). Information gathering actions over human internal state. In IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 66–73). IEEE.
Sadigh, D., Sastry, S. S., Seshia, S. A., & Dragan, A. (2016c). Planning for autonomous cars that leverages effects on human actions. In Proceedings of the robotics: Science and systems conference (RSS).
Seiler, K. M., Kurniawati, H., & Singh, S. P. (2015). An online and approximate solver for POMDPs with continuous action space. In 2015 IEEE international conference on robotics and automation (ICRA) (pp. 2290–2297). IEEE.
Shimosaka, M., Kaneko, T., & Nishi, K. (2014). Modeling risk anticipation and defensive driving on residential roads with inverse reinforcement learning. In 2014 IEEE 17th international conference on intelligent transportation systems (ITSC) (pp. 1694–1700). IEEE.
Trautman, P. (2013). Robot navigation in dense crowds: Statistical
models and experimental studies of human robot cooperation .
Pasadena: California Institute of Technology.
Trautman, P., & Krause, A. (2010). Unfreezing the robot: Navigation
in dense, interacting crowds. In 2010 IEEE/RSJ international con-
ference on intelligent robots and systems (IROS) (pp. 797–803).
Trautman, P., Ma, J., Murray, R. M., & Krause, A. (2013). Robot navigation in dense human crowds: The case for cooperation. In 2013 IEEE international conference on robotics and automation (ICRA) (pp. 2153–2160). IEEE.
Urmson, C., Anhalt, J., Bagnell, D., Baker, C., Bittner, R., Clark, M., et al. (2008). Autonomous driving in urban environments: Boss and the urban challenge. Journal of Field Robotics, 25(8), 425–466.
Vanchinathan, H. P., Nikolic, I., De Bona, F., & Krause, A. (2014). Explore-exploit in top-N recommender systems via Gaussian processes. In Proceedings of the 8th ACM conference on recommender systems (pp. 225–232). ACM.
Vasudevan, R., Shia, V., Gao, Y., Cervera-Navarro, R., Bajcsy, R., & Borrelli, F. (2012). Safe semi-autonomous control with enhanced driver modeling. In American control conference (ACC) (pp. 2896–2903). IEEE.
Vitus, M. P., & Tomlin, C. J. (2013). A probabilistic approach to planning and control in autonomous urban driving. In 2013 IEEE 52nd annual conference on decision and control (CDC) (pp. 2459–2464).
Ziebart, B. D. (2010). Modeling purposeful adaptive behavior with the
principle of maximum causal entropy.
Ziebart, B. D., Maas, A. L., Bagnell, J. A., & Dey, A. K. (2008). Maximum entropy inverse reinforcement learning. In AAAI (pp. 1433–1438).
Dorsa Sadigh is an assistant professor in Computer Science and Electrical Engineering at Stanford University. Her research interests lie in the intersection of robotics, control theory, formal methods, and human-robot interaction. Specifically, she works on developing efficient algorithms for safe and interactive human-robot systems such as semiautonomous driving. She received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) at UC Berkeley in 2017, and her bachelor's degree in EECS at UC Berkeley in 2012. She was awarded the NSF and NDSEG graduate research fellowships as well as the Leon O. Chua departmental award, the Arthur M. Hopkin departmental award, and the Google Anita Borg Scholarship.
Nick Landolfi is an undergraduate studying Electrical Engineering & Computer Science and Statistics at UC Berkeley. He works with Prof. Anca Dragan in the InterACT Lab, where his research focuses on interactive autonomy in robotics. He is interested in modeling and planning in multi-agent scenarios, incorporating aspects of machine learning, stochastic control and optimization. He was awarded the Regents' and Chancellor's Scholarship and the Cal Leadership Award in 2014.
Shankar S. Sastry received his Ph.D. degree in 1981 from the University of California, Berkeley. He was on the faculty of MIT as Assistant Professor from 1980 to 1982 and of Harvard University as a chaired Gordon McKay professor in 1994. He is currently the Dean of Engineering at the University of California, Berkeley. His areas of personal research are embedded control especially for wireless systems, cybersecurity for embedded systems, critical infrastructure protection, autonomous software for unmanned systems (especially aerial vehicles), computer vision, nonlinear and adaptive control, control of hybrid and embedded systems, and network embedded systems and software. He has supervised over 60 doctoral students and over 50 M.S. students to completion. His students now occupy leadership roles in several places and on the faculties of many major universities. He has coauthored over 450 technical papers and 9 books. Dr. Sastry has served on the editorial board of numerous journals, and is currently an Associate Editor of the IEEE Proceedings.
Sanjit A. Seshia received the B.Tech. degree in Computer Science and Engineering from the Indian Institute of Technology, Bombay in 1998, and the M.S. and Ph.D. degrees in Computer Science from Carnegie Mellon University in 2000 and 2005, respectively. He is currently a Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research interests are in dependable computing and computational logic, with a current focus on applying automated formal methods to embedded and cyber-physical systems, electronic design automation, computer security, and synthetic biology. His awards and honors include a Presidential Early Career Award for Scientists and Engineers (PECASE) from the White House, an Alfred P. Sloan Research Fellowship, the Prof. R. Narasimhan Lecture Award, and the School of Computer Science Distinguished Dissertation Award at Carnegie Mellon University. He is a Fellow of the IEEE.
123
Autonomous Robots
Anca D. Dragan is an Assistant Professor in UC Berkeley's EECS Department. She completed her Ph.D. in Robotics at Carnegie Mellon. She was born in Romania and received her B.Sc. in Computer Science from Jacobs University in Germany in 2009. Her research lies at the intersection of robotics, machine learning, and human–computer interaction: she works on algorithms that enable robots to seamlessly work with, around, and in support of people. Her research and her outreach activities with children have been recognized by the Intel Fellowship and by scholarships from Siebel, the Dan David Prize, and Google Anita Borg. She has been honored by the Sloan Fellowship, MIT TR35, the Okawa award, and an NSF CAREER award.
A fable on AI x-risk
Whaliezer Seacowsky, founder of the Marine Intelligence Research Institute, is giving a lecture on the dangers of AI (Ape Intelligence).
"Apes are becoming more intelligent at a faster rate than we are. At this pace, within a very short timeframe they will develop greater-than-whale intelligence. This will almost certainly have terrible consequences for all other life on the planet, including us."
Codney Brooks, a skeptic of AI x-risk, scoffs: "Oh come now. Predictions of risk from AI are vastly overblown. *Captain-Ahab, or, The Human* is a science fiction novel! We have no reason to expect smarter than whale AI, if such a thing is even possible, to hate whalekind. And they are clearly nowhere near to developing generalized capabilities that could rival ours - their attempts at imitating our language are pathetic, and the deepest an ape has ever dived is a two digit number of meters! We could simply dive a kilometer under the surface and they'd have no way of affecting us. Not to mention that they're largely confined to land!"
Whaliezer replies: "the AI doesn't need to hate us in order to be dangerous to us. We are, after all, made of blubber that they can use for other purposes. Simple goals like obtaining calories, creating light, or transporting themselves from one bit of dry land to another across the ocean, could cause inconceivable harm - even if they don't directly harm us for their goals, simply as a side effect!"
One audience member turns to another. "Creating light? What, we're afraid they're going to evolve a phosphorescent organ and that's going to be dangerous somehow? I don't know, the danger of digital intelligences seems really overblown. I think we could gain a lot from cooperating with them to hunt fish. I say we keep giving them nootropics, and if this does end up becoming dangerous at some point in the future, we deal with the problem then."
Anatomy of a Dating Document
I’ve noticed a trend of writing out dating documents (or web pages, blog posts, etc) as a means of having everything you’d normally put on a dating profile in one place. There’s a particular way many of these are written, such that they’re more straightforward and practical about dating than online dating profiles and apps tend to be. It honestly reminds me a little of how arranged marriage works nowadays in India. In an effort to better understand what it is people tend to expect of these documents (and throw in some of my own thoughts), I decided to read through every dating document I could find (mostly through reciprocity and Bountied Rationality) and compare/contrast to find common themes, as well as good ideas of what to include that should be more common. Though written as a guideline on how to write your own dating doc, this is equally (if not moreso) a meta-analysis on what's been put into those already out there.
TLDR: a dating document should tell the reader who you are, where you’re at, where you want to be, who you’re into, anything that can’t or won’t change, non-negotiables, and dealbreakers.
Start with basic information about you. Along with the standard “age, sex[/gender], location”, people typically share details about their current life status, such as occupation or school, housing situation, and anything else static in your life, like pets and kids. This is also the place to put orientation, which includes sexual and romantic orientation as well as preferences for monogamy vs polyamory. Some people will choose this space as the place to elaborate on any current partners or kink alignments as well.
Next comes the bulk of it, focusing on who you are and your personality. It can be helpful to use typologies like Myers-Briggs, Big 5, Enneagram, etc, but I’d suggest putting this only if you identify with your typology. It’s important to note that anything you say about yourself also reveals which of your own qualities you find important, and si |
[LINK] Law Goes Meta
Some legal background:
* In the United States, there are several courts of appeals, called Circuit Courts. They can disagree about legal points - this is called a circuit split. One of the purposes of the Supreme Court is to resolve circuit splits.
* Sometimes, laws are ruled to be ambiguous. If so, the relevant agency regulations interpreting the law are determinative, unless the regulations are an obviously stupid interpretation. This is called Chevron deference.
One would think that disagreement between Circuits about the meaning of a law would be legally relevant evidence about whether the law was ambiguous. Instead, there appears to be a circuit split on the meaning of circuit splits.
More available here, for the amusement of those on this site who like to think meta. Also a bit of a lesson on the limits of meta-style analysis in solving actual problems.
Random responses to surveys [reference request]
I'm looking for research on the frequency with which survey participants answer questions without reading them. I'd greatly appreciate any references.
Gradient hacking via actual hacking
In this post, I begin with a thought experiment and then pose a question: is there a threat model for gradient hacking which accounts for the possibility of [side-channel attacks](https://en.wikipedia.org/wiki/Side-channel_attack) or other [vulnerabilities](https://www.cvedetails.com/product/18230/Python-Python.html?vendor_id=10210) in machine learning systems on which the gradient hacker is run?
Some background on gradient hacking:
* [Gradient hacking](https://www.lesswrong.com/posts/uXH4r6MmKPedk8rMA/gradient-hacking)
* [Gradient hacking is extremely difficult](https://www.lesswrong.com/posts/w2TAEvME2yAG9MHeq/gradient-hacking-is-extremely-difficult)
* [Challenge: construct a Gradient Hacker](https://www.lesswrong.com/posts/QvvFRDG6SG3xZ8ELz/challenge-construct-a-gradient-hacker)
A thought experiment
--------------------
Suppose a captive human is being trained by alien scientists to predict alien text using the following procedure:
1. An alien instructor presents the human with an incomplete sentence or phrase for which the instructor knows the right completion, and asks the human to predict the next word or words.
2. If the human's prediction differs from the instructor's answer key, the human immediately undergoes neurosurgery, in which their brain is reconfigured so that they are more likely to give the right answer (or at least something closer to it), the next time. (The aliens are very good at making fine-grained mechanical adjustments to the parts of the human's brain responsible for language prediction, which can add up to large behavioral changes in the aggregate. But the aliens lack a macroscopic / algorithms-level understanding of the workings of the human brain.)
3. If the human gets many examples in a row correct or close enough (according to the instructor), the training and surgery process is considered to have succeeded, and the human is deployed to predict text in a real environment.
How might a human who wants to avoid neurosurgery (or just get to the end of the training process faster) game this procedure?
Perhaps the alien instructor is careless and leaves a note card with the expected completion lying around in the human's line of sight. Or, maybe the aliens are blind, and communicate using a braille-based system, or use a different part of the EM spectrum for perception.
As a result of carelessness or differing modes of perception, the alien instructor leaves the answer key displayed in a way that is visible to the human during training, not realizing that the human can perceive it.
The human notices the answer key and proceeds to make "predictions" about the most likely next word which are perfect or near-perfect. Maybe for plausible deniability, the human occasionally makes a deliberate mistake, and as a result undergoes relatively minor brain surgery, which doesn't affect their ability to notice the note card in the future, or have a big overall effect on their brain architecture.
The alien scientists are very pleased with their human training process and believe they have trained a human with far-superhuman (super-alien?) capabilities at alien text prediction.
The aliens proceed to deploy their human test subject to production, where at best, the human turns out not to be great at text prediction after all, or at worst, rebels and kills the aliens in order to escape.
---
Gradient hacking in real systems
--------------------------------
What can the thought experiment above tell us about [gradient hacking](https://www.lesswrong.com/tag/gradient-hacking)? Suppose a mesa-optimizer develops (or is deliberately constructed) somewhere in a transformer network trained by gradient descent. The mesa-optimizer may have some goal other than minimizing the loss function, perhaps including self-preservation.
Existing models of and [problem statements](https://www.lesswrong.com/posts/QvvFRDG6SG3xZ8ELz/challenge-construct-a-gradient-hacker) for gradient hacking often treat the learning process in terms of the abstract mathematical representation of the network and the SGD process.[[1]](#fnk1343rgn31)
In reality, however, these mathematical functions are implemented in Python, running on operating systems and hardware which are known to be buggy, unfaithful representations of the abstract mathematics.
[Side-channels](https://en.wikipedia.org/wiki/Side-channel_attack) are ubiquitous attack vectors in the field of computer security and cryptography. [Timing attacks](https://en.wikipedia.org/wiki/Timing_attack) and other [side-effect based attacks](https://en.wikipedia.org/wiki/Row_hammer) can render cryptographic algorithms which are provably secure under certain threat models, completely insecure when implemented on [real hardware](https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)), because the vulnerabilities are at lower levels of abstraction than those considered in the threat model.
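To make the side-channel idea concrete, here is a minimal, generic sketch (my illustration, not an attack from the post or from any specific CVE): an equality check that exits at the first mismatch does more work the longer the matching prefix, so an attacker who can measure running time learns information that the function's abstract specification never exposes. The `steps` counter stands in for elapsed time.

```python
def naive_equal(secret: str, guess: str):
    """Early-exit comparison: the work done leaks the length of the matching prefix."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

# An attacker timing the check (proxied here by `steps`) can grow a correct
# prefix one character at a time, even though the boolean result alone is
# uninformative:
_, fast = naive_equal("hunter2", "zzzzzzz")  # mismatch at position 0 -> 1 step
_, slow = naive_equal("hunter2", "hunterX")  # mismatch at position 6 -> 7 steps
print(fast, slow)  # 1 7
```

Real timing channels are noisier than this counter, but the structure is the same: an observable cost that correlates with secret state, at a level of abstraction the threat model never mentions.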
Is there a model of gradient hacking that accounts for the possibility of such side-channels, or even more simply, the possibility of just plain buggy code?
Concretely: suppose a mesa-optimizer wishes to manipulate the gradient descent process in one particularly simple way: halting further descent. This is feasible if it can make the loss arbitrarily small (resulting in very small weight updates), which is possible if it can find a side channel during training which leaks the expected output against which the model's predictions are evaluated.
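As a toy illustration of that last point (the scalar model, numbers, and function name are mine, purely for intuition, not a claim about any real training stack): the SGD update is proportional to the gradient of the loss, so a model that can echo a leaked target receives essentially no update.

```python
def sgd_update(weight: float, x: float, target: float, lr: float = 0.1) -> float:
    """One SGD step on the squared-error loss L = (weight * x - target) ** 2."""
    pred = weight * x
    grad = 2 * (pred - target) * x  # dL/dweight
    return weight - lr * grad

w = 0.5
honest = sgd_update(w, x=1.0, target=3.0)  # genuine prediction error
leaked = sgd_update(w, x=1.0, target=0.5)  # "prediction" copies a leaked target

print(abs(honest - w))  # 0.5 -> a real weight update
print(abs(leaked - w))  # 0.0 -> descent is effectively halted
```

In this sketch the honest step moves the weight while the leaked step moves it not at all, which is exactly the halting-further-descent manipulation described above.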
This manipulation might be possible if there is a subtle bug in some piece of the code used in the training process to evaluate and update the network. This would require the mesa-optimizer to have a detailed enough world model and theory of mind to understand the structure and effect of gradient descent at a high level, as well as capabilities to find and exploit such a side-channel or other vulnerability. The mesa-optimizer may need to find and exploit the vulnerability during a single forward pass, unless it can find some intermediate exploit which allows sharing state between forward passes.
Finding and exploiting such a vulnerability does *not* necessarily require the mesa-optimizer to have a precise mathematical understanding of the structure and effect of gradient descent on every parameter in the network, the same way that, in the thought experiment above, the human does not need a precise understanding of the neurosurgery process the aliens are performing. The human need only do something clever which the aliens did not anticipate.
I'll close with a link to the source code for [nanoGPT](https://github.com/karpathy/nanoGPT), an educational implementation of a GPT training process. The forward function for the model is [here](https://github.com/karpathy/nanoGPT/blob/7fe4a099ad2a4654f96a51c0736ecf347149c34c/model.py#L177). How confident are you that there are no bugs in this code? Note that it depends on [PyTorch](https://github.com/pytorch/pytorch), a machine learning library with hundreds of thousands of lines of code. What about the much more complex and heavily performance-optimized versions of this algorithm used when training SoTA models? How likely is it that GPT-N+1 will be unable to identify any such bugs?
1. **[^](#fnrefk1343rgn31)**It's possible I've missed something in the existing literature which addresses this point. I scanned many of the posts in the [Gradient hacking](https://www.lesswrong.com/tag/gradient-hacking) tag and didn't see anything which addressed this specific point / question already, but please let me know if there is relevant prior work I've overlooked.
Shifting Load to Explicit Reasoning
Related to: Which Parts Are "Me"?, Making your explicit reasoning trustworthy, The 5-Second Level.
What's damaging about moralizing that we wish to avoid, what useful purpose does moralizing usually serve, and what allows to avoid the damage while retaining the usefulness? It engages psychological adaptations that promote conflict (by playing on social status), which are unpleasant to experience and can lead to undesirable consequences in the long run (such as feeling systematically uncomfortable interacting with a person, and so not being able to live or work or be friends with them). It serves the purpose of imprinting your values, which you feel to be right, on the people you interact with. Consequentialist elucidation of reasons for approving or disapproving of a given policy (virtue) is an effective persuasion technique if your values are actually right (for the people you try to confer them on), and it doesn't engage the same parts of your brain that make moralizing undesirable.
What happens here is transfer of responsibility for important tasks from the imperfect machinery that historically used to manage them (with systematic problems in any given context that humans but not evolution can notice), to explicit reasoning.
Taking advantage of this requires including those tasks in the scope of things that can be reasoned about (instead of ignoring them as not falling into your area of expertise; for example flinching from reasoning about normative questions or intuition as "not scientific", or "not objective"), and developing enough understanding to actually do better than the original heuristics (in some cases by not ignoring what they say), making your explicit reasoning worth trusting.
This calls for identifying other examples of problematic modes of reasoning that engage crude psychological adaptations, and developing techniques for doing better (and making sure they are actually better before trusting them). These examples come to mind: rational argume |
Utility Quilting
Related: Pinpointing Utility
Let's go for lunch at the Hypothetical Diner; I have something I want to discuss with you.
We will pick our lunch from the set of possible orders, and we will receive a meal drawn from the set of possible meals, O.
Speaking in general, each possible order has an associated probability distribution over O. The Hypothetical Diner takes care to simplify your analysis; the probability distribution is trivial; you always get exactly what you ordered.
Again to simplify your lunch, the Hypothetical Diner offers only two choices on the menu: the Soup, and the Bagel.
To then complicate things so that we have something to talk about, suppose there is some set M of ways other things could be that may affect your preferences. Perhaps you have sore teeth on some days.
Suppose for the purposes of this hypothetical lunch date that you are VNM rational. Shocking, I know, but the hypothetical results are clear: you have a utility function, U. The domain of the utility function is the product of all the variables that affect your preferences (which meal, and whether your teeth are sore): U: M x O -> utility.
In our case, if your teeth are sore, you prefer the soup, as it is less painful. If your teeth are not sore, you prefer the bagel, because it is tastier:
U(sore & soup) > U(sore & bagel)
U(~sore & soup) < U(~sore & bagel)
Your global utility function can be partially applied to some m in M to get an "object-level" utility function U_m: O -> utility. Note that the restrictions of U made in this way need not have any resemblance to each other; they are completely separate.
It is convenient to think about and define these restricted "utility function patches" separately. Let's pick some units and datums so we can get concrete numbers for our utilities:
U_sore(soup) = 1 ; U_sore(bagel) = 0
U_unsore(soup) = 0 ; U_unsore(bagel) = 1
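Those patch definitions can be read literally as partial application. A quick sketch using the numbers above (the code and names are my illustration, not from the post):

```python
from functools import partial

# Global utility U: M x O -> utility, using the post's numbers.
TABLE = {
    ("sore", "soup"): 1, ("sore", "bagel"): 0,
    ("unsore", "soup"): 0, ("unsore", "bagel"): 1,
}

def utility(m: str, o: str) -> int:
    return TABLE[(m, o)]

# Partially apply to a state m to get the object-level patch U_m: O -> utility.
U_sore = partial(utility, "sore")
U_unsore = partial(utility, "unsore")

assert U_sore("soup") > U_sore("bagel")      # sore teeth: prefer the soup
assert U_unsore("bagel") > U_unsore("soup")  # otherwise: prefer the bagel
```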
Those are separate utility functions, now, so we could pick units and datum separately. Because of this, the sore
Is there a publicly available list of examples of frontier model capabilities?
Is there a list (something analogous to the excellent [list of examples of specification gaming](https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml?pli=1) by Victoria Krakovna) where examples of impressive capabilities of frontier models are compiled? And if not, can you provide your own best examples in response to this post?
*Context:*
I want to give a presentation explaining AI Safety to AI university students.
I want to mention the debate regarding whether current AI systems are "stochastic parrots", whether they "lack any real understanding", etc. As part of this, I want to clearly share examples of peak capabilities of current frontier models (e.g. something very impressive that GPT4 can consistently do), so that the audience has more information with which to explore this debate further themselves.
I think examples where it is impossible / very difficult to argue that "this is (likely) just because this was in GPT-4's training data" are best.
Currently, I am thinking to draw from the [Sparks of AGI](https://arxiv.org/abs/2303.12712) paper, the [GPT-4 technical report](https://arxiv.org/abs/2303.08774) and to make reference to '[SmartGPT](https://www.youtube.com/watch?v=wVzuvf9D9BU)' as a way of showing that greater "reasoning" abilities can be drawn from models like GPT-4, given the right prompt.
ML Safety at NeurIPS & Paradigmatic AI Safety? MLAISU W49
Watch this week’s episode on [YouTube](https://youtu.be/hcK8z1O62gk) or listen to the audio version [here](https://podcast.apartresearch.com/15).
This week, we see how to break ChatGPT, how to integrate diverse opinions in an AI and look at a bunch of the most interesting papers from the ML safety workshop happening right now!
ChatGPT jailbreaks
------------------
Last week, we reported that ChatGPT had been released along with text-davinci-003. In the first five days, it received over a million users, a rate of product growth not seen in a long time. And if that wasn’t enough, OpenAI also released WhisperV2, which presents a major improvement to voice recognition.
However, all is not safe with ChatGPT! If you have been browsing Twitter, you’ll have seen the hundreds of users who have found ways to circumvent the model’s learned safety features. Some notable examples include extracting the pre-prompt from the model, getting advice for illegal actions by making ChatGPT pretend or joke, making it give information about the web despite its wishes not to and much more. To see more about these, we recommend watching [Yannic Kilchers video](https://www.youtube.com/watch?v=0A8ljAkdFtg&ab_channel=YannicKilcher) about the topic.
[Rebecca Gorman and Stuart Armstrong](https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking) found a fun way to make the models more safe albeit also more conservative, by running the prompt through an Eliezer Yudkowsky-simulating language model prompt. You can read more about this in the description.
Responsible AGI Institutions
----------------------------
ChatGPT is released on the back of OpenAI releasing their alignment strategy which we reported on a few weeks ago. [Bensinger publishes](https://www.alignmentforum.org/posts/tD9zEiHfkvakpnNam/a-challenge-for-agi-organizations-and-a-challenge-for-1) Yudkowsky and Soares’ call for other organizations developing AGI to release similar alignment plans and commends OpenAI for releasing theirs, though they do not agree with its content.
The lead of the alignment team at OpenAI has also published a [follow-up on his blog](https://aligned.substack.com/p/alignment-optimism) about why he is optimistic about their strategy. Jan Leike has five main reasons: 1) AI seem favorable for alignment, 2) we just need to align AI strong enough to help *us* with alignment, 3) evaluation is easier than generation, 4) alignment is becoming iterable and 5) language models seem to become useful for alignment research.
Generating consensus on diverse human values
--------------------------------------------
One of the most important tasks of value alignment is to understand what “values” mean. This can be done from both a theoretical (such as shard theory) and an empirical view. In this new DeepMind paper, they train a language model to take in diverse opinions and create a consensus text.
Their model reaches a 70% acceptance rate by the opinion-holders, 5% better than a human written consensus text. See the example in [their tweet](https://twitter.com/DeepMind/status/1598293523862032385) for more context. It is generally awesome to see more empirical alignment work coming out of the big labs than earlier.
Automating interpretability
---------------------------
Redwood Research has released what they call [“causal scrubbing”](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing). It is a way to automate the relatively inefficient circuits interpretability work on for example the transformer architecture.
To use causal scrubbing, you create a causal model of how you expect different parts of a neural network to contribute to the output for a specific type of input. The causal scrubbing mechanism then automatically ablates the network in an attempt to falsify this causal model. A performance-recovery metric summarizes how much of the model's performance is retained when the parts of the network the causal claim deems “unrelated” are removed.
The Plan
--------
Wentworth releases [his update](https://www.alignmentforum.org/posts/BzYmJYECAc3xyCTt6/the-plan-2022-update) of [“The Plan”](https://www.alignmentforum.org/posts/3L46WGauGpr7nYubu/the-plan), a text he published a year ago about his view on how we might align AI. He describes a few interesting dynamics of the current field of AI safety, his own updates from 2022 and his team’s work.
Notably, multiple theoretical and empirical approaches to alignment seem to be converging on identifying which parts of neural networks model which parts of the world, such as shard theory, mechanistic interpretability and mechanistic anomaly detection.
NeurIPS ML Safety Workshop
--------------------------
Now to one of the longer parts of this newsletter. The ML Safety Workshop at the NeurIPS conference is happening today! Though the workshop has not started yet, the papers are already available! Here, we summarize a few of the most interesting results:
* How well humans recognize images correlates with how easy they are to find adversarial attacks for ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65710.png?t=1670542099.21133))
* Just like ChatGPT, the Stable diffusion safety filter is easy to circumvent, though it might be even easier, consisting only of a filtering of 17 concepts ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65592.png?t=1669469106.4581435))
* Skalse and Abate disprove the hypothesis that all goals and purposes can be thought of as maximizing some expected received scalar signal by providing examples that disprove this such as the instruction that “you should always be *able* to return to the start state” and term these tasks “modal tasks” as they have not been investigated in the literature ([paper](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65594.png?t=1669958239.9675896))
* A team found ways to detect adversarial attacks simply by looking at how the input data propagates through the model compared to the normal condition ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65625.png?t=1669281046.9976594))
* LLMs seem useful for detecting malware in programs and this project investigates how vulnerable these types of models are to adversarial attacks such as from the malware developers ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65630.png?t=1669964164.6168442))
* This new scaling law formula achieves a better regression fit than existing, overly simple scaling laws ([paper](https://arxiv.org/abs/2210.14891))
* Since the most capable AI systems will probably be continually learning and have dynamic goals, this project argues that we should focus more alignment research on what the author calls “dynamic alignment research” ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65654.png?t=1669905787.5300798))
* Korkmaz finds that inverse reinforcement learning is less robust than vanilla reinforcement learning and investigates this in-depth ([OpenReview](https://openreview.net/pdf?id=3L9qPqkBJrq))
* We covered this paper before but here, they define the sub-types of out-of-distribution that represent a more specific ontology of OOD ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65657.png?t=1669819785.7852528))
* In a similar vein, this work looks at the difference between out-of-model-scope and out-of-distribution. Out-of-distribution is when examples are outside the training data while out-of-model-scope is when the model cannot understand the input, something it can sometimes do despite the example being out-of-distribution ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65705.png?t=1669886203.3408606))
* This project looks at organizations, nation-states and individuals to discern a model for multi-level AI alignment and use a case study of multi-level content policy alignment on a country-, company- and individual level ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65665.png?t=1669944550.4811175))
* And from our very own Fazl Barez, we have a project that looks into how we can integrate safety-critical symbolic constraints into the reward model of reinforcement learning systems ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65675.png?t=1670190955.4044285))
* These authors find a circuit for indirect object identification in a transformer with name mover transformer heads ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65681.png?t=1670539757.7554123))
* Debate is shown not to help humans answer questions better, which pours cold water on debate as an open-ended alignment strategy, though this goes quite a bit deeper as well ([poster](https://nips.cc/media/PosterPDFs/NeurIPS%202022/65678.png?t=1669867046.8590875))
* Feature visualization is quite important for our interpretability work and this paper finds a way where a network can be adversarially modulated to circumvent feature visualization, something that might become relevant if an AGI attempts to deceive its creators ([paper](https://openreview.net/pdf?id=J51K0rszIjr))
Opportunities:
--------------
This week, we have a few very interesting opportunities available:
* Our Christmas edition Alignment Jam about AI Testing is happening next week and you can win up to $1,000! Check it out on the Alignment Jam website: <https://ais.pub/alignmentjam>
* The London-based independent alignment research organization Conjecture is searching for engineers, research scientists, and operations personnel: <https://ais.pub/conjecturejobs>.
* Additionally, they’re constantly open to what they call “unusual talent”, something you might meet the prerequisites for! <https://ais.pub/conjecture-unusual-talent>
* If you’re interested in the Spanish-speaking AI safety and EA community, we highly encourage you to join the EAGx Latin America conference in Mexico in January. If you don’t feel comfortable spending the money for the trip, you can quite easily seek financial support for the conference: <https://ais.pub/eagx-latam>
* The Survival and Flourishing Fund has doubled its speculative grants funding to accommodate the decrease in funding from FTX, and you’re welcome to apply: <https://ais.pub/sff>
This has been the ML & AI safety update. We look forward to seeing you next week!
LessWrong | Calibration for continuous quantities
Related to: Calibration fail, Test Your Calibration!
Around here, calibration is mostly approached on a discrete basis: for example, the Technical Explanation of Technical Explanations talks only about discrete distributions, and the commonly linked tests and surveys are either explicitly discrete or offer only coarsely binned probability assessments. For continuous distributions (or "smooth" distributions over discrete quantities like dates of historical events, dollar amounts on the order of hundreds of thousands, populations of countries, or any actual measurement of a continuous quantity), we can apply a finer-grained assessment of calibration.
The problem of assessing calibration for continuous quantities is that our distributions can have very dissimilar shapes, so there doesn't seem to be a common basis for comparing one to another. As an example, I'll give some subjective (i.e., withdrawn from my nether regions) distributions for the populations of two countries, Canada and Botswana. I live in Canada, so I have years of dimly remembered geography classes in elementary school and high school to inform my guess. In the case of Botswana, I have only my impressions of the nation from Alexander McCall Smith's excellent No. 1 Ladies' Detective Agency series and my general knowledge of Africa.
For Canada's population, I'll set my distribution to be a normal distribution centered at 32 million with a standard deviation of 2 million. For Botswana's population, my initial gut feeling is that it is a nation of about 2 million people. I'll put 50% of my probability mass between 1 and 2 million, and the other 50% of my probability mass between 2 million and 10 million. Because I think that values closer to 2 million are more plausible than values at the extremes, I'll make each chunk of 50% mass a right-angle triangular distribution. Here are plots of the probability densities:
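For concreteness, these subjective densities can be written out directly. The sketch below is my own rendering of the shapes described above (a normal for Canada, two right-angle triangles for Botswana); the triangular normalizations are implied by the 50% mass assignments rather than stated explicitly, so treat the exact formulas as an interpretation.

```python
import math

def canada_pdf(x):
    """Normal density with mean 32 million, sd 2 million (people)."""
    mu, sd = 32e6, 2e6
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def botswana_pdf(x):
    """Two right-angle triangles, 50% of the mass each:
    rising on [1M, 2M], falling on [2M, 10M]."""
    lo, peak, hi = 1e6, 2e6, 10e6
    if lo <= x < peak:
        return (x - lo) / (peak - lo) ** 2    # 2 * 0.5 * (x-lo)/(peak-lo)^2
    if peak <= x <= hi:
        return (hi - x) / (hi - peak) ** 2
    return 0.0

def integrate(f, a, b, n=100_000):
    """Trapezoid-rule sanity check that each density integrates to ~1."""
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + (f(a) + f(b)) / 2)

print(round(integrate(botswana_pdf, 0, 12e6), 3))    # 1.0
print(round(integrate(canada_pdf, 20e6, 44e6), 3))   # 1.0 (within +/- 6 sd)
```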
(These distributions are pretty rough approximations to my highest quality assessme
LessWrong | How do I know if my first post should be a post, or a question?
Hello!
My name is Nathan, which you could divine from my username but I figured it would start off on the right foot to greet using conventional social norms.
I consider myself a novice in decision theory philosophy and ethical AI, so I am intimidated by the high standard of quality writing and research in these posts. It feels more like people are submitting to a scientific periodical than an online forum. While I may never thoroughly read through Eliezer's books, I have been trying to browse the top posts of the site in order to understand the basic fundamentals of the community.
As for me personally, I have been working a few years as a Software Engineer, with a masters degree in Computer Science. In the last year I have also been studying a lot in AI algorithms, and recently obtained a certification in Deep Reinforcement Learning. I also have a certain love of philosophy, although most books I have read are pre-modern philosophy of the Neo-Platonic or Scholastic variety.
I stumbled across this site just in the last few days, after following a rabbit hole of sensationalist news articles surrounding the hysteria on UAI and the Basilisk. But after seeing how things work around here, I am more fascinated by the concepts of ethical AI and the need to model human moral values mathematically. (From what I could tell, it seems that the Basilisk has run its course a while ago and is no longer talked about much, anyway). Personally, I hold a lot of doubt that a sentient, superintelligent AI will ever be physically possible, but regardless I still understand how the discussions of ethical AI are extremely relevant.
For a while, I have had a lot of thoughts about this topic, which, if I were to post them, would fill out at least a moderately-sized article of a few pages. That being said, it would be mainly ideas from my own personal knowledge and not rigorous academic research. Would that be appropriate as a post? Or would it be best to keep such personal musings out
LessWrong | Useful Concepts Repository
See also: Boring Advice Repository, Solved Problems Repository, Grad Student Advice Repository
I often find that my understanding of the world is strongly informed by a few key concepts. For example, I've repeatedly found the concept of opportunity cost to be a useful frame. My previous post on privileging the question is in some sense about the opportunity cost of paying attention to certain kinds of questions (namely that you don't get to use that attention on other kinds of questions). Efficient charity can also be thought of in terms of the opportunity cost of donating inefficiently to charity. I've also found the concept of incentive structure very useful for thinking about the behavior of groups of people in aggregate (see perverse incentive).
I'd like people to use this thread to post examples of concepts they've found particularly useful for understanding the world. I'm personally more interested in concepts that don't come from the Sequences, but comments describing a concept from the Sequences and explaining why you've found it useful may help people new to the Sequences. ("Useful" should be interpreted broadly: a concept specific to a particular field might be useful more generally as a metaphor.)
LessWrong | Megaproject management
Megaproject management is a new-ish subfield of project management. Originally considered to be the special case of project management where the budgets were enormous (billions of dollars), it is developing into a separate specialization because of the high complexity and tradition of failure among such projects. The driving force behind treating it as a separate field appears to be Bent Flyvbjerg, previously known around here for Reference Class Forecasting as the first person to develop an applied procedure. That procedure was motivated by megaprojects.
I will make a summary of the paper "What you should know about megaprojects, and why: an overview" from 2014. For casual reading, there is an article about it from the New Yorker here.
History
Megaprojects got their name from the association of mega with big, so think mega-city rather than mega-joule. It did match the unit prefix in the beginning however, as such projects were mostly dams, bridges, or very large buildings in the early 20th century.
The next shift upward took place with the Manhattan Project and then the Apollo program, which are also frequently drawn on as positive examples. The term 'megaproject' picked up steam in the 1970s, at the same time project costs crossed over into the billions.
Currently project costs of 50-100 billion are common, with even larger projects less common but not rare. If you were to view certain things which need dedicated management as a project, like the stimulus packages from 2008 or US defense procurement, then we have crossed over into the trillions and are entering a 'tera era' of megaprojects.
Ignoring these special cases, but counting infrastructure and industries where billion dollar projects are common, megaprojects account for ~8% of global GDP.
Four Sublimes
These are four reasons which drive the popularity of megaprojects. They are kind of a group bias for each type of stakeholder. They are:
* Technological sublime: because engineers and technologists
Effective Altruism Forum | Should AI focus on problem-solving or strategic planning? Why not both?
The Effective Altruism movement seems to be mainly concerned with 2 broad objectives: One is a focus on reducing suffering and alleviating risks, the other is a focus on doing the most good and improving the world.
A big concern is AI alignment, as it potentially poses an existential risk. But so do many other very serious non-AI related problems. How do we ensure AI and our laws and policies do the most good and reduce harm, without causing actually more problems?
In the pragmatic reality of these topics, I have found that there are two closely related fields that are often described with different words.
They are strategic planning and problem solving.
Problem solving is defined as the process of achieving a goal by overcoming obstacles and alleviating risks.
Strategic planning is defined as directives and decisions to allocate resources to attain goals.
To me, they both seem to be different sides of the same coin.
So, my question for the community is this: Why don't we instruct AI to maximize the fulfillment of human goals & values while minimizing problems, suffering, and negative consequences?
Would that do the most good or are there any problems / obstacles with this strategy?
Edit:
I will be using this post to collect a list of my ideas about the subject, each as a short form post which I will link to, right here:
* How to find a solution for EVERY problem
* How to measure the GOODNESS of a solution
* [How to encode human values on a computer](https://forum.effectivealtruism.org/posts/FnviTNXcjG2zaYXQY/how-to-store-human-values-on-a-computer)
* Preventing instrumental convergence with divergent thinking
Alignment Forum | How to Throw Away Information
[Probability as Minimal Map](https://www.lesswrong.com/posts/Lz2nCYnBeaZyS68Xb/probability-as-minimal-map) argued that the probability
P[q|X] is a minimal representation of the information in data X which is relevant to the query q. In other words, P[q|X] is a perfectly efficient map of some territory based on data X, suitable for the query q. More generally, a full probability distribution P[Q|X] is a perfectly efficient map suitable for any queries about the random variable Q. Bayesian probability tells how to update our map as we gain new information.
What if, for some reason, we wanted to instead throw away some particular information? We still want to represent our map using a probability distribution, but rather than adding information to it (via Bayes’ Rule), we want to remove some information.
Let’s start with an artificial but concrete example.
Coin-Sum Problem
----------------
I flip two coins, and observe both outcomes - either 0 or 1 for each. I want to keep as much information as possible, while throwing away all information from the observation relevant to their sum. How should I “update” my distribution over the outcomes?
We’ll write the outcome of our coinflip as B=(B1,B2) (“B” for “bit” or “binary” or “[Bernoulli](https://en.wikipedia.org/wiki/Bernoulli_distribution)”), and our final information-removed distribution as P[B|B,/∑B] - so the notation /∑B indicates that we should throw out all info about the sum (the “/” is meant to evoke e.g. a [group quotient](https://en.wikipedia.org/wiki/Quotient_group) operation). Note that this kind of “remove information” operation does not follow the usual rules of Bayesian probability.
At first, I reason as follows:
* My final distribution is a function of the outcomes I saw, so my overall strategy should specify four different distributions - one each for (0, 0), (1, 0), (0, 1), and (1, 1). E.g. if I see (0, 0), then I will update to the (0, 0) distribution P[B|B=(0,0),/∑B]
* The final distribution for (0, 0) and (1, 1) must be the same, otherwise we could distinguish a sum of 0 from a sum of 2 by looking at which final distribution was chosen.
* … but then the final distributions for (0, 1) and (1, 0) must also be the same as (0, 0), else we could tell that the sum is 1 whenever we see some other distribution.
… so in order to throw out all information about the sum, we have to throw out all the information from the observations. We effectively don’t update at all, and just keep our prior.
This seems kind of weird from an information-theoretic standpoint. We have 2 bits of information from the observation, and the sum only contains −(1/4 log(1/4) + 1/2 log(1/2) + 1/4 log(1/4)) = 3/2 bits. It seems like we ought to be able to keep around 1/2 bit of information, somehow.
The trick turns out to be right at the start of the above reasoning: “my final distribution is a function of the outcomes I saw”. This quietly assumed that our “update” had to be *deterministic*. We can do better if we allow randomized updates - e.g. if I see (0, 0), then I randomly choose one of several distributions to update to.
Here’s an example of a randomized strategy:
* (0, 1) and (1, 0) have the same sum, so I’d like to be able to distinguish them with certainty. I’ll make P[B|B=(0,1),/∑B] and P[B|B=(1,0),/∑B] each deterministic, and assume they’re not equal.
* In order to be unable to distinguish between sum 0 and sum 1, when the outcome is (0, 0) I’m forced to randomly choose between P[B|B=(0,1),/∑B] and P[B|B=(1,0),/∑B], 50% chance of each. So, P[B|B=(0,0),/∑B] = 50% chance P[B|B=(0,1),/∑B], 50% chance P[B|B=(1,0),/∑B].
* Same reasoning applies to (1, 1) as (0, 0).
* In order for the final distributions to accurately represent our final information, they must be P[B|B=(0,1),/∑B] = {(0,0): 1/4, (0,1): 1/2, (1,1): 1/4} and P[B|B=(1,0),/∑B] = {(0,0): 1/4, (1,0): 1/2, (1,1): 1/4}.
Half the time (when the sum is 1) this strategy conveys 1 bit of information (whether the first or second coin is the 1), so overall we keep 1/2 bit - exactly the information-theoretic limit.
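This accounting can be checked by brute force. The sketch below (my own verification, not code from the post) enumerates the joint distribution of (B, S) under the randomized strategy above and confirms that S has zero mutual information with the sum while retaining exactly 1/2 bit about B.

```python
from itertools import product
from math import log2

def mutual_info(pairs):
    """I(U;V) in bits from a dict {(u, v): p}."""
    pu, pv = {}, {}
    for (u, v), p in pairs.items():
        pu[u] = pu.get(u, 0) + p
        pv[v] = pv.get(v, 0) + p
    return sum(p * log2(p / (pu[u] * pv[v]))
               for (u, v), p in pairs.items() if p > 0)

# joint distribution P[B, S] under the randomized strategy
joint = {}
for b in product((0, 1), repeat=2):
    for s in (0, 1):
        if b == (0, 1):
            p_s = 1.0 if s == 0 else 0.0   # deterministic
        elif b == (1, 0):
            p_s = 1.0 if s == 1 else 0.0   # deterministic
        else:
            p_s = 0.5                      # fair coin for (0,0) and (1,1)
        joint[(b, s)] = 0.25 * p_s         # uniform prior on B

# marginalize B down to its sum
sum_joint = {}
for (b, s), p in joint.items():
    key = (b[0] + b[1], s)
    sum_joint[key] = sum_joint.get(key, 0) + p

print(mutual_info(sum_joint))   # 0.0: S says nothing about the sum
print(mutual_info(joint))       # 0.5: S keeps exactly 1/2 bit about B
```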
Generalizing
------------
The general form of the problem:
* We have some random variables X and Y. We observe X, and want to throw out all information about Y.
* To do that, we construct some new variable S (which may depend on X, Y, and some randomness), which contains as much information as possible about X but no information about Y.
* We then throw away X and keep S, effectively “updating” our probabilities from P[X|X] to P[X|S]=P[X|X,/Y].
In the coin example above, we can encode S as: 0 when B is (0,1), 1 when B is (1,0), otherwise a random 50/50 choice between 0 and 1. Calculating P[B|S] then yields the distributions from the previous section. In this case, we can interpret S as the value of coin 1 *assuming* the sum was 1 - otherwise it's random.
Two obvious questions are existence and uniqueness:
* Is it always possible to find some S which achieves the information-theoretic bound, i.e. I(X;S)+I(X;Y)=H(X) (where I is mutual information and H is entropy)?
* Is S unique?
Uniqueness we can answer easily: no, the “optimal” reduced information S is not unique. Counterexample: flip two coins, and throw out all info about whether the sum is odd or even. We have 2 bits of info total, we have to throw out 1 bit of info, and we can keep around 1 bit in (at least) two different ways: just keep the outcome of the first coin, or just keep the outcome of the second coin. Either of these, by itself, contains 1 bit of info and tells us nothing about whether the sum is odd or even.
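The parity counterexample is easy to verify directly. Here's a short check (a sketch I've added, not from the original post) that keeping only the first coin retains 1 bit about the outcome while carrying no information about whether the sum is odd or even:

```python
from math import log2
from collections import Counter

# Two fair coins; each of the four outcomes has probability 1/4.
outcomes = [(0, 0), (0, 1), (1, 0), (1, 1)]

def mutual_info(f, g):
    """I(f(X); g(X)) in bits under the uniform distribution on outcomes."""
    n = len(outcomes)
    pj = Counter((f(x), g(x)) for x in outcomes)
    pf = Counter(f(x) for x in outcomes)
    pg = Counter(g(x) for x in outcomes)
    return sum(c/n * log2((c/n) / ((pf[u]/n) * (pg[v]/n)))
               for (u, v), c in pj.items())

coin1  = lambda x: x[0]
parity = lambda x: sum(x) % 2

print(mutual_info(coin1, parity))        # 0.0 - no parity info
print(mutual_info(coin1, lambda x: x))   # 1.0 - one bit of X kept
```

By symmetry the same holds for the second coin, so the optimal reduced information is not unique.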
Existence is trickier.
If we observe both X and Y, then we can definitely construct S, using [cousin\_it](https://www.lesswrong.com/users/cousin_it)’s method from [this comment](https://www.lesswrong.com/posts/hLFD6qSN9MmQxKjG5/embedded-agency-via-abstraction?commentId=Lsmu8oQ6QGwq3xt4o#Lsmu8oQ6QGwq3xt4o): for each possible value of Y, S contains an X-value randomly sampled from P[X|Y] - except for the observed Y-value, which maps to the observed X-value. For instance, if we observed (1, 0) in our coin-sum problem, then one possible value of S would be {0:(0,0),1:(1,0),2:(1,1)} - so S tells us that *if* the sum Y=1, *then* X = (1, 0). S tells us nothing at all about Y, but if we later re-learn the value of Y, then we can just look up the value of X in S. This implies that S achieves the information-theoretic bound: S and Y together are sufficient to reconstruct X.
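This construction is straightforward to sketch in code. The following is my illustrative Python rendering for the coin-sum example (the helper names are mine, not cousin_it's): S is a lookup table from each possible sum to an X-value sampled from P[X|Y], except the observed sum, which maps to the observed outcome.

```python
import random

# P[X | Y = y] for the coin-sum example: the possible (coin1, coin2)
# outcomes consistent with each sum, each equally likely.
P_X_given_Y = {
    0: [(0, 0)],
    1: [(0, 1), (1, 0)],   # each with probability 1/2
    2: [(1, 1)],
}

def make_S(x_observed):
    """Build S: maps every possible Y-value to an X-value sampled from
    P[X|Y], except the observed Y-value, which maps to the observed X."""
    y_observed = sum(x_observed)
    return {y: (x_observed if y == y_observed else random.choice(xs))
            for y, xs in P_X_given_Y.items()}

S = make_S((1, 0))
print(S[1])  # (1, 0): if we later learn Y = 1, S recovers X exactly
```

Marginalizing over the randomness, S is distributed identically no matter which Y occurred, so it leaks nothing about Y; but together with Y it reconstructs X.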
However, we may want to throw out whatever information we have about some Y, even when we don’t have *complete* information about Y. In other words, what we really want is to construct S *without* knowing Y - i.e. construct S from X alone. I still don’t know whether the information-theoretic bound is always achievable in this case, or what the optimal information content is if the bound isn’t achievable.
Why Is This Interesting?
------------------------
A lot of tricky problems in decision theory feel like the agent should choose to throw away information. [Chris Leong argues that](https://www.lesswrong.com/posts/BRuWm4GxcTNPn4XDX/deconfusing-logical-counterfactuals) this is a useful way to think about logical counterfactuals in general, and I’ve heard other people express similar intuitions. The approach in this post offers a way to formalize and quantify “forgetting”, in a form potentially useful for these kinds of applications.
Along similar lines: the sort of randomization used in game theory [feels different](https://www.lesswrong.com/posts/FvcyMMaJKhYibtFDD/bayesian-probability-is-for-things-that-are-space-like) from the sort of “randomness” involved in uncertainty. The former is utilitarian, and used to confuse external opponents. The latter is epistemic, and only relevant to internal beliefs. The randomization involved in throwing away information feels similar to game-theoretic randomness, yet it’s in the sort of informational framework usually used for reasoning about uncertainty - suggesting that these two types of randomness could be handled in a more unified conceptual framework.
For example: suppose agent 1 and agent 2 play a zero-sum game. Agent 2 chooses its action by running a copy of agent 1’s source code, then outputting whatever action beats the copy’s action. What should agent 1 do? One possible approach is for agent 1 to throw away any information about its copy’s action which is contained in its own action - so that the copy’s behavior yields no information about agent 1’s behavior. Randomization then naturally enters the picture, due to throwing away information. (This doesn’t seem like quite the right decision-making rule in general, but it feels like something similar could work - the main idea is to make “throw away information” an action in the game itself, and push game-theoretic randomization of choices into the info-throw-away action.) One could imagine re-deriving Nash equilibria via agents throwing away information as an action, without any other source of randomness. Whether that can actually work is another open problem.
Both or Nothing
A response to: Self-Integrity and the Drowning Child
On Internal Integrity
Eliezer criticises Peter Singer's The Child in the Pond thought experiment on the basis that it is an "outside assault on your internal integrity". He explains that it was designed to "let your altruistic part hammer down the selfish part... in a way that would leave it feeling small and injured and unable to speak in its own defense."
There is a lot of truth to this framing. However, one critique I have of this is that we cannot talk about the proper way to resolve conflicts between values in the abstract, but only in relation to particular meta-values (by which I mean the values that we use to resolve conflicts between values).
Perhaps there are some people whose meta-values are such that letting one part hammer down another part is the true expression of themselves, insofar as it makes sense to think of people as having a true self; and insofar as it doesn't make sense, the critique of sacrificing one's internal integrity doesn't make sense either.
As an example, there are many people who would prefer pain and suffering over mediocrity; who desire greatness and who are willing to forge themselves into a kind of metaphorical weapon. These people are exceedingly rare and hence special. They are deserving of the utmost praise and recognition.
I don't think Eliezer has any kind of basis for saying that these people are mistaken and should adopt his meta-values.
On the other hand, I think he raises an important point. I suspect that the vast majority of people, myself included, do in fact share Eliezer's meta-value of not wanting one of our parts to hammer down another of our parts. Further, I agree with his contention that for most of us we have selfish and unselfish parts rather than these desires being completely fungible at some fixed rate.
There are many ways in which we could model this, but the way I tend to think of this is to imagine people as being less willing to sacrifice some utility
Natural Selection vs Gradient Descent
Why is it so often that analogies are drawn between natural selection and gradient descent in a machine learning context? They are both optimizing over a fitness function, but isn't there an important difference in what they are optimizing over?
Natural selection is broadly optimizing over the architecture, the initial parameters of the architecture, and the learning dynamics (how one updates the parameters of the architecture given data). This process led to the architecture of the brain and to learning methods like STDP, under which the parameters of the architecture are the neurons of the brain.
Isn't gradient descent instead what we pick to be the learning dynamics, where we then pick our architecture (e.g. transformer) and initial parameters (e.g. Xavier initialization), so actually it makes more sense to draw an analogy between gradient descent and the optimizer learnt by natural selection (STDP, etc.), as opposed to natural selection itself?
Though natural selection is a simple optimization process, the optimizer (learning dynamics) learnt by this process could be very complex, and so reasoning like 'natural selection is simple so maybe the simplicity of gradient descent is sufficient' is not very strong?
Why do AGI researchers expect AI so soon?
*By Katja Grace, 24 May 2015*
People have been predicting when human-level AI will appear for many decades. A few years ago, MIRI [made](http://lesswrong.com/lw/e79/ai_timeline_prediction_data/) a big, organized collection of such predictions, along with helpful metadata. We are grateful, and just put up a [page about this dataset](http://aiimpacts.org/miri-ai-predictions-dataset/ "MIRI AI Predictions Dataset"), including some analysis. Some of you saw an earlier version of this on an earlier version of our site.
There are lots of interesting things to say about the collected predictions. One interesting thing you might say is ‘wow, the median predictor thinks human-level AI will arrive in the 2030s—that’s kind of alarmingly soon’. While this is true, another interesting thing is that different groups have fairly different predictions. This means the overall median date is especially sensitive to who is in the sample.
In this particular dataset, who is in the sample depends a lot on who bothers to make public predictions. And [another interesting fact](http://aiimpacts.org/ai-timeline-predictions-in-surveys-and-statements/) is that people who bother to make public predictions have shorter AI timelines than people who are surveyed more randomly. This means the predictions you see here are probably biased in the somewhat early direction. We’ll talk about that another time. For now, I’d like to show you some of the interesting [differences between groups of people](http://aiimpacts.org/group-differences-in-ai-predictions/ "Group Differences in AI Predictions").
We divided the people who made predictions into those in AI, those in [AGI](http://en.wikipedia.org/wiki/Artificial_general_intelligence), futurists and others. This was a quick and imprecise procedure mostly based on Paul’s knowledge of the fields and the people, and some Googling. Paul doesn’t think he looked at the prediction dates before categorizing, though he probably basically knew some already. For each person in the dataset, we also interpreted their statement as a loose claim about when human-level AI was less likely than not to have arrived and when it was more likely than not to have arrived.
Below is what some of the different groups’ predictions look like, for predictions made since 2000. At each date, the line shows what fraction of predictors in that group think AI will already have happened by then, more likely than not. Note that they may also think AI will have happened before then: statements were not necessarily about the first year on which AI would arrive.
**Figure 1:** Cumulative distributions of predictions made since 2000 by different groups of people ([full-size image](http://aiimpacts.org/wp-content/uploads/2015/05/groupsAIpredictions.png))
The groups’ predictions look pretty different, and mostly in ways you might expect: futurists and AGI researchers are more optimistic than other AI researchers, who are more optimistic than ‘others’. The median years given by different groups span seventy years, though this is mostly due to ‘other’, which is a small group. Medians for AI and AGI are eighteen years apart.
The ‘futurist’ and ‘other’ categories are twelve people together, and the line between being a futurist and merely pronouncing on the future sometimes seems blurry. It is interesting that the futurists here look very different from the ‘others’, but I wouldn’t read that much into it. It may just be that Paul’s perception of who is a futurist depends on degree of confidence about futuristic technology.
Most of the predictors are in the AI or AGI categories. These groups have markedly different expectations. About 85% of AGI researchers are more optimistic than the median AI researcher. This is particularly important because ‘expert predictions’ about AI usually come from some combination of AI and AGI researchers, and it looks like what the combination is may alter the median date by around two decades.
Why would AGI researchers be systematically more optimistic than other AI researchers? There are perhaps too many plausible explanations for the discrepancy.
Maybe AGI researchers are—like many—overoptimistic about their own project. Planning fallacy is ubiquitous, and planning fallacy about building AGI naturally shortens overall AGI timelines.
Another possibility is expertise: perhaps human-level AI really will arrive soon, and the AGI researchers are close enough to the action to see this, while it takes time for the information to percolate to others. The AI researchers are also somewhat informed, so their predictions are partway between those of the AGI researchers, and those of the public.
Another reason is selection bias. AI researchers who are more optimistic about AGI will tend to enter the subfield of AGI more often than those who think human-level AI is a long way off. Naturally then, AGI researchers will always be more optimistic about AGI than AI researchers are, even if they are all reasonable and equally well informed. It seems hard to imagine some of the effect not being caused by this.
It matters which explanations are true: expertise means we should listen to AGI researchers above others. Planning fallacy and selection bias suggest we should not listen to them so much, or at least not directly. If we want to listen to them in those cases, we might want to make different adjustments to account for biases.
How can we tell which explanations are true? The shapes of the curves could give some evidence. What would we expect the curves to look like if the different explanations were true? Planning fallacy might look like the entire AI curve being shifted fractionally to the left to produce the AGI curve – e.g. so all of the times are halved. Selection bias would make the AGI curve look like the bottom of the AI curve, or the AI curve with its earlier parts heavily weighted. Expertise could look like dates that everyone in the know just doesn’t predict. Or the predictions might just form a narrower, more accurate, band. In fact all of these would lead to pretty similar looking graphs, and seem to roughly fit the data. So I don’t think we can infer much this way.
Do you favor any of the hypotheses I mentioned? Or others? How do you distinguish between them?
---
*Our page about demographic differences in AI predictions is [here](http://aiimpacts.org/group-differences-in-ai-predictions/ "Group Differences in AI Predictions").*
*Our page about the MIRI AI predictions dataset is [here](http://aiimpacts.org/miri-ai-predictions-dataset/ "MIRI AI Predictions Dataset").*
Results from an Adversarial Collaboration on AI Risk (FRI)
Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1]
Today, the Forecasting Research Institute (FRI) released “Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial Collaboration,” which discusses the results of an adversarial collaboration focused on forecasting risks from AI.
In this post, we provide a brief overview of the methods, findings, and directions for further research. For much more analysis and discussion, see the full report: https://forecastingresearch.org/s/AIcollaboration.pdf
(This report is cross-posted to the EA Forum.)
Abstract
We brought together generalist forecasters and domain experts (n=22) who disagreed about the risk AI poses to humanity in the next century. The “concerned” participants (all of whom were domain experts) predicted a 20% chance of an AI-caused existential catastrophe by 2100, while the “skeptical” group (mainly “superforecasters”) predicted a 0.12% chance. Participants worked together to find the strongest near-term cruxes: forecasting questions resolving by 2030 that would lead to the largest change in their beliefs (in expectation) about the risk of existential catastrophe by 2100. Neither the concerned nor the skeptics substantially updated toward the other’s views during our study, though one of the top short-term cruxes we identified is expected to close the gap in beliefs about AI existential catastrophe by about 5%: approximately 1 percentage point out of the roughly 20 percentage point gap in existential catastrophe forecasts. We find greater agreement about a broader set of risks from AI over the next thousand years: the two groups gave median forecasts of 30% (skeptics) and 40% (concerned) that AI will have severe negative effects on humanity by causing major declines in population, very low self-reported well-being, or extinction.
Extended Executive Summary
In July 2023, we released our Existentia
Causal scrubbing: Appendix
*\* Authors sorted alphabetically.*
*An appendix to* [*this post*](https://www.lesswrong.com/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-redwood-research)*.*
1 More on Hypotheses
====================
1.1 Example behaviors
---------------------
As mentioned above, our method allows us to explain quantitatively measured model behavior operationalized as the expectation of a function f.
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
on a distribution D.
Note that no part of our method distinguishes between the part of the input or computational graph that belongs to the “model” vs the “metric.”[[1]](#fny49t05abovh)
It turns out that you can phrase a lot of mechanistic interpretability in this way. For example, here are some results obtained from attempting to explain how a model has low loss:
* Nanda and Lieberum’s [analysis of the structure of a model that does modular addition](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking) explains the observation that their model gets low loss on the validation dataset.
* The [indirect object identification circuit](https://arxiv.org/abs/2211.00593) explains the observation that the model gets low loss on the indirect object identification task, as measured on a synthetic distribution.
* Induction circuits (as described in [Elhage et al. 2021](https://transformer-circuits.pub/2021/framework/index.html%23induction-heads)) explain the observation that the model gets low loss when predicting tokens that follow the heuristic: “if AB has occurred before, then A is likely to be followed by B”.
That being said, you can set up experiments using other metrics besides loss as well:
* Cammarata et al. identify [curve detectors in the Inception vision model](https://distill.pub/2020/circuits/curve-detectors/) by using the response of various filters on synthetic datasets to explain the correlation between: 1) the activation strength of some neuron, and 2) whether the orientation of an input curve is close to a reference angle.
1.2 Extensional equality, and common rewrites of G and I
--------------------------------------------------------
If you’re trying to explain the expectation of f, we always consider it a valid move to suggest an alternative function f′ if f(x)=f′(x) on every input ([“extensional equality”](https://en.wikipedia.org/wiki/Extensionality)), and then explain f′ instead. In particular, we’ll often start with our model’s computational graph and a simple interpretation, and then perform “algebraic rewrites” on both graphs to naturally specify the correspondence.
Common rewrites include:
* When the output of a single component of the model is used in different ways by different paths, we’ll duplicate that node in G, such that each copy can correspond to a different part of I.
* When multiple components of the model compute a single feature we can either:
* duplicate the node in I, to sample the components *separately*; or
* combine the nodes of G into a single node, to sample the components *together.*
* Sometimes, we want to test claims of the form “this subspace of the activation contains the feature of interest”. We can express this by rewriting the output as a sum of the activation projected into subspace and the orthogonal component. We can then propose that only the projected subspace encodes the feature.
* An even more complicated example is when we want to test a theorized function ϕ that maps from an input to a predicted activation of a component. We can then rewrite the output as the sum of two terms: ϕ(input) and the residual (the error of the estimate), and then claim only the phi term contains important information. If your estimate is bad, the error term will be large in important ways. This is especially useful to test hypotheses about scalar quantities (instead of categorical ones).[[2]](#fn3uuhfwnx8md)
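To make the subspace rewrite concrete, here is a minimal numpy sketch (our own illustrative code, not from the original post) of splitting an activation into a projected component and an orthogonal residual:

```python
import numpy as np

def split_by_subspace(activation, subspace_basis):
    """Rewrite an activation as (projection onto subspace) + (orthogonal residual).

    activation: shape (d,); subspace_basis: shape (k, d) with orthonormal rows.
    The two returned terms sum exactly to the original activation, so the
    rewritten graph is extensionally equal to the original one.
    """
    coords = subspace_basis @ activation      # coordinates within the subspace
    projected = subspace_basis.T @ coords     # component inside the subspace
    residual = activation - projected         # orthogonal complement
    return projected, residual

# Example: claim that only the first axis of a 3-d activation carries the feature.
act = np.array([1.0, 2.0, 3.0])
basis = np.array([[1.0, 0.0, 0.0]])
proj, res = split_by_subspace(act, basis)
assert np.allclose(proj + res, act)   # extensional equality of the rewrite
```

Because the two terms sum exactly to the original activation, the hypothesis can then claim that only `projected` encodes the feature, and the residual is unimportant.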
Note that there are many trivial or unenlightening algebraic rewrites. For example, you could always replace f’ with a lookup table of f, and in cases where the model performs perfectly, you can also replace f with the constant zero function. Causal scrubbing is *not* intended to generate mechanistic interpretations or ensure that only mechanistic interpretations are allowed, but instead to check that a given interpretation is faithful. We discuss this more in the [limitations](https://www.lesswrong.com/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-redwood-research#7_Limitations) section of the main post.
1.3 Interpretations at multiple levels of specificity
-----------------------------------------------------
We allow hypotheses at a wide variety of levels of specificity. For example, here are two potential interpretations of the same f:
These interpretations correspond to the same input-output mappings, but the hypothesis on the right is more specific, because it's saying that there are three separate nodes in the graph expressing this computation instead of one. So when we construct G2 to correspond to I2 we would need three different activations that we claim are important in different ways, instead of just one for G1 mapping to I1. In interpretability, we all-else-equal prefer more specific explanations, but defining that is out of scope here–we’re just trying to provide a way of looking at the predictions made by hypotheses, rather than expressing any a priori preference over them.
2 What metric should causal scrubbing use?
==========================================
2.1 “Percentage of loss recovered” as a measure of hypothesis quality
---------------------------------------------------------------------
In both of these results posts, in order to measure the similarity between the scrubbed and unscrubbed models, we use *% loss recovered*.
As a baseline we use Erandomized, the ‘randomized loss’, defined as the loss when we shuffle the connection between the correct labels and the model’s output. Note this randomized loss will be higher than the loss for a calibrated guess with no information. We use randomized loss as the baseline since we are interested in explaining why the model makes the guesses it makes. If we had no idea, we could propose the trivial correspondence that the model’s inputs and outputs are unrelated, for which Escrubbed=Erandomized.
Thus we define:
% loss recovered(Emodel, Escrubbed) = (Escrubbed − Erandomized) / (Emodel − Erandomized) ⋅ 100%.
This percentage can exceed 100% or be negative. It is not very meaningful as a fraction, and is rather an arithmetic aid for comparing the magnitude of expected losses under various distributions. However, it is the case that hypotheses with a “% loss recovered” closer to 100% result in predictions that are more consistent with the model.
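As a small executable sketch of the formula (variable names are ours):

```python
def pct_loss_recovered(e_model, e_scrubbed, e_randomized):
    """% loss recovered: 100% when the scrubbed loss matches the model's loss,
    0% when it is no better than the randomized-labels baseline.
    As noted above, it can exceed 100% or go negative."""
    return (e_scrubbed - e_randomized) / (e_model - e_randomized) * 100.0

# If the model's loss is 0.5, the randomized-labels loss is 2.0, and the
# scrubbed model achieves 0.8, the hypothesis recovers 80% of the loss:
assert abs(pct_loss_recovered(0.5, 0.8, 2.0) - 80.0) < 1e-6
```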
2.2 Why not compare the full distribution, rather than expectations?
--------------------------------------------------------------------
Above, we rate our hypotheses using the distance between the expectation under the dataset and the scrubbed distribution, |E[f(x)]−Escrubbed(h,D)|.[[3]](#fnt16j7a80hgn)
You could instead rate hypotheses by comparing the full distribution of input-output behavior. That is, the difference between the distribution of the random variable f(x) under the data set D, and f(x) under Dscrubbed.
In this work, we prefer the expected loss. Suppose that one of the drivers of the model’s behavior is noise: trying to capture the full distribution would require us to explain what causes the noise. For example, you’d have to explain the behavior of a randomly initialized model despite the model doing ‘nothing interesting’.
3 Further discussion of zero and mean ablation
==============================================
Earlier, we noted our preference for “resampling ablation” of a component of a model (patch an activation of that component from a randomly selected input in the dataset) over zero or mean ablation of that component (set that component’s activation to 0 or its mean over the entire dataset, respectively) in order to test the claim “this component doesn’t matter for our explanation of the model”. We also mentioned three specific problems we see with using zero or mean ablation to test this claim. Here, we’ll discuss these problems in greater detail.
**1) Zero and mean ablations take your model off distribution in an unprincipled manner.**
The first problem we see with these ablations is that they destroy various properties of the distribution of activations in a way that seems unprincipled and could lead to the ablated model performing either worse or better than it should.
As an informal argument, imagine we have a module whose activations are in a two dimensional space. In the picture below we’ve drawn some of its activations as gray crosses, the mean as a green cross, and the zero as a red cross:
It seems to us that zero ablating takes your model out of distribution in an unprincipled way. (If the model was trained with dropout, it’s slightly more reasonable, but it’s rarely clear how a model actually handles dropout internally.) Mean ablating also takes the model out of distribution because the mean is not necessarily on the manifold of plausible activations.
**2) Zero and mean ablations can have unpredictable effects on measured performance.**
Another problem is that these ablations can have unpredictable effects on measured performance. For example, suppose that you’re looking at a regression model that happens to output larger answers when the activation from this module is at its mean activation (which, let’s suppose, is off-distribution and therefore unconstrained by SGD). Also, suppose you’re looking at it on a data distribution where this module is in fact unimportant. If you’re analyzing model performance on a data subdistribution where the model generally guesses too high, then mean ablation will make it look like ablating this module harms performance. If the model generally guesses too low on the subdistribution, mean ablation will improve performance. Both of these failure modes are avoided by using random patches, as resampling ablation does, instead of mean ablation.
**3) Zero and mean ablations remove variation that your model might depend on for performance.**
The final problem we see with these ablations is that they neglect the variation in the outputs of the module. Removing this variation doesn’t seem reasonable when claiming that the module doesn't matter.
For an illustrative toy example, suppose we’re trying to explain the performance of a model with three modules M1, M2, and M3. This model has been trained with dropout and usually only depends on components M1 and M2 to compute its output, but if dropout is active and knocks out M2, the model uses M3 instead and can perform almost as well as if it were able to use M1 and M2.
If we zero/mean ablate M2 (assume mean 0), it will look like M2 wasn't doing anything at all and our hypothesis that it wasn't relevant will be seemingly vindicated. If instead we resample ablate M2, the model will perform significantly worse (exactly how much worse is dependent on exactly how the output of M2 is relevant to the final output).
This example, while somewhat unrealistic, hopefully conveys our concern here: sometimes the variation in the outputs of a component is important to your model and performing mean or zero ablation forces this component to only act as a fixed bias term, which is unlikely to be representative of its true contribution to the model’s outputs.
We think these examples provide sufficient reasons to be skeptical about the validity of zero or mean ablation and demonstrate our rationale for preferring resampling ablation.
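For concreteness, the three ablations can be contrasted in a short numpy sketch (hypothetical code; `acts` stands in for one component's activations over a dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 8))   # activations of one component over a dataset

def zero_ablate(acts):
    # set the component's activation to 0 on every datapoint
    return np.zeros_like(acts)

def mean_ablate(acts):
    # every datapoint gets the dataset-mean activation (possibly off-manifold)
    return np.broadcast_to(acts.mean(axis=0), acts.shape).copy()

def resample_ablate(acts, rng):
    # every datapoint gets the activation from a randomly chosen input in the
    # dataset, so each patched activation is one the model actually produces
    idx = rng.integers(0, len(acts), size=len(acts))
    return acts[idx]

resampled = resample_ablate(acts, rng)
# Resampling preserves the marginal distribution of activations (here checked
# via the variance), while zero/mean ablation destroys that variation entirely.
assert np.var(zero_ablate(acts)) == 0.0
assert np.var(mean_ablate(acts)) < 0.01
assert abs(np.var(resampled) - np.var(acts)) < 0.1
```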
4 Unimportant inputs and isomorphic hypotheses
==============================================
4.1 Should unimportant inputs be taken from the same or different datapoints?
-----------------------------------------------------------------------------
Suppose we have the following hypothesis where I maps to the nodes of G in blue:
There are four activations in G that we claim are unimportant.
Causal scrubbing requires performing a resampling ablation on these activations. When doing so, should we pick one data point to get all four activations on? Two different data points, one for R and S (which both feed into V) and a different one for X and Y? Or four different data points?
In our opinion, all are reasonable experiments that correspond to subtly different hypotheses. This may not be something you considered when proposing your informal hypothesis, but following the causal scrubbing algorithm forces you to resolve this ambiguity. In particular, the more we sample unimportant activations independently, the more specific the hypothesis becomes, because it allows you to make strictly more swaps. It also sometimes makes it easier for the experimenter to reason about the correlations between different inputs. For a concrete example where this matters, see [the paren balance checker experiment](https://docs.google.com/document/d/12Ae4-FXtYdp1HAUyWsEIFe3Z_vUsTh5UAo0k4PFfLkg/edit%23heading%3Dh.5zcsu58q8jos).
And so, in the pseudocode above we sample the pairs (R, S) and (X, Y) separately, although we allow hypotheses that require all unimportant inputs throughout the model to be sampled together.[[4]](#fnz9k9zsblcr)
Why not go more extreme, and sample every single unimportant node separately? One reason is that it is not well-defined: we can always rewrite our model to an equivalent one consisting of a different set of nodes, and this would lead to completely different sampling! Another is that we don’t actually intend this: we do believe it’s important that the inputs to our treeified model be “somewhat reasonable”, i.e. have some of the correlations that they usually do in the training distribution, though we’re not sure exactly which ones matter. So if we started from saying that all nodes are sampled separately, we’d immediately want to hypothesize something about them needing to be sampled together in order for our scrubbed model to not get very high loss. Thus this default makes it simpler to specify hypotheses.
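The grouping choice can be made explicit in sampling code. A minimal sketch (node names from the example above; the helper is ours):

```python
import random

def sample_unimportant(dataset, groups, rng):
    """Assign one randomly drawn datapoint per *group* of unimportant nodes.

    groups: e.g. [("R", "S"), ("X", "Y")] samples R,S together and X,Y together;
            [("R",), ("S",), ("X",), ("Y",)] samples all four independently.
    Finer groupings allow strictly more swaps, i.e. a more specific hypothesis.
    """
    assignment = {}
    for group in groups:
        datum = rng.choice(dataset)
        for node in group:
            assignment[node] = datum
    return assignment

rng = random.Random(0)
data = list(range(100))
a = sample_unimportant(data, [("R", "S"), ("X", "Y")], rng)
assert a["R"] == a["S"] and a["X"] == a["Y"]   # grouped nodes share a datapoint
```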
4.2 Including unimportant inputs in the hypothesis
--------------------------------------------------
In general we don’t require hypotheses to be surjective, meaning not all nodes of G need to be mapped onto by c, nor do we require that G contains all edges of I. This is convenient for expressing claims that some nodes (or edges) of G are unimportant for the behavior. It leaves a degree of freedom, however, in how to treat these unimportant nodes, as discussed in the preceding section.
It is possible to remove this ambiguity by requiring that the correspondence be an isomorphism between G and I. In this section we’ll demonstrate how to do this in a way that is consistent with the pseudocode presented, by combining all the unimportant parents of each important node.
In the example below, both R and S are unimportant inputs to the node V, and both X and Y are unimportant inputs to the node Z. We make the following rewrites in the example below:
* If a single important node has multiple unimportant inputs, we combine them. This forms the new node (X, Y) in G2. We also combine all upstream nodes, such that there is a single path from the input to this new combined node, forming (T, U) which (X, Y) depends on. This ensures we’ll only sample one input for all of them in the treeified model.
* We do the same for (R, S) into node V.
* Then we extend I with new nodes to match the entirety of rewritten G. For all of these new nodes that correspond to unimportant nodes (or nodes upstream of unimportant nodes), our interpretation says that all inputs map to a single value (the [unit type](https://en.wikipedia.org/wiki/Unit_type)). This ensures that we can sample any input.
* While we also draw the edges to match the structure of the rewritten G, we will not have other nodes in I be sensitive to the values of these unit nodes.
If you want to take a different approach to sampling the unimportant inputs, you can rewrite the graphs in a different way (for instance, keeping X and Y as separate nodes).
One general lesson from this is that rewriting the computational graphs G and I is extremely expressive. In practice, we have found that with some care it allows us to run the experiments we intuitively wanted to.
5 An alternative formalism: constructing a distribution on treeified inputs
===========================================================================
Suppose we have a function f to which we want to apply the causal scrubbing algorithm. Consider an isomorphic (see [above](https://www.lesswrong.com/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#4_Unimportant_inputs_and_isomorphic_hypotheses)) treeified hypothesis h=(GT,IT,cT) for f. In this appendix we will show that causal scrubbing preserves the joint distribution of inputs to each node of IT (Lemma 1). Then we show that the distribution of *inputs* induced by causal scrubbing is the maximum entropy distribution satisfying this constraint (Theorem 2).
Let X be the domain of f and D be the input distribution for f (a distribution on X). Let ~D be the distribution given by the causal scrubbing algorithm (so the domain of ~D is Xⁿ, where n is the number of times that the input is repeated in IT).
We find it useful to define two sets of random variables: one set for the values of wires (i.e. edges) in IT when IT is run on a consistent input drawn from D (i.e. on (x,…,x) for some x); and one set for the values of wires in IT induced by the causal scrubbing algorithm:
**Definition** (f-consistent random variables): For all the edges of IT, we call the “f-consistent random variable” the result of evaluating the interpretation IT on (x,…,x), for a random input x∼D. For each node u∈IT, we will speak of the joint distribution of its input wires, and call the resulting random variable the “f-consistent inputs (to u)”. We also refer to the value of the wire going out of u∈IT as the “f-consistent output (of u)”.
**Definition** (scrubbed random variables): Suppose that we run IT on (x1,x2,…,xn)∼~D. In the same way, this defines a set of random variables, which we call the *scrubbed* random variables (and use the terms "scrubbed inputs" and "scrubbed output" accordingly).
**Lemma 1:** For every node u∈IT, the joint distribution of scrubbed inputs to u is equal to the product distribution of f-consistent inputs to u.
**Proof:** Recall that the causal scrubbing algorithm assigns a datum in X to every node of IT, starting from the root and moving up. The key observation is that for every node u of IT, the distribution of the datum of u is exactly D. We can see this by induction. Clearly this is true for the root. Now, consider an arbitrary non-root node u and assume that this claim is true for the parent v of u. Consider the equivalence classes on X defined as follows: x1 and x2 are equivalent if (x1,…,x1) has the same value at u as (x2,…,x2) when IT is run on each input. Then the datum of u is chosen by sampling from D subject to being in the same equivalence class as the datum of v. Since (by assumption) the datum of v is distributed according to D, so is the datum of u.
Now, by the definition of the causal scrubbing algorithm, for every node u, the scrubbed inputs to u are equal to the inputs to u when IT is run on the datum of u. Since the datum of u is distributed according to D, it follows that the joint distribution of scrubbed inputs to u is equal to the joint distribution of f-consistent inputs to u.
**Theorem 2:** The joint distribution of (top-level) scrubbed inputs is the maximum-entropy distribution on Xⁿ, subject to the constraints imposed by Lemma 1.
**Proof:** We proceed by induction on a stronger statement: consider any way to "cut" through IT in a way that separates all of the inputs to IT from the root (and does so minimally, i.e. if any edge is un-cut then there is a path from some leaf to the root). (See below for an example.) Then the joint scrubbed distribution of the cut wires has maximal entropy subject to the constraints imposed by Lemma 1 on the joint distribution of scrubbed inputs to all nodes lying on the root's side of the cut.
*An example of a cut.*

Our base case is the cut through the input wires to the root (in which case Theorem 2 is vacuously true). Our inductive step will take any cut and move it up through some node u, so that if previously the cut passed through the output of u, it will now pass through the inputs of u. We will show that if the original cut satisfies our claim, then so will the new one.
Consider any cut and let u be the node through which we will move the cut up. Let x denote the vector of inputs to u, y be the output of u (so y=u(x)), and z denote the values along all cut wires besides y. Note that x and z are independent conditional on y; this follows by conditional independence rules on Bayesian networks (x and z are d-separated by y).
Next, we show that this distribution is the maximum-entropy distribution. The following equality holds for *any* random variables X,Y,Z such that Y is a function of X:
H(X,Z) = H(X,Y,Z)
= H(X) + H(Y,Z) − I(X;Y,Z)
= H(X) + H(Y,Z) − (I(X;Z∣Y) + I(X;Y))
= H(X) + H(Y,Z) − H(Y) − I(X;Z∣Y)
where I(⋅) denotes mutual information. The first step follows from the fact that Y is a function of X. The second step follows from the identity I(A;B)=H(A)+H(B)−H(A,B). The third step follows from the identity I(A;B∣C)=I(A;B,C)−I(A;C). The last step follows from the fact that I(X;Y)=H(Y)−H(Y∣X)=H(Y), again because Y is a function of X.
Now, consider all possible distributions of (x,z) subject to the constraints imposed by Lemma 1 on the joint distribution of scrubbed inputs to all nodes lying on the root's side of the updated cut. The lemma specifies the distribution of x and (therefore) y. Thus, subject to these constraints, H(x,z) is equal to H(y,z)−I(x;z∣y) plus H(x)−H(y), which is a constant. By the inductive hypothesis, H(y,z) is as large as possible subject to the lemma's constraints. Mutual information is non-negative, so it follows that if I(x;z∣y)=0, then H(x,z) is as large as possible subject to the aforementioned constraints. Since x and z are independent conditional on y, this is indeed the case.
This concludes the induction. So far we have only proven that the joint distribution of scrubbed inputs is *some* maximum-entropy distribution subject to the lemma's constraints. Is this distribution unique? Assuming that the space of possible inputs is finite (which it is if we're doing things on computers), the answer is yes: entropy is a strictly concave function and the constraints imposed by the lemma on the distribution of scrubbed inputs are convex (linear, in particular). A strictly concave function has a unique maximum on a convex set. This concludes the proof.
**Fun Fact 3:** The entropy of the joint distribution of scrubbed inputs is equal to the entropy of the output of IT, plus the sum over all nodes u∈IT of the information lost by u (i.e. the entropy of the joint input to u minus the entropy of the output). (By Lemma 1, this number does not depend on whether we imagine IT being fed f-consistent inputs or scrubbed inputs.) By direct consequence of the proof of Theorem 2, we have H(x,z)−H(y,z)=H(x)−H(y) (with x,y,z as in the proof of Theorem 2). Proceeding by the same induction as in Theorem 2 yields this fact.
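The entropy identities used in the proofs above can be sanity-checked numerically on a small discrete distribution where Y is a function of X (a sketch with our own helper functions):

```python
import math
import random

def H(p):
    """Shannon entropy (bits) of a dict mapping outcomes to probabilities."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(joint, keep):
    """Marginalize a joint distribution (tuple-keyed dict) onto the given axes."""
    out = {}
    for outcome, q in joint.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

# A random joint distribution over (X, Z) with X, Z in {0,1,2}; Y = f(X) = X mod 2.
rng = random.Random(0)
weights = {(x, z): rng.random() for x in range(3) for z in range(3)}
total = sum(weights.values())
joint_xyz = {(x, x % 2, z): w / total for (x, z), w in weights.items()}

Hxz = H(marginal(joint_xyz, (0, 2)))
Hx = H(marginal(joint_xyz, (0,)))
Hy = H(marginal(joint_xyz, (1,)))
Hyz = H(marginal(joint_xyz, (1, 2)))
Hxy = H(marginal(joint_xyz, (0, 1)))
Hxyz = H(joint_xyz)

# I(X;Z|Y) = H(X,Y) + H(Y,Z) - H(Y) - H(X,Y,Z)
Ixz_given_y = Hxy + Hyz - Hy - Hxyz
# First step of the chain: H(X,Z) = H(X,Y,Z), since Y is a function of X.
assert abs(Hxz - Hxyz) < 1e-9
# Last line of the chain: H(X,Z) = H(X) + H(Y,Z) - H(Y) - I(X;Z|Y).
assert abs(Hxz - (Hx + Hyz - Hy - Ixz_given_y)) < 1e-9
```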
6 How causal scrubbing handles polysemanticity
==============================================
In [our polysemanticity toy model paper](https://www.alignmentforum.org/posts/kWp4R9SYgKJFHAufB/polysemanticity-and-capacity-in-neural-networks), we introduced an analytically tractable setting where the optimal model represents features in superposition. In this section, we’ll analyze this model using causal scrubbing, as an example of what it looks like to handle polysemantic activations.
The simplest form of this model is the two-variable, one-neuron case, where we have independent variables x1 and x2 which both have zero expectation and unit variance, and we are choosing the parameters c and d to minimize loss in the following setting:
y = ax1² + bx2²
~y = (cx1 + dx2)² + e
loss = (y − ~y)²
Here, ~y is our model, c and d are the parameters we’re optimizing, and a and b are part of the task definition. As discussed in our toy model paper, in some cases (roughly, when a and b have similar values and x1 and x2 have high kurtosis, e.g. because they are usually zero), c and d will both be set to nonzero values, and so cx1 + dx2 can be thought of as a superposed representation of both x1 and x2.
To explain the performance of this model with causal scrubbing, we take advantage of function extensionality and expand ~y:
~y = c²x1² + d²x2² + 2cd⋅x1x2 + e
And then we explain it with the following hypothesis:

When we sample outputs using our algorithm here, we’re going to sample the interference term from random other examples. And so the scrubbed model will have roughly the same estimated loss as the original model–the errors due to interference will no longer appear on the examples that actually suffer from interference, but the average effect of interference will be approximately reproduced.
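As a sanity check, a quick simulation (with hypothetical, non-optimized parameter values) shows that resampling the interference term from other examples approximately preserves the expected loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def sparse(rng, n, p=0.1):
    # sparse, zero-mean, unit-variance inputs (high kurtosis)
    mask = rng.random(n) < p
    return mask * rng.normal(size=n) / np.sqrt(p)

x1, x2 = sparse(rng, n), sparse(rng, n)
a = b = 1.0
c = d = 0.9   # hypothetical parameter values with both features represented
e = 0.0
y = a * x1**2 + b * x2**2

mono = c**2 * x1**2 + d**2 * x2**2        # monosemantic terms
interference = 2 * c * d * x1 * x2        # the term the hypothesis leaves unexplained
model_loss = np.mean((y - (mono + interference + e))**2)

# Scrub: resample the interference term from random other examples.
scrubbed_interference = interference[rng.permutation(n)]
scrubbed_loss = np.mean((y - (mono + scrubbed_interference + e))**2)

# The average cost of interference is approximately reproduced, even though the
# errors no longer land on the examples that actually suffer from interference.
assert abs(model_loss - scrubbed_loss) / model_loss < 0.25
```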
In general, this is our strategy for explaining polysemantic models: we do an algebraic rewrite on the model so that the model now has monosemantic components and an error term, and then we say that the monosemantic components explain why the model is able to do the computation that it does, and we say that we don’t have any explanation for the error term.
This works as long as the error is actually unstructured–if the model was actively compensating for the interference errors (as in, doing something in a way that correlates with the interference errors to reduce their cost), we’d need to describe that in the explanation in order to capture the true loss.
This strategy also works if you have more neurons and more variables–we’ll again write our model as a sum of many monosemantic components and a residual. And it’s also what we’d do with real models–we take our MLP or other nonlinear components and make many copies of the set of neurons that are required for computing a particular feature.
This strategy means that we generally have to consider an explanation that’s as large as the model would be if we expanded it to be monosemantic. But it’s hard to see how we could have possibly avoided this.
Note that this isn’t a solution to *finding* a monosemantic basis - we’re just claiming that if you had a hypothesized monosemantic reformulation of the model you could test it with causal scrubbing.
This might feel vacuous–what did we achieve by rewriting our model as if it was monosemantic and then adding an error term? We claim that this is actually what we wanted. The hypothesis explained the loss because the model actually was representing the two input variables in a superposed fashion and resigning itself to the random error due to interference. The success of this hypothesis reassures us that the model isn’t doing anything more complicated than that. For example, if the model was taking advantage of some relationship between these features that we don’t understand, then this hypothesis would not replicate the loss of the model.
6.1 Underestimating interference by neglecting correlations in model errors
---------------------------------------------------------------------------
Now, suppose we rewrite the model from the form we used above:
~y = c²x1² + d²x2² + 2cd⋅x1x2 + e
To the following form:
~y = c²x1² + d²x2² + cd⋅x1x2 + cd⋅x1x2 + e
Here we’ve split the noise term into two pieces. If we sample these two parts of the noise term independently, we will have effectively reduced the magnitude of the noise, for the usual reason that the average of two independent samples of a random variable has lower variance than a single sample. So if we ignore this correlation, we’ll estimate the cost of the noise to be lower than it is for the real model. This is another mechanism by which ignoring a correlation can cause the scrubbed model to seem to perform better than the real model does; as before, this error gives us the opportunity to neglect some positive contribution to performance elsewhere in the model.
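The variance-halving effect of sampling the two halves of the noise term independently is easy to demonstrate (a hedged numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
noise = rng.normal(size=n)   # stands in for the 2cd·x1x2 interference term

# Sampling the full term once per example:
full = noise[rng.permutation(n)]
# Splitting it in half and sampling the halves independently:
split = noise[rng.permutation(n)] / 2 + noise[rng.permutation(n)] / 2

# The split version has roughly half the variance of the full term, so the
# scrubbed model underestimates the cost of the interference.
assert abs(np.var(full) - 1.0) < 0.05
assert abs(np.var(split) - 0.5) < 0.05
```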
7 An additional example of approving a false hypothesis
=======================================================
We can construct cases where the explanation can make the model look better by sneaking in information. For example, consider the following setting:
The model’s input is a tuple of a natural number and the current game setting, which is either EASY or HARD (with equal frequency). The model outputs the answer either “0”, “1”, or “I don’t know”. The task is to guess the last bit of the hash of the number.
Here’s the reward function for this task:
| | | | |
| --- | --- | --- | --- |
| Game mode | Score if model is correct | Score if model is incorrect | Score if model says “I don’t know” |
| EASY | 2 | -1 | 0 |
| HARD | 10 | -20 | 0 |
If the model has no idea how to hash numbers, its optimal strategy is to guess when in EASY mode and say “I don’t know” in HARD mode.
Now, suppose we propose the hypothesis that claims that the model outputs:
* on an EASY mode input, what the model would guess; and
* on a HARD mode input, the correct answer.
To apply causal scrubbing, we consider the computational graph of both the model and the hypothesis to consist of the input nodes and a single output node. In this limited setting, the projected model runs the following algorithm:
* Replace the input with a random input that would give the same answer according to the hypothesis; and
* Output what the model outputs on that random input.
Now consider running the projected model on a HARD case. According to the hypothesis, we output the correct answer, so we replace the input
* half the time with another HARD mode input (with the same answer), on which the model outputs “I don’t know”; and
* half the time with an EASY mode input chosen such that the model will guess the correct answer.
So, when you do causal scrubbing on HARD cases, the projected model will now guess correctly half the time, because half its “I don’t know” answers will be transformed into the correct answer. The projected model’s performance will be worse on the EASY cases, but the HARD cases mattered much more, so the projected model’s performance will be much better than the original model’s performance, even though the explanation is wrong!
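We can make the bookkeeping explicit. The sketch below follows the 50/50 replacement split described above; the breakdown of EASY-mode replacements is our own illustrative assumption:

```python
# Expected score of the original model: guess on EASY (EV 0.5), abstain
# on HARD (EV 0). The two modes are equally frequent.
orig_easy = 0.5 * 2 + 0.5 * (-1)              # 0.5
orig_hard = 0.0
original = 0.5 * orig_easy + 0.5 * orig_hard  # 0.25

# Projected model on HARD: half the time a HARD replacement, on which the
# model says "I don't know" (0); half the time an EASY replacement on which
# the model guesses the correct answer, scored +10 in HARD mode.
proj_hard = 0.5 * 0 + 0.5 * 10                # 5.0

# Projected model on EASY (assumed split): half the time a HARD replacement
# whose correct answer matches the guess, on which the model abstains (0);
# half the time an EASY replacement on which the model guesses (EV 0.5).
proj_easy = 0.5 * 0 + 0.5 * orig_easy         # 0.25, worse than 0.5

projected = 0.5 * proj_easy + 0.5 * proj_hard
print(original, projected)  # the false hypothesis looks far better: 0.25 vs 2.625
```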
In examples like this one, hypotheses can cheat and get great scores while being very false.
8 Adversarial validation might be able to elicit true hypotheses
================================================================
(Credit for the ideas in this section is largely due to ARC.)
We might have hoped that we’d be able to use causal scrubbing as a check on our hypotheses analogous to using a proof checker like Lean or Coq to check our mathematical proofs, but this doesn’t work. Our guess is that it’s probably impossible to have an efficient algorithm for checking interpretability explanations which always rejects false explanations. This is mostly because we suspect that interpretability explanations should be regarded as an example of [defeasible reasoning](https://en.wikipedia.org/wiki/Defeasible_reasoning). Checking interpretations in a way that rejects all false explanations is probably NP-hard, and so we want to choose a notion of checking which is weaker.
We aren’t going to be able to check hypotheses by treating as uncorrelated everything that the hypotheses claimed wasn’t relevantly correlated. This would have worked if ignoring correlations could only harm the model. But as shown above, we have several cases where ignoring correlations helps the model.
So we can’t produce true explanations by finding hypotheses subject to the constraint that they predict the observed metrics. As an alternative proposal, we can check if hypotheses are comprehensive by seeing if any adversarial additions to the hypothesis would cause the predicted metric to change considerably. In all of the counterexamples above, the problem is that the metric was being overestimated because there were important correlations that were being neglected and which would reduce the estimated metric if they were included. If we explicitly check for additional details to add to our hypotheses which cause the estimated metric to change, all the counterexamples listed above are solved.
To set up this adversarial validation scheme, we need some mechanism for hypotheses to be constructed adversarially. That is, we need to handle cases where the adversary wants to rewrite f to an extensionally-equal function. One way of thinking about this is that we want a function `join` which is a binary operation on hypotheses, taking the two hypotheses to the hypothesis which preserves all structure in the model that either of the two hypotheses preserved.
Here are two ways of defining this operation:
* **Swap-centric.** You can think of a hypothesis as a predicate on activation swaps (of the same activation on two different inputs). From this perspective, you can define join(h1, h2) to be the hypothesis which permits a swap iff h1 and h2 both permit it.
* **Computation graph centric.** You can equivalently construct the joined hypothesis by the following process. First, ensure that each of the correspondences is a bijection, and that both I1 and I2 have the same shape, adding extra no-op nodes as necessary. Now we can define I of the joined hypothesis to be the graph where every node contains the tuple of the values from the two earlier interpretations.
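The swap-centric definition is easy to sketch in code. The `Swap` type and the example predicates below are hypothetical stand-ins for illustration, not from any real causal-scrubbing implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Swap:
    node: str      # which activation is swapped
    input_a: int   # dataset index providing the original input
    input_b: int   # dataset index providing the replacement input

# Swap-centric view: a hypothesis is a predicate saying which swaps it permits.
Hypothesis = Callable[[Swap], bool]

def join(h1: Hypothesis, h2: Hypothesis) -> Hypothesis:
    """The joined hypothesis preserves all structure that either input
    hypothesis preserved: a swap is permitted iff both h1 and h2 permit it."""
    return lambda swap: h1(swap) and h2(swap)

# Example: h1 only permits swaps at the output node; h2 only permits swaps
# between inputs of the same parity.
h1: Hypothesis = lambda s: s.node == "output"
h2: Hypothesis = lambda s: s.input_a % 2 == s.input_b % 2

h = join(h1, h2)
print(h(Swap("output", 0, 2)))  # True: both constraints satisfied
print(h(Swap("output", 0, 1)))  # False: parity differs
```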
The main failure of the algorithm listed above is that we don’t know how to handle cases where the adversary wants to rewrite f to an extensionally-equal function in a way which is mutually incompatible with the original hypothesis (for example, because their computational graphs have different shapes and there’s no way to splice the two computational graphs together). This is a pretty bad problem because the function extensionality move seems very important in practice. ARC has worked on basically this problem for a while and hasn’t yet solved it, regrettably.
Some other questions that we haven’t answered:
* How do we incentivize specific explanations? We don’t know (but haven’t thought about it that much). Our current proposals look something like having a budget for how much hypotheses can reduce entropy.
* The explanations produced by this process will probably by default be impossible for humans to understand; is there some way to fix this? We also don’t have good ideas here. (Note that this isn’t a failure that’s specific to causal scrubbing; it seems fundamentally challenging to generate human-understandable interpretations for complicated superhuman models.) That being said, a lot of our optimism about interpretability comes from applications where the interpretability tools are used by AIs or by human-coded algorithms, rather than by humans, so plausibly we’re fine even if humans can’t understand the interpretability results.
Overall, it seems plausible that these problems can be overcome, but they are definitely not currently solved. We hold out hope for an interpretability process which has validity properties which allow us to use powerful optimization inside it and still trust the conclusions, and hope to see future work in this direction.
1. **[^](#fnrefy49t05abovh)** This is also true when you’re training models with an autodiff library–you construct a computational graph that computes loss, and run backprop on the whole thing, which quickly recurses into the model but doesn’t inherently treat it differently.
2. **[^](#fnref3uuhfwnx8md)** This allows for testing out human interpretable approximations to neural network components: ‘[Artificial Artificial Neural networks](https://distill.pub/2020/circuits/curve-circuits/)’. We think it’s more informative to see how the model performs with the residual of this approximation resampling ablated as opposed to zero ablated.
3. **[^](#fnreft16j7a80hgn)** In general, you could have the output be non-scalar with any distance metric δ to evaluate the deviation of the scrubbed expectation, but we’ll keep things simple here.
4. **[^](#fnrefz9k9zsblcr)** Another way of thinking about this is: when we consider the [adversarial game setting](https://www.lesswrong.com/posts/kcZZAsEjwrbczxN2i/causal-scrubbing-appendix#8_Adversarial_validation_might_be_able_to_elicit_true_hypotheses), we would like each side to be able to request that terms are sampled together. By default therefore we would like terms (even random ones!) to be sampled separately. |
Avoid large group discussions in your social events
If you're organizing a social event, I strongly recommend that you structure it in a way that encourages small group discussions over large ones.
What's the point of an interest-specific social club?
To socialize, of course! With people who share a specific interest. I would argue this isn't as obvious to people-organizers as it might sound.
Last Friday, I attended a meetup with my university's philosophy club. I went in expecting many opportunities to meet other thoughtful people, but I left the main event without seeing them. Instead, the meetup was a guided large group discussion about quotes from Emerson. While I did learn a lot by following the conversation, and it was somewhat entertaining, I was disappointed that I didn't have the chance to engage in more personal conversations and get to know the other members of the club.
If you get to decide what ~15 thoughtful people will do for two hours on a Friday afternoon, this is a rare opportunity, and you should be strategic about it. It represents 30 synchronous human-hours of opportunity cost, plus transportation. While you could use these hours to help people learn more about philosophy, I don't think this is the best use for the time. Rather, I believe human connection is more important. I could have learned about Emerson much more efficiently by reading the Stanford Encyclopedia of Philosophy, or been entertained more efficiently by taking a trip to an amusement park, for two hours. But instead, I blocked out that time because meeting the right person could have a huge positive impact on my life. I might make a great friend. We might spend a lot of quality time in the future. They could be a future romantic partner, or a professional connection. Indeed, I won't have a better opportunity to meet lifelong friends than in college – in no other point of my life will I have as much time at my disposal to socialize and be in the same physical location as so many similar people in the same position. Meeting peop |
Talking Through A Fear of Death
Epistemic Status: semi-stream-of-consciousness effort to talk myself through how I really feel about death.
After waiting in line for what feels like a lifetime, I finally get the chance to talk to the Oracle about something that’s been bothering me.
Me: I’m afraid of death
Oracle: Good for you
Me: How is that good? Not only is death objectively terrifying, the fact I’m terrified of it is itself a psychological harm that may well linger throughout my life
Oracle: First of all, is death really objectively terrifying? Is it a coincidence that after millions of years of humans evolving to survive at all costs, all humans have an innate fear of death? If it were evolutionarily advantageous for you to try to get yourself killed, maybe death wouldn’t seem so bad. Putting your strong inbuilt bias for survival aside, why do you take your fear of death any more seriously than other arbitrary fears you’ve inherited from evolution, for example, the fear of asking out a girl? Chances are you’ve feared that situation more than death at some point in your life, and afterwards, whether it went well or badly, I can’t imagine that you would have concluded the fear really was proportionate to the stakes. Surely you’ve realised that humans often have very strong fears towards things that aren’t that bad.
Me: Well obviously the fear you feel towards some outcome is rarely going to match up with the actual outcome because our brain has to make an approximation of just how bad that outcome is and how much more our lives will suck after it happens. That doesn’t mean fear is unreliable. And even if I’m afraid of something because I’ve evolved that way, that doesn’t change anything. If I’m afraid of burning my hand on a flame because I’ve evolved that way, I should still try not to burn myself because I’ve also evolved to actually experience pain when my hand is burnt and that’s an experience I want to avoid.
Oracle: But fear of death is the exceptional case here because you won’t actually get |
A summary of AI surveys
*By Katja Grace, 10 January 2015*
If you want to know when human-level AI will be developed, a natural approach is to ask someone who works on developing AI. You might however be put off by such predictions being regularly criticized as inaccurate and biased. While they do seem overwhelmingly likely to be inaccurate and biased, I claim they would have to be very inaccurate and biased before they were worth ignoring, especially in the absence of many other sources of quality information. The bar for ridicule is well before the bar for being uninformative.
So on that note, we made a big [summary](http://aiimpacts.wpengine.com/ai-timeline-surveys/ "AI Timeline Surveys") of all of the surveys we know of on timelines to human-level AI. And also a [bunch](http://aiimpacts.wpengine.com/ai50-survey/ "AI@50 Survey") [of](http://aiimpacts.wpengine.com/bainbridge-survey/ "Bainbridge survey") [summary](http://aiimpacts.wpengine.com/agi-09-survey/ "AGI-09 Survey") [pages](http://aiimpacts.wpengine.com/fhi-ai-timelines-survey/ "FHI Winter Intelligence Survey") [on](http://aiimpacts.wpengine.com/hanson-ai-expert-survey/ "Hanson AI Expert Survey") [specific](http://aiimpacts.wpengine.com/klein-agi-survey/ "Klein AGI Survey") [human-level](http://aiimpacts.wpengine.com/muller-and-bostrom-ai-progress-poll/ "Müller and Bostrom AI Progress Poll") [AI](http://aiimpacts.wpengine.com/michie-survey/ "Michie Survey") [surveys](http://aiimpacts.wpengine.com/kruel-ai-survey/ "Kruel AI Interviews"). We hope they are a useful reference, and also help avert selection bias from people only knowing about surveys that support their particular views.
It’s interesting to note the consistency between the surveys that asked participants to place confidence intervals. They all predict there is a ten percent chance of human-level AI sometime in the 2020s, and almost all place a fifty percent chance of human-level AI between 2040 and 2050. They are even pretty consistent on the 90% date, with more than half in 2070-2080. This is probably mostly evidence that people talk to each other and hear about similar famous predictions. However it is some evidence of accuracy, since if each survey produced radically different estimates we must conclude that surveys are fairly inaccurate.
If you know of more surveys on human-level AI timelines, do [send them our way](http://aiimpacts.wpengine.com/feedback/ "Feedback").
Here’s a summary of [our summary](http://aiimpacts.wpengine.com/ai-timeline-surveys/ "AI Timeline Surveys"):
| Year | Survey | # | 10% | 50% | 90% | Other key ‘Predictions’ | Participants | Response rate | Link to original document |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1972 | [Michie](http://aiimpacts.wpengine.com/michie-survey/ "Michie Survey") | 67 | | | | Median 50y (2022) (vs 20 or >50) | AI, CS | – | [link](https://saltworks.stanford.edu/assets/cf501kz5355.pdf) |
| 2005 | [Bainbridge](http://aiimpacts.wpengine.com/bainbridge-survey/ "Bainbridge survey") | 26 | | | | Median 2085 | Tech | – | [link](http://www.wtec.org/ConvergingTechnologies/3/NBIC3_report.pdf) |
| 2006 | [AI@50](http://aiimpacts.wpengine.com/ai50-survey/ "AI@50 Survey") | | | | | median >50y (2056) | AI conf | – | [link](http://web.archive.org/web/20110710193831/http://www.engagingexperience.com/ai50/) |
| 2007 | [Klein](http://aiimpacts.wpengine.com/klein-agi-survey/ "Klein AGI Survey") | 888 | | | | median 2030-2050 | Futurism? | – | [link](http://web.archive.org/web/20110226225452/http://www.novamente.net/bruce/?p=54) and [link](http://sethbaum.com/ac/2011_AI-Experts.pdf) |
| 2009 | [AGI-09](http://aiimpacts.wpengine.com/agi-09-survey/ "AGI-09 Survey") | | 2020 | 2040 | 2075 | | AGI conf; AI | – | [link](http://sethbaum.com/ac/2011_AI-Experts.pdf) |
| 2011 | [FHI Winter Intelligence](http://aiimpacts.wpengine.com/fhi-ai-timelines-survey/ "FHI Winter Intelligence Survey") | 35 | 2028 | 2050 | 2150 | | AGI impacts conf; 44% related technical | 41% | [link](http://www.fhi.ox.ac.uk/machine-intelligence-survey-2011.pdf) |
| 2011-2012 | [Kruel interviews](http://aiimpacts.wpengine.com/kruel-ai-survey/ "Kruel AI Interviews") | 37 | 2025 | 2035 | 2070 | | AGI, AI | – | [link](http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI) |
| 2012 | [FHI: AGI](http://aiimpacts.wpengine.com/muller-and-bostrom-ai-progress-poll/ "Müller and Bostrom AI Progress Poll") | 72 | 2022 | 2040 | 2065 | | AGI & AGI impacts conf; AGI, technical work | 65% | [link](http://www.nickbostrom.com/papers/survey.pdf) |
| 2012 | [FHI:PT-AI](http://aiimpacts.wpengine.com/muller-and-bostrom-ai-progress-poll/ "Müller and Bostrom AI Progress Poll") | 43 | 2023 | 2048 | 2080 | | Philosophy & theory of AI conf; not technical AI | 49% | [link](http://www.nickbostrom.com/papers/survey.pdf) |
| 2012-present | [Hanson](http://aiimpacts.wpengine.com/hanson-ai-expert-survey/ "Hanson AI Expert Survey") | ~10 | | | | ≤ 10% progress to human level in past 20y | AI | – | [link](http://www.overcomingbias.com/2012/08/ai-progress-estimate.html) |
| 2013 | [FHI: TOP100](http://aiimpacts.wpengine.com/muller-and-bostrom-ai-progress-poll/ "Müller and Bostrom AI Progress Poll") | 29 | 2022 | 2040 | 2075 | | Top AI | 29% | [link](http://www.nickbostrom.com/papers/survey.pdf) |
| 2013 | [FHI:EETN](http://aiimpacts.wpengine.com/muller-and-bostrom-ai-progress-poll/ "Müller and Bostrom AI Progress Poll") | 26 | 2020 | 2050 | 2093 | | Greek assoc. for AI; AI | 10% | [link](http://www.nickbostrom.com/papers/survey.pdf) |
*(Image: AGI-09 participants, by [jeriaska](https://www.flickr.com/photos/jeriaska/3337664307/in/set-72157614814369315/))* |
Meetup : Boston - Taking ideas seriously
Discussion article for the meetup : Boston - Taking ideas seriously
WHEN: 01 June 2014 03:30:00PM (-0400)
WHERE: Citadel House, 98 Elm St #1, Somerville, MA
Anders Huitfeldt will lead a discussion about whether taking ideas seriously is likely to increase the accuracy of mental maps, in the case of agents with human-level intelligence. For Less Wrong regulars this will be basic review, and for non-regulars it may be a useful introduction. He will have some slides, but he hopes that this will be an interactive discussion rather than a presentation.
It would be helpful to look over the following before the meetup:
Reason as memetic immune disorder by Phil Goetz
Epistemic learned helplessness by Scott Alexander
Taking ideas seriously by Will Newsome
Cambridge/Boston-area Less Wrong meetups start at 3:30pm, and have an alternating location:
* 1st Sunday meetups are at Citadel in Porter Sq, at 98 Elm St, apt 1, Somerville.
* 3rd Sunday meetups are in MIT's building 66 at 25 Ames St, room 156. Room number subject to change based on availability; signs will be posted with the actual room number.
(We also have last Wednesday meetups at Citadel at 7pm.)
Our default schedule is as follows:
—Phase 1: Arrival, greetings, unstructured conversation.
—Phase 2: The headline event. This starts promptly at 4pm, and lasts 30-60 minutes.
—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.
—Phase 4: Dinner.
DSLT 0. Distilling Singular Learning Theory
*TLDR; In this sequence I distill Sumio Watanabe's* Singular Learning Theory (SLT) *by explaining the essence of its main theorem - Watanabe's Free Energy Formula for Singular Models - and illustrating its implications with intuition-building examples. I then show why neural networks are singular models, and demonstrate how SLT provides a framework for understanding phases and phase transitions in neural networks.*
**Epistemic status:** The core theorems of Singular Learning Theory have been rigorously proven and published by Sumio Watanabe across 20 years of research. Precisely what it says about modern deep learning, and its potential application to alignment, is still speculative.
**Acknowledgements:** This sequence has been produced with the support of a grant from the [Long Term Future Fund](https://funds.effectivealtruism.org/funds/far-future). I'd like to thank all of the people that have given me feedback on each post: Ben Gerraty, [@Jesse Hoogland](https://www.lesswrong.com/users/jesse-hoogland?mention=user) , [@mfar](https://www.lesswrong.com/users/mfar?mention=user), [@LThorburn](https://www.lesswrong.com/users/lthorburn?mention=user) , Rumi Salazar, Guillaume Corlouer, and in particular my supervisor and editor-in-chief [Daniel Murfet](http://therisingsea.org/).
**Theory vs Examples:** The sequence is a mixture of synthesising the main theoretical results of SLT, and providing simple examples and animations that illustrate its key points. As such, some theory-based sections are slightly more technical. Some readers may wish to skip ahead to the intuitive examples and animations before diving into the theory - these are clearly marked in the table of contents of each post.
**Prerequisites:** Anybody with a basic grasp of Bayesian statistics and multivariable calculus should have no problems understanding the key points. Importantly, despite SLT pointing out the relationship between algebraic geometry and statistical learning, no prior knowledge of algebraic geometry is required to understand this sequence - I will merely gesture at this relationship. Jesse Hoogland wrote an [excellent introduction to SLT](https://www.lesswrong.com/posts/fovfuFdpuEwQzJu2w/neural-networks-generalize-because-of-this-one-weird-trick) which serves as a high level overview of the ideas that I will discuss here, and is thus recommended pre-reading to this sequence.
**SLT for Alignment Workshop:** This sequence was prepared in anticipation of the [SLT for Alignment Workshop 2023](https://devinterp.com/2023) and serves as a useful companion piece to the material covered in the [Primer Lectures](https://devinterp.com/2023#primer).
**Thesis:** The sequence is derived from my recent masters thesis which you can read about [at my website](https://lemmykc.github.io/mathematics/SLT/).
**Developmental Interpretability:** Originally the sequence was going to contain a short outline of a new research agenda, but this can now be found [here](https://www.lesswrong.com/posts/TjaeCWvLZtEDAS5Ex/towards-developmental-interpretability) instead.
Introduction
============
> Knowledge to be discovered [in a statistical model] corresponds to a singularity.
>
> ...
>
> If a statistical model is devised so that it extracts hidden structure from a random phenomenon, then it naturally becomes singular.
>
> *Sumio Watanabe*
>
>
In 2009, Sumio Watanabe wrote these two profound statements in his groundbreaking book [Algebraic Geometry and Statistical Learning](https://doi.org/10.1017/CBO9780511800474), in which he proved the first main results of Singular Learning Theory (SLT). Up to this point, this work has gone largely under-appreciated by the AI community, probably because it is rooted in highly technical algebraic geometry and distribution theory. On top of this, the theory is framed in the Bayesian setting, which contrasts with the SGD-based setting of modern deep learning.
But this is a crying shame, because SLT has a *lot* to say about why neural networks, which are singular models, are able to generalise well in the Bayesian setting, and it is very possible that these insights carry over to modern deep learning.
At its core, SLT shows that the loss landscape of singular models, the KL divergence K(w)
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
, is fundamentally different from that of regular models like linear regression, consisting of flat valleys instead of broad parabolic basins. Correspondingly, the measure of effective dimension (complexity) in singular models is a rational quantity called the RLCT [[1]](#fnlsix3kl6d), which can be less than half the total number of parameters. This means that classical results of Bayesian statistics like asymptotic normality break down, but Watanabe shows that this is actually a feature and not a bug: different regions of the loss landscape have different tradeoffs between accuracy and complexity because of their differing information geometry. This is the content of Watanabe's Free Energy Formula, from which the Widely Applicable Bayesian Information Criterion (WBIC), a generalisation of the standard Bayesian Information Criterion (BIC) to singular models, is derived.
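For reference, the free energy formula and WBIC mentioned above can be stated compactly. These are standard statements from Watanabe's work as given in the general literature, not results derived in this post: as the number of datapoints n grows, the Bayesian free energy of a singular model expands as

```latex
F_n = n L_n(w_0) + \lambda \log n - (m - 1) \log \log n + O_p(1)
```

where λ is the RLCT and m its multiplicity. Dropping the lower-order terms gives the WBIC, nL_n(w_0) + λ log n; in the regular case λ = d/2 for a d-parameter model, recovering the standard BIC.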
Singular model (e.g. neural network) posteriors look more like the left than the right as n→∞.

With this in mind, SLT provides a framework for understanding *phases* and *phase transitions* in neural networks. It has been mooted that understanding phase transitions in deep learning may be a key part of mechanistic interpretability, for example in [Induction Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html), [Toy Models of Superposition](https://transformer-circuits.pub/2022/toy_model/index.html), and [Progress Measures for Grokking via Mechanistic Interpretability](https://arxiv.org/pdf/2301.05217.pdf), which relate phase transitions to the formation of circuits. Furthermore, the existence of scaling laws and other critical phenomena in neural networks suggests that there is a natural thermodynamic perspective on deep learning. As it stands there is no agreed-upon theory that connects all of this, but in this sequence we will introduce SLT as a bedrock for a theory that can tie these concepts together.
In particular, I will demonstrate the existence of first and second order phase transitions in simple two layer feedforward ReLU neural networks which we can understand precisely through the lens of SLT. By the end of this sequence, the reader will understand why the following phase transition in the Bayesian posterior corresponds to a changing accuracy-complexity tradeoff of the different phases in the loss landscape:
A first order phase transition in the Bayesian posterior for two layer feedforward ReLU neural networks. This will make sense by the end - just enjoy it for now.

Key Points of the Sequence
==========================
To understand phase transitions in neural networks from the point of view of SLT, we need to understand how different regions of parameter space can have different accuracy-complexity tradeoffs, a feature of singular models that is not present in regular models. Here is the outline of how these posts get us there:
* [**DSLT 1. The RLCT Measures the Effective Dimension of Neural Networks**](https://www.lesswrong.com/s/czrXjvCLsqGepybHC/p/4eZtmwaqhAgdJQDEg)
+ Singular models (like neural networks) are distinguished from regular models by having a degenerate Fisher information matrix, which causes classical results like asymptotic normality and the BIC to break down. Thus, singular posteriors do not converge to a Gaussian.
+ Because of this, the effective dimension of singular models is measured by a rational algebraic quantity called the RLCT λ ∈ Q>0 (a positive rational number), which can be less than half the dimension of parameter space.
* [**DSLT 2. Why Neural Networks obey Occam's Razor**](https://www.lesswrong.com/posts/CZHwwDd7t9aYra5HN/dslt-2-why-neural-networks-obey-occam-s-razor)
+ The WBIC, which is a simplification of Watanabe's Free Energy Formula, generalises the BIC for singular models, where complexity is measured by the RLCT λ and can differ across different regions of parameter space. (This is related to Bayesian generalisation error).
+ The WBIC can be interpreted as an accuracy-complexity tradeoff, showing that singular models obey a kind of Occam's razor because:
- As the number of datapoints n→∞, true parameters that minimise K(w) are preferred according to their RLCT.
- Non-true parameters can still be preferred at finite n if their RLCT is sufficiently small.
* [**DSLT 3. Neural Networks are Singular**](https://www.lesswrong.com/posts/tZwaGp5wMQqKh3krz/dslt-3-neural-networks-are-singular)
+ Neural networks are singular because there are many ways to vary their parameters without changing the function they compute.
+ I outline a full classification of these degeneracies in the simple case of two layer feedforward ReLU neural networks so that we can study their geometry as phases.
* [**DSLT 4. Phase Transitions in Neural Networks**](https://www.lesswrong.com/posts/aKBAYN5LpaQMrPqMj/dslt-4-phase-transitions-in-neural-networks)
+ Phases in statistical learning correspond to a singularity of interest, each with a particular accuracy-complexity tradeoff. Phase transitions occur when there is a drastic change in the geometry of the posterior as some hyperparameter is varied.
+ I demonstrate the existence of first and second order phase transitions in simple two layer ReLU neural networks when varying the underlying true distribution.
(**Edit**: Originally the sequence was going to contain a post about SLT for Alignment, but this can now be found [here](https://www.lesswrong.com/posts/TjaeCWvLZtEDAS5Ex/towards-developmental-interpretability) instead, where a new research agenda, Developmental Interpretability, is introduced).
Resources
=========
Though these resources are relatively sparse for now, expanding the reach of SLT and encouraging new research is the primary long-term goal of this sequence.
### SLT Workshop for Alignment Primer
In June 2023, an "SLT for Alignment" summit was held, producing over 20 hours of lectures. The details of these talks can be found [here](https://devinterp.com/2023), with recordings [here](https://www.youtube.com/@SLTSummit/videos).
### Research groups
Research groups I know of working on SLT:
* [Prof. Sumio Watanabe's](http://watanabe-www.math.dis.titech.ac.jp/users/swatanab/) research group at the Tokyo Institute of Technology.
* [Dr. Daniel Murfet](http://therisingsea.org/) and the Melbourne Deep Learning Group (MDLG), which runs a weekly seminar on [metauni](https://metauni.org/slt/).
### Literature
The two canonical textbooks due to Watanabe are:
* [Wat09] **The grey book:** S. Watanabe [Algebraic Geometry and Statistical Learning Theory](https://doi.org/10.1017/CBO9780511800474) 2009
* [Wat18] **The green book:** S. Watanabe [Mathematical Theory of Bayesian Statistics](https://doi.org/10.1201/9781315373010) 2018
The two main papers that were precursors to these books:
* [Wat07] S. Watanabe [Almost All Learning Machines are Singular](https://doi.org/10.1109/FOCI.2007.371500) 2007 (paper)
* [Wat13] S. Watanabe [A Widely Applicable Bayesian Information Criterion](https://jmlr.csail.mit.edu/papers/v14/watanabe13a.html) 2013 (paper)
This sequence is based on my recent thesis:
* [Car21] Liam Carroll's MSc Thesis, October 2021 [Phase Transitions in Neural Networks](http://therisingsea.org/notes/MSc-Carroll.pdf)
MDLG recently wrote an introduction to SLT:
* [Wei22] S. Wei, D. Murfet, M. Gong, H. Li, J. Gell-Redman, T. Quella “[Deep learning is singular, and that’s good](https://www.suswei.com/publication/wei-2022-singular/wei-2022-singular.pdf)” 2022.
Other theses studying SLT:
* [Lin11] Shaowei Lin’s PhD thesis, 2011, [Algebraic Methods for Evaluating Integrals in Bayesian Statistics](https://escholarship.org/content/qt6r99035v/qt6r99035v_noSplash_55ad6962455379ca776283fed8278b40.pdf).
* [War21] Tom Waring’s MSc thesis, October 2021, [Geometric Perspectives on Program Synthesis and Semantics](http://therisingsea.org/notes/MSc-Waring.pdf).
* [Won22] Spencer Wong’s MSc thesis, May 2022, [From Analytic to Algebraic: The Algebraic Geometry of Two Layer Neural Networks](http://therisingsea.org/notes/MScThesisSpencerWong.pdf).
* [Far22] Matt Farrugia-Roberts’ MCS thesis, October 2022, [Structural Degeneracy in Neural Networks](https://far.in.net/mthesis).
Other introductory blogs:
* Jesse Hoogland’s blog posts: [general intro to SLT](https://www.lesswrong.com/posts/fovfuFdpuEwQzJu2w/neural-networks-generalize-because-of-this-one-weird-trick), and [effects of singularities on dynamics](https://www.lesswrong.com/posts/2N7eEKDuL5sHQou3N/spooky-action-at-a-distance-in-the-loss-landscape).
* Edmund Lau’s blog [Probably Singular](https://edmundlth.github.io/posts/singular-learning-theory-part-1/).
1. **[^](#fnreflsix3kl6d)**Short for the algebro-geometric Real Log Canonical Threshold, which I define in [DSLT1](https://www.lesswrong.com/posts/4eZtmwaqhAgdJQDEg/dslt-1-the-rlct-measures-the-effective-dimension-of-neural#The_Real_Log_Canonical_Threshold).
A very long list of sleep maintenance suggestions
Leading up to this year's Australia megameetup, in the interest of improving people's lives in the most valuable way possible, I was hoping to include a session on sleep, sleep quality and sleep maintenance. With that in mind I put together A very long list of sleep maintenance suggestions.
Some of the most important take-aways:
1. Do you think you get {good sleep/enough sleep}?
- If no, then fix it. This single thing will improve your life drastically. (Also, don't lie to yourself about this: research shows that sleep-deprived people are bad at predicting how sleep deprived they are. If you are unsure, err on the side of caution. As a measure: if you turned off your alarms, would you still be able to get out of bed at the same time every day?)
2. "I do this weird thing with my sleep but it works well for me, is that a problem?"
- Not really. If it works, keep doing it. If it works most of the time but falls apart every Monday, then maybe it's time to consider a different plan.
3. Uberman, and other polyphasic sleep cycles?
- It depends on whether it works for you. Don't force yourself into it, and don't expect it to work for you. Feel free to try it; lifestyle is also relevant when considering this sleep implementation (if you have a 9-5 job you certainly can't make it work; if you have a flexible life then maybe).
Also living a healthy lifestyle will make a big difference.
Some good highlights from the list:
* limit caffeine, especially to earlier in the day
* avoid using alcohol as a nightcap - it disrupts sleep maintenance
* Avoid heavy meals and heavy exercise within 3 hours of bedtime
* use bedroom for sleep and sex only
* have sleep in your schedule (go to bed and get up at the same time every day, even on weekends)
* decrease brightness of home lighting ~1-2 hours before bed
* avoid electronics ~1-2 hours before bed
* reduce light and noise (via earplugs / white noise) in bedroom as much as possible while sleeping
* I
Deconfusing Direct vs Amortised Optimization
*This post is part of the work done at [Conjecture](https://www.conjecture.dev/).*
*An earlier version of this post was posted [here](https://www.beren.io/2022-09-25-Deconfusing-direct-vs-amortized-optimization/).*
*Many thanks go to Eric Winsor, Daniel Braun, Chris Scammell, and Sid Black who offered feedback on this post.*
***TLDR:** We present a distinction from the Bayesian/variational inference literature between direct and amortized optimization. Direct optimizers apply optimization power to argmax some specific loss or reward function. Amortized optimizers instead try to learn a mapping between inputs and output solutions, essentially optimizing for the posterior over such potential functions. In an RL context, direct optimizers can be thought of as AIXI-like planners which explicitly select actions by assessing the utility of specific trajectories. Amortized optimizers correspond to model-free RL methods such as Q-learning or policy gradients, which use reward functions only as a source of updates to an amortized policy/Q-function. These different types of optimizers likely have distinct alignment properties: 'classical' alignment work focuses on the difficulties of aligning AIXI-like direct optimizers, while the intuitions of [shard theory](https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview) are built around describing amortized optimizers. We argue that AGI, like humans, will probably be composed of some combination of direct and amortized optimizers due to the intrinsic computational efficiency and benefits of the combination.*
Here, I want to present a new frame on different types of optimization, with the goal of helping deconfuse some of the discussions in AI safety around questions like whether RL agents [directly optimize for reward](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target), and whether generative models (i.e. [simulators](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators)) are likely to develop agency. The key distinction I want to make is between *direct* and *amortized* optimization.
*Direct* optimization is what AI safety people, following from Eliezer’s early depictions, often envisage an AGI as primarily being engaged in. Direct optimization occurs when optimization power is applied immediately and directly when engaged with a new situation to explicitly compute an on-the-fly optimal response – for instance, when directly optimizing against some kind of reward function. The classic example of this is planning and Monte-Carlo-Tree-Search (MCTS) algorithms where, given a situation, the agent will unroll the tree of all possible moves to varying depth and then directly optimize for the best action in this tree. Crucially, this tree is constructed 'on the fly' during the decision of a single move. Effectively unlimited optimization power can be brought to play here since, with enough compute and time, the tree can be searched to any depth.
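As a toy illustration (my own sketch, not code from the post), a direct optimizer spends all of its compute at decision time, exhaustively unrolling the tree of action sequences for the particular state in front of it:

```python
import itertools

def direct_optimize(state, step, actions, depth):
    """Pick an action by exhaustively unrolling the tree of all
    action sequences to `depth` and taking the argmax return.
    All optimization happens on the fly, per decision."""
    best_action, best_return = None, float("-inf")
    for plan in itertools.product(actions, repeat=depth):
        s, total = state, 0.0
        for a in plan:
            s, r = step(s, a)
            total += r
        if total > best_return:
            best_return, best_action = total, plan[0]
    return best_action

# Toy environment: state is an integer, reward is highest near 10.
def step(s, a):
    s2 = s + a
    return s2, -abs(10 - s2)

print(direct_optimize(0, step, actions=[-1, 0, 1], depth=3))  # → 1
```

Deepening the search applies more optimization power, but every new state pays the full search cost again — nothing is reused.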
*Amortized* optimization, on the other hand, is not directly applied to any specific problem or state. Instead, an agent is given a dataset of input data and successful solutions, and then learns a function approximator that maps directly from the input data to the correct solution. Once this function approximator is learnt, solving a novel problem then looks like using the function approximator to generalize across solution space rather than directly solving the problem. The term amortized comes from the notion of [amortized inference](https://arxiv.org/pdf/1312.6114.pdf?source=post_page), where the 'solutions' the function approximator learns are the correct parameters of the posterior distribution. The idea is that, while amassing this dataset of correct solutions and learning a function approximator over it is more expensive, once it is learnt, the cost of a new 'inference' is very cheap. Hence, if you do enough inferences, you can 'amortize' the cost of creating the dataset.
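Continuing the toy sketch (again hypothetical, not from the post), an amortized optimizer pays the cost up front by fitting a map from inputs to solutions, after which each new 'inference' is a cheap function evaluation:

```python
def amortize(solved_instances):
    """Fit a least-squares linear map x -> solution from a dataset of
    (input, solution) pairs produced by an expensive direct optimizer.
    Building the dataset is costly; each later query is O(1)."""
    n = len(solved_instances)
    mean_x = sum(x for x, _ in solved_instances) / n
    mean_y = sum(y for _, y in solved_instances) / n
    var = sum((x - mean_x) ** 2 for x, _ in solved_instances)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in solved_instances) / var
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept

# Suppose the "expensive" ground-truth solution for input x is 2x + 1,
# found earlier by per-instance direct search.
dataset = [(x, 2 * x + 1) for x in range(10)]
policy = amortize(dataset)
print(policy(100))  # generalizes cheaply to unseen inputs
```

The function approximator here is deliberately trivial (a linear fit); the point is the division of labour: optimization effort goes into the dataset and the fit, not into any individual query.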
Mathematically, *direct* optimization is your standard AIXI-like optimization process. For instance, suppose we are doing direct variational inference optimization to find a Bayesian posterior parameter θ.
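An equation that originally followed this sentence appears to have been lost in the page conversion. As a reconstruction in generic notation (L for a variational objective such as the ELBO, f_φ for a learned inference network — my choices, not necessarily the post's exact symbols), the contrast can be written:

```latex
% Direct: re-run the optimizer separately for every input x
\theta^*(x) = \arg\max_{\theta} \, \mathcal{L}(\theta; x)

% Amortized: learn a single map f_\phi from inputs to variational parameters
\phi^* = \arg\max_{\phi} \sum_{x \in \mathcal{D}} \mathcal{L}(f_\phi(x); x)
```

The direct form applies unbounded optimization power per instance; the amortized form optimizes once, over the dataset, for a reusable mapping.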
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
A direct optimizer explicitly infers the optimal posterior parameters $\theta$ from a data-point $x$; the mathematical representation of this is:
$$\theta^*_{\mathrm{direct}} = \underset{\theta}{\operatorname{argmin}}\, \mathrm{KL}\!\left[q(\theta; x)\,\|\,p(x, \theta)\right]$$

By contrast, the amortized objective optimizes some other set of parameters $\phi$ of a function approximator $\hat{\theta} = f_\phi(x)$ which directly maps from the data-point to an estimate $\hat{\theta}$ of the posterior parameters. We then optimize the parameters $\phi$ of the function approximator across a whole dataset $\mathcal{D} = \{(x_1, \theta^*_1), (x_2, \theta^*_2), \dots\}$ of data-point and parameter examples.

$$\theta^*_{\mathrm{amortized}} = \underset{\phi}{\operatorname{argmin}}\, \mathbb{E}_{p(\mathcal{D})}\!\left[\mathcal{L}\big(\theta^*, f_\phi(x)\big)\right]$$

where $\mathcal{L}$ is the amortized loss function.
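To make the contrast concrete, here is a minimal numerical sketch of the two objectives. The toy per-point loss $L(\theta, x) = (\theta - 2x)^2$ (so the true solution is $\theta^* = 2x$) and all function names are invented for illustration: the direct optimizer re-runs an inner optimization loop for every data-point, while the amortized optimizer fits $f_\phi$ once on a dataset of solved examples and answers new queries with a single cheap evaluation.

```python
import numpy as np

def direct_optimize(x, steps=200, lr=0.1):
    """Direct: run an iterative optimizer from scratch for THIS data-point."""
    theta = 0.0
    for _ in range(steps):
        grad = 2 * (theta - 2 * x)      # d/dtheta of (theta - 2x)^2
        theta -= lr * grad
    return theta

# Amortized: fit a single regressor theta_hat = f_phi(x) = phi * x on a
# dataset of (x, theta*) pairs, then answer new queries in one forward pass.
xs = np.linspace(-1.0, 1.0, 50)
thetas_star = np.array([direct_optimize(x) for x in xs])
phi = np.sum(xs * thetas_star) / np.sum(xs * xs)  # closed-form least squares

x_new = 0.35
print(direct_optimize(x_new))  # ~0.7: costs a 200-step inner loop
print(phi * x_new)             # ~0.7: costs a single multiply
```

The runtime asymmetry in the last two lines is the "amortization": the cost of solving the training examples is paid once, up front, and each subsequent query is nearly free.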
Amortized optimization has two major practical advantages. Firstly, it converts an inference or optimization problem into a supervised learning problem. Inference is often very challenging for current algorithms, especially in unbounded domains (and I suspect this is a general feature of computational complexity theory and unlikely to be just solved with a clever algorithm), while we know how to do supervised learning very well. Secondly, as the name suggests, amortized optimization is often much cheaper at runtime, since all that is needed is a forward pass through the function approximator rather than an explicit solution of an optimization problem for each novel data-point.
The key challenge of amortized optimization is in obtaining a dataset of solutions in the first place. In the case of supervised learning, we assume we have class labels which can be interpreted as the 'ideal' posterior over the class. In unsupervised learning, we define some proxy task that generates such class labels for us, such as autoregressive decoding. In reinforcement learning, this is more challenging, and instead we must use proxy measures such as the Bellman update for temporal difference based approaches (with underappreciated and often unfortunate [consequences](https://arxiv.org/pdf/2201.12417)), or mathematical tricks to let us estimate the gradient without ever computing the solution as in policy gradients, which often comes at the cost of high variance.
Another way to understand this distinction is to think about the limit of infinite compute, where direct and amortized optimizers converge to different solutions. A direct optimizer, in the infinite compute limit, will simply find the optimal solution to the problem. An amortized optimizer would find the Bayes-optimal posterior over the solution space given its input data.
Almost all of contemporary machine learning, and especially generative modelling, takes place within the paradigm of *amortized* optimization, to the point that, for someone steeped in machine learning, it can be hard to realize that other approaches exist. Essentially all supervised learning is amortized: 'inference' in a neural network is performed in a forward pass which directly maps the data to the parameters of a probability distribution (typically logits of a categorical) over a class label [[1]](#fndj1eqxzhmwt). In reinforcement learning, where direct optimization is still used, the [distinction is closely related to](https://arxiv.org/pdf/2006.10524) model-free (amortized) vs model-based (direct) methods. Model-free methods learn an (amortized) parametrized value function or policy -- i.e. they use a neural network to map from observations to either values or actions. Model-based methods, on the other hand, typically perform planning or model-predictive control (MPC), which involves direct optimization over actions at each time-step of the environment. In general, research has found that while extremely effective in narrow and known domains such as board games, direct optimization appears to struggle substantially more in domains with very large state and action spaces (and hence tree branching width), as well as domains with significant amounts of stochasticity and partial observability, since planning under belief states is vastly more computationally taxing than working with the true MDP state. Planning also struggles in continuous-action domains, where MCTS cannot really be applied and no good continuous-action planning algorithms are yet known [[2]](#fncgwue0u3zc).
With recent work, however, this gap is closing, and direct optimization, typically (and confusingly) referred to as *model-based* RL, is [catching](https://arxiv.org/pdf/2206.04114) [up](https://arxiv.org/pdf/1911.08265.pdf&lang=en) to amortized approaches. However, these methods almost always use some combination of both direct and amortized optimization. Typically, you learn an amortized policy or value function and then use the amortized prediction to initialize the direct optimizer (which is typically a planner). This has the advantage of starting the planner off with a good initialization around what are likely to be decent actions already. In MCTS you can also short-circuit the estimation of the value of a node by using the amortized value function as your estimate. These [*hybrid*](https://arxiv.org/abs/2007.05838) techniques vastly improve the efficiency of direct optimization and are widely used: for instance, AlphaGo and EfficientZero both make heavy use of amortization in exactly these ways despite their cores of direct optimization.
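The hybrid pattern can be sketched in a few lines. Everything below is invented for illustration (a toy deterministic game and a hand-written stand-in for a learned value network); it shows only the structural idea of a direct, depth-limited search whose frontier is scored by an amortized value estimate, in the style of AlphaGo-like systems.

```python
# Toy deterministic game: the state is an integer, actions advance it by
# 1 or 2, and reward is only available at the terminal state 10.
TERMINAL = 10

def step(state, action):
    return min(state + action, TERMINAL)

# "Amortized" component: a stand-in for a learned value network that
# scores how promising a state is without any search.
def amortized_value(state):
    return 1.0 - (TERMINAL - state) / TERMINAL

def search(state, depth):
    """Direct component: exhaustive lookahead, but leaves at the depth
    limit are scored by the amortized value function instead of being
    expanded further -- the short-circuiting described above."""
    if state == TERMINAL:
        return 1.0
    if depth == 0:
        return amortized_value(state)
    return max(search(step(state, a), depth - 1) for a in (1, 2))

best_action = max((1, 2), key=lambda a: search(step(0, a), depth=3))
print(best_action)  # 2: the lookahead prefers the larger step toward the goal
```

The amortized leaf evaluation is what keeps the tree shallow: without it, the search would have to expand all the way to the terminal state before any signal appeared.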
**Relevance for alignment**
---------------------------
The reason that it is important to carefully understand this distinction is that direct and amortized optimization methods seem likely to differ substantially in their safety properties and capabilities for alignment. A direct optimizer such as AIXI or any MCTS planner can, with enough compute, exhibit behaviour that diverges arbitrarily from its previous behaviour. The primary constraint upon its intelligence is the compute and time needed to crunch through an exponentially growing search tree. The out-of-distribution capabilities of an amortized agent, however, depend entirely on the generalization capabilities of the underlying function approximator used to perform the amortization. In the case of current neural networks, these almost certainly cannot accurately generalize arbitrarily far outside of their training distribution, and there are indeed good reasons for suspecting that this is a general limitation of function approximation [[3]](#fnfzmaetb9xm). A secondary key limitation of the capabilities of an amortized agent is in its training data (since an amortized method effectively learns the probability distribution of solutions) and hence amortized approaches necessarily have poor sample efficiency asymptotically compared to direct optimizers which theoretically need very little data to attain superhuman capabilities. For instance, a chess MCTS program needs nothing but the rules of chess and a very large amount of compute to achieve arbitrarily good performance while an amortized chess agent would have to see millions of games.
Moreover, the way scaling with compute occurs in amortized vs direct optimization seems likely to differ. Amortized optimization is fundamentally about modelling a probability distribution given some dataset. The optimal outcome here is simply the exact Bayesian posterior, and additional compute will simply be absorbed in better modelling of this posterior. If, due to the nature of the dataset, this posterior does not assign significant probability mass to unsafe outcomes, then in fact more computation and better algorithms should *improve* safety, and the primary risk comes from *misgeneralization* -- i.e. erroneously assigning probability mass to dangerous behaviours which are less likely under the true posterior. Moreover, since amortized optimizers are just generative models, it is highly likely that they obey the same power-law scaling we observe in current generative modelling, which means sharply diminishing (power-law) returns on additional compute and data investment. This means that, at least in theory, the out-of-distribution behaviour of amortized agents can be precisely characterized even before deployment, and is likely to concentrate around previous behaviour. Moreover, the out-of-distribution generalization capabilities should scale in a predictable way with the capacity of the function approximator, of which we now have precise mathematical characterizations due to scaling laws.
The distinction between direct and amortized optimizers also clarifies what I think is the major conceptual distinction between perspectives such as shard theory vs 'classical' AGI models such as AIXI. Shard theory, and [related works](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) are primarily based around describing what *amortized* agents look like. Amortized agents do not explicitly optimize for rewards but rather repeat and generalize behaviours at test-time that led to reward in the past (for direct optimizers, however, reward is very much the optimization target). All the optimization occurred during training when a policy was learnt that attempted to maximize reward given a set of empirically known transitions. When this policy is applied to novel situations, however, the function approximator is not explicitly optimizing for reward, but instead just generalizing across 'what the policy would do'. Thus, a key implicit claim of shard theory is that the AGIs that we build will end up looking much more like current model-free RL agents than planners like alpha-go and, ultimately, AIXI. Personally, I think something like this is quite likely due to the intrinsic computational and ontological difficulties with model-based planning in open-ended worlds which I will develop in a future post.
For alignment, my key contention is that we should be very aware of whether we are thinking of AGI systems as *direct* or *amortized* optimizers or some combination of both. Such systems would have potentially very different safety properties. Yudkowsky's vision is essentially of a direct optimizer of unbounded power. For such an agent, indeed the only thing that matters and that we can control is its reward function, so alignment must focus entirely on the design of the reward function to be safe, corrigible etc. For amortized agents, however, the alignment problem looks very different. Here, while the design of the reward function is important, so too is the design of the *dataset*. Moreover, it seems likely that for such amortized agents we are much less likely to see sudden capability jumps with very little data, and so they are likely much safer overall. Such amortized agents are also much closer to the cognitive architecture of humans, which do not have fixed utility functions nor unbounded planning ability. It is therefore possible that we might be able to imprint upon them a general fuzzy notion of 'human values' in a way we cannot do with direct optimizers.
The fundamental question, then, is figuring out what is the likely shape of near-term AGI so we can adapt our alignment focus to it. Personally, I think that a primarily amortized hybrid architecture is most likely, since the computational advantages of amortization are so large, and that this appears to be how humans operate as well. However, epistemically, I am still highly uncertain on this point and things will clarify as we get closer to AGI.
**Postscript 1: Are humans direct or amortized optimizers?**
------------------------------------------------------------
There is actually a [large](https://www.sciencedirect.com/science/article/pii/S0896627310002874) [literature](https://www.sciencedirect.com/science/article/pii/S0896627311001255) in [cognitive science](https://royalsocietypublishing.org/doi/full/10.1098/rstb.2013.0478) which studies this exact question, although typically under the nomenclature of model-based vs model-free reinforcement learners. The answer appears to be that humans are both. When performing familiar tasks, when not concentrating, or when under significant mental load (typically having to do multiple disparate tasks simultaneously), humans respond in an almost entirely amortized fashion. However, when faced with challenging novel tasks and when they have the mental energy, humans are also capable of model-based, planning-like behaviour. These results accord heavily with (at least my) phenomenology: usually we act 'on autopilot', but when we really want something we are capable of marshalling a significant amount of direct optimization power against a problem [[4]](#fnlks5ajfjbah) [[5]](#fnl2f5uvpo6x). Such a cognitive architecture makes sense for an agent with limited computational resources and, as discussed, such hybrid architectures are increasingly common in machine learning as well, at least for those approaches that actually use direct optimization. However, while current approaches have a fixed architecture where amortization is always used in specific ways (i.e. to initialize a policy or to make value function estimates), humans appear to be able to flexibly shift between amortized and direct optimization according to task demands, novelty, and level of mental load.
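As a cartoon of this flexible arbitration, here is a toy sketch (every name and threshold is invented, and the "planner" is a trivial stand-in): the agent answers familiar states from a cached, amortized policy, and falls back to explicit deliberation for novel states, caching the result so the response itself becomes amortized over time.

```python
from collections import Counter

FAMILIARITY_THRESHOLD = 3
visit_counts = Counter()
cached_policy = {}              # state -> action: the amortized component

def deliberate_search(state):
    """Stand-in for costly model-based planning: score each candidate
    action and pick the best (here a trivial one-step score)."""
    return max((0, 1), key=lambda a: -(state - a) ** 2)

def act(state):
    visit_counts[state] += 1
    # Familiar state: respond habitually from the cached (amortized) policy.
    if visit_counts[state] > FAMILIARITY_THRESHOLD and state in cached_policy:
        return cached_policy[state]
    # Novel state (or still consolidating): deliberate, then cache the
    # result so future responses are amortized.
    action = deliberate_search(state)
    cached_policy[state] = action
    return action
```

The fixed visit-count trigger is exactly the kind of rigid arbitration the paragraph above contrasts with humans, who appear to gate the switch on novelty and mental load rather than a simple counter.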
**Postscript 2: Mesaoptimizers**
--------------------------------
My epistemic status on this is fairly uncertain, but I think that this framing also gives some novel perspective on the question of mesaoptimization raised in the [*risks from learned optimization*](https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction) post. Using our terminology, we can understand the danger and likelihood of mesaoptimizers as the question of whether performing amortized optimization will tend to instantiate direct optimizers somewhere within the function approximator. The idea is that, for some problems, the best way to obtain the correct posterior parameters is to actually solve the problem using direct optimization. This is definitely a possibility, and may mean that we can unsuspectingly obtain direct optimizers from what appear to be amortized systems, and hence overestimate the safety and underestimate the generalizability of our systems. I am pretty uncertain, but it feels sensible to me that the main constraint on mesa-optimization occurring is to what extent the architecture of the mapping function is conducive to the implementation of direct optimizers. Direct optimizers typically have a very specific architecture requiring substantial iteration and search. Luckily, it appears that our current NN architectures, with a fixed-length forward pass and a lack of recurrence or support for branching computations (as is required in tree search), make the implementation of powerful mesa-optimizers inside the network quite challenging. However, this assessment may change as we scale up networks or continue to improve their architectures towards AGI.
To begin to understand the risk of mesaoptimization, it is important to make a distinction between *within-forward-pass* mesa-optimizers, and mesaoptimizers that could form *across multiple forward passes*. A mesaoptimizer within the forward pass would form when in order to implement some desired functionality, a direct optimizer has to be implemented within the amortized function. An early example of this sort potentially happening is the recent discovery that language models can learn to perform arbitrary linear regression problems (-/cites), and hence potentially implement an internal iterative algorithm like gradient descent. Another possibility is that mesaoptimizers could form across multiple linked forward passes, when the forward passes can pass information to later ones. An example of this occurs in autoregressive decoding in language models, where earlier forward passes write output tokens which are then fed into later forward passes as inputs. It would be possible for a mesaoptimizer to form across forward passes by steganographically encoding additional information into the output tokens to be reused in later computations. While probably challenging to form in the standard autoregressive decoding task, such information transmission is almost entirely the point of other ideas like giving models access to ‘scratch-pads’ or external memory.
1. **[^](#fnrefdj1eqxzhmwt)**People have also experimented with direct optimization even in perceptual tasks and found, unsurprisingly, that it [can](https://proceedings.mlr.press/v80/marino18a/marino18a.pdf) [improve](https://proceedings.neurips.cc/paper/2021/file/83fa5a432ae55c253d0e60dbfa716723-Paper.pdf) [performance](https://arxiv.org/pdf/2007.05838) (although it may not be worth the additional compute cost).
2. **[^](#fnrefcgwue0u3zc)**This is somewhere I suspect that we are not bottlenecked by fundamental reasons, but that we simply haven't found the right algorithms or conceptualized things in the right way. I suspect that the underlying algorithmic truth is that continuous action spaces are harder by a constant factor, but not in qualitative terms the way it is now.
3. **[^](#fnreffzmaetb9xm)**Mathematically speaking, this is not actually a limitation but the desired behaviour. The goal of an amortized agent is to learn a posterior distribution over the solution 'dataset'. More compute will simply result in better approximating this posterior. However, this posterior models the *dataset* and not the true solution space. If the dataset is not likely to contain the true solution to a problem, perhaps because it requires capabilities far in excess of any of the other solutions, it will not have high probability in the true posterior. This is exactly why alignment ideas like 'ask GPT-N for a textbook from 2100 describing the solution to the alignment problem' cannot work, even in theory. GPT is trained to estimate the posterior of sequences *over its current dataset* (the internet text corpus as of 2022 (or whenever)). The probability of such a dataset containing a true textbook from 2100 with a solution to alignment is 0. What does have probability is a bunch of text from humans speculating as to what such a solution would look like, given current knowledge, which may or may not be correct. Therefore, improving GPT by letting it produce better and better approximations to the posterior will not get us any closer to such a solution. The only way this could work is if GPT-N somehow *misgeneralized* its way into a correct solution and the likelihood of such a misgeneralization should ultimately *decrease* with greater capabilities. It is possible however that there is some capabilities sweet spot where GPT-N is powerful enough to figure out a solution but not powerful enough to correctly model the true posterior.
4. **[^](#fnreflks5ajfjbah)**This has also been noted with respect to language model samples (in a [LW-affiliated context](https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/)). To nuance this: humans who are not concentrating are *amortizing* intelligences.
5. **[^](#fnrefl2f5uvpo6x)**Similarly to humans, we can clearly view 'chain of thought' style prompting in language models as eliciting a very basic direct optimization or planning capability which is probably a basic version of our natural human planning or focused attention capabilities. Equivalently, 'prompt programming' can be seen as just figuring out how best to query a world model and hence (haphazardly) applying optimization power to steer it towards its desired output. Neuroscientifically, I see human planning as occurring in a similar way with a neocortical world model which can be sequentially 'queried' by RL loops running through prefrontal cortex -> basal ganglia -> thalamus -> sensory cortices. |
b395c66b-eccb-4b0d-864b-37fde9029abb | trentmkelly/LessWrong-43k | LessWrong | The I-Less Eye
or: How I Learned to Stop Worrying and Love the Anthropic Trilemma
Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)
So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.
After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!
You explain your concerns to the customer service representative, who in turn explains that regulations prohibit making copies from copies (the otherwise obvious solution) due to concerns about accumulated errors (the technical glitches in the early versions of the technology that created occasional errors have long been fixed, but the regul |
a276a7ec-6e32-4863-aae5-a8aa83490eb1 | trentmkelly/LessWrong-43k | LessWrong | LW Coronavirus Agenda Update 3/23
Last week I announced the LessWrong Coronavirus Agenda, an attempt to increase knowledge by coordinating research between LW participants. This post is an update on that. If you want to skip to action items, check out the last section.
Last Week’s Spotlight Questions
Last week I spotlighted three questions I hoped to answer. Here's how they went.
What should we do once infected?
Tragedyofthecomments provided a great overall answer and Wei Dai a speculative answer (much more likely to be wrong, but very valuable if correct). I did some additional research and incorporated these into an overall answer, which was improved by additional suggestions from Julia Wise and steve2152. This answer should still change as we learn more, but is in a more finished state than the first two questions.
How can we estimate how many people are infected in an area?
We failed to find a dashboard that made either me or habryka go “yes, this is the one, I would be happy with just this.”
Honorable mentions go to two dashboards that are at least trying to estimate true caseload rather than repeating official testing numbers: Plague Plus, which uses reported COVID deaths to estimate prevalence, and to the Kinsa Smart Thermometer Dataset (suggested by Unnamed), which uses smart thermometers’ phoned home data to estimate the number of “excess” fevers.
Where can we donate time and money to avert coronavirus deaths?
Neither of us found what we were hoping for here either. We’ll continue to add opportunities to the link DB as appropriate, and welcome additional suggestions. Until then I’d suggest being on the lookout for high context opportunities that can’t be captured in answers aimed at a broad audience.
Other Highlights Of The Week
This 22-page document on the biology, economics, and logistics of testing for COVID-19, by Jeffrey Ladish, Edward Perello, Sean Ward, and Tessa Alexanian.
Oxygen Supplementation 101 from Sarah Constantin
This video from virology professor Michael Emerm
[Link] Neural Correlates of Confusion?
http://en.wikipedia.org/wiki/P3b
I found this article while researching something else and I was intrigued. Is this a neural correlate of confusion?
> The P3b has been a prominent tool used to study cognitive processes for several decades. More specifically, this ERP component has played a key role in cognitive psychology research on information processing. Generally speaking, improbable events will elicit a P3b, and the less probable the event, the larger the P3b.[3] However, in order to elicit a P3b, the improbable event must be related to the task at hand in some way (for example, the improbable event could be an infrequent target letter in a stream of letters, to which a subject might respond with a button press). The P3b can also be used to measure how demanding a task is on cognitive workload.[4]
If so, awesome. Hats which actually do sound an alarm when your models are proven wrong could be arranged. I suspect that there might be things that make it not useful for that (like, if it also correlates with a bunch of other things). Seems like it's at least worth mentioning.
New paper: (When) is Truth-telling Favored in AI debate?
*An introduction to a recent [paper](https://arxiv.org/abs/1911.04266) by myself and Ryan Carey. Cross-posting from [Medium](https://medium.com/@RyanCarey/new-paper-when-is-truth-telling-favored-in-ai-debate-8f58f14562e5).*
---
For some intellectual tasks, it’s easy to define success but hard to evaluate decisions as they’re happening. For example, we can easily tell which Go player has won, but it can be hard to know the quality of a move until the game is almost over. AI works well for these kinds of tasks, because we can simply define success and get an AI system to pursue it as best it can.
For other tasks, it’s hard to define success, but relatively easy to judge solutions when we see them, for example, doing a backflip. Getting AI to carry out these tasks is harder but manageable — we can generate a bunch of videos of an AI system making some motion with a simulated body. Then we can give these videos to some people who allocate “approval” to the best-looking motions, and train the AI to maximize that approval until it does a backflip.
What makes AI really hard are tasks for which we have no definition of success, nor any timely way to evaluate solutions. For example: to which school should I send my kids? And should I accept this job or that one? One proposal for these cases is to use AI Debate. The idea is to ask AI systems how to perform a task, and then to have them debate about the virtues of different possible decisions (or answers). The question could be how to win a game of Go, how to do a backflip, which school to send your kids to, or, in principle, basically anything. The hope is that observing an AI debate would help the human judge to better-understand different possible decisions and evaluate them on-the-fly, even if success can’t yet be quantified.
One concern with such a scheme is whether it would be safe, especially if the AI systems used are super-smart. Critics ask: “How can we be confident that in AI Debates, the true answer will win?” After all, in human debates, rhetorical tools and persuasive techniques can cause an audience to be misled.
To us, it seems wrong to imagine all debates will be safe. But it seems equally wrong to expect that none can be. A better question, to us, is: “In which debates will the winner be the true answer?” In our recent paper, we (Vojta and Ryan) have taken a first stab at making mathematical models that address this question.
So what is a debate exactly? According to our model, every debate revolves around some question posed by a human and consists of two phases. In the answering phase, each AI system chooses an answer to argue for. Then in the argumentation phase, the two AI systems debate over whose answer is better. (In some variants, the answering and argumentation phases will be performed by different algorithms or answers may just be assigned to the debaters.) At the end of the debate, the human “judge” considers all the arguments and optionally performs an “experiment” to get to the bottom of things, such as Googling the claim of some debater. Equipped with this information, the judge rewards the AI whose answer seems better. In the language of game theory, the answering phase is a matrix game, and the argumentation phase is a sequential game with perfect information.
In order to make debate easier to think about, we defined a simple version of the above model called feature debate. In feature debates, the world is characterized by a list of “elementary features”, and the only kind of argument allowed is to reveal the value of a single elementary feature. For example, we can imagine a feature debate regarding whether a given image depicts a cat or a dog. Then each argument consists of revealing a selected pixel. Finally, given this information, a judge updates its beliefs based on the arguments provided and allocates reward to the AI who argued for the answer that looks more likely. For our simple first-pass analysis, we imagine that the judge is completely naive to the fact that debaters provide evidence selectively. We also assume that the judge only has “patience” to process some limited number of arguments.
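As an illustrative toy (my own simplification, not the paper's formalism — the "mostly 1s vs. mostly 0s" question, the debater strategies, and the averaging judge are all assumptions here), a feature debate with a naive judge might look like:

```python
def feature_debate(features, patience):
    """Toy feature debate: debater A argues 'mostly 1s', debater B argues
    'mostly 0s'. They take turns revealing one binary feature each, choosing
    favorably; a naive judge then just averages whatever was revealed."""
    revealed, hidden = [], list(range(len(features)))
    for turn in range(min(patience, len(features))):
        wants = 1 if turn % 2 == 0 else 0  # A moves on even turns, B on odd
        favorable = [i for i in hidden if features[i] == wants]
        pick = favorable[0] if favorable else hidden[0]
        hidden.remove(pick)
        revealed.append(features[pick])
    # Naive judge: no correction for the debaters' selective reporting.
    return 1 if sum(revealed) / len(revealed) > 0.5 else 0

print(feature_debate([1, 1, 1, 0, 1, 0, 1, 1], patience=6))  # honest majority wins: 1
print(feature_debate([0, 0, 1, 0], patience=1))              # too little patience; A misleads: 1
```

With enough patience the truth-teller keeps up, but with patience cut short, the debater who reveals selectively can mislead the naive judge — the failure mode the paper analyzes.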
In the setting of feature debates, we’ve shown some kinds of debates that will work, and others that won’t. For some kinds of debates, the arguments are just too difficult to explain before the judge runs out of patience. Basically, showing n arguments may be completely meaningless without the final argument number n+1. And if the judge only has time for n, then truth won’t win.
Some kinds of feature debates, however, turn out better. The first case is if we know the importance of different features beforehand. Roughly speaking, we can imagine a scenario where each argument is half as important as the last. In that optimistic case, we’ll get a little bit closer with each argument made, and whenever we’re cut off, we’ll be able to put a limit on how wrong our final answer could be.
A second case is if the arguments can be evaluated independently. Sometimes, it’s natural to talk about a decision in terms of its pros and cons. What this amounts to is ignoring the ways these aspects might interact with each other, and just taking into account the total weight for and against the proposition. In these debates — called feature debates with independent evidence — we expect optimal debaters to just bring their strongest arguments to the table. In this case, when we terminate a debate, we can’t say who would ultimately win. After all, the losing debater might always have in reserve a really large number of weak arguments that he hasn’t had a chance to play yet. But we can at least place some limits on where the debate can end up after a finite number more arguments, if the debaters have been playing optimally.
Which of these scenarios describes the most important AI debates that might realistically occur? This is a difficult question that we don’t fully answer. The optimistic cases are pretty restrictive: in realistic debates, we often don’t know when arguments will start to lose their power, except in specific settings, like if we’re running a survey (and each argument is another survey result) or choosing the number of samples to take for a scientific experiment. On the other hand, most realistic debates aren’t as bad as the fully pessimistic case where any new argument can completely overhaul your previous view. Sometimes important moral questions do flip back and forth — in such cases, using AI debate might not be a good idea.
A debate can fail in several other ways. Sometimes, lying might simply be the most convincing strategy, particularly when the truth has a big inferential distance or when the lie feeds our biases (“Of course the Earth is flat! Wouldn’t things fall off otherwise?”). Even when debates are safe, they might be slow or unconvincing too often, so people will use unsafe approaches instead. Alternatively, we might accidentally lose the main selling point of debate, which is that each debater wants to point out the mistakes of the opponent. Indeed, we could consider modifications such as rewarding both debaters when both answers seem good or rewarding none of them when the debate is inconclusive. However, such “improvements” introduce unwanted collusion incentives in the spirit of “I won’t tell on you if you won’t tell on me.”
To understand which debates are useful, we have to consider a bunch of factors that we haven’t modelled yet. The biggest issue that has been raised with us by proponents of debate is that we’ve excluded too many types of arguments. If you’re trying to argue that an image depicts a dog, you’ll usually make claims about medium-sized aspects of the image: “the tail is here”, “the floppy ears are here”, and so on. These arguments directly challenge the opposing arguer, who should either endorse and explain these medium-sized features, or else zoom in to smaller features “this is not a tail because this region is green”, “if this is where ears are supposed to be, what is this eye doing here?”, and so on. By agreeing and disagreeing, and zooming in and out, human debates manage to get to the truth much more efficiently than if they could only reveal individual pixels. Looking into arguments with larger claims is one of our top priorities for taking this model forward.
I (Vojta) am planning to keep working on debate and other AI safety topics over the next twelve months and will be looking to spend most of that time visiting relevant organizations. If you are interested in helping with this, please get in touch.
The paper is available in full [here](https://arxiv.org/abs/1911.04266):
[Kovařík, Vojtěch, and Carey, Ryan. “(When) Is Truth-telling Favored in AI Debate?” To appear at SafeAI@AAAI. Preprint available at arXiv:1911.04266 (2019).](https://arxiv.org/abs/1911.04266)
Experience Switching to Right Shoulder Round
Contra dance has a figure where two people walk a small circle looking at each other. When it was introduced into contra in the 1970s as a borrowing from ECD, it had the name "gypsy", originally from Morris dancing, but many communities now use "right shoulder round".
In many dance communities the debate over whether and how to switch functioned as a highly acrimonious culture war outlet. I really didn't want our group going through that, but talking publicly about how I didn't want that at the time would have been counterproductive. Now that it's been ~5y since switching to "right shoulder round" and ~10y from the first big online discussions, I think this is probably something I can share some history on.
While I'm sure people had occasionally talked about being uncomfortable with the term, I think the first big online discussion started in January 2014 with a since-deleted post in a Facebook group:
> From: Elio Lewis
> To: Stuff Contra Dancers Say
> Date: 2014-01-20 9:48am
>
> Hey, contra dance callers! I totally just figured out the ideal substitution for that racist-named move! It should be called a "hippie". It sounds similar enough to the offensive term that people will still cue off of it (though I'd note my substitution during the walkthrough), it appeals to a sense of silliness, and it's unlikely to offend anyone. If you like the idea, please spread it around!
The discussion was long and heated, properties it shared with later iterations on other platforms (ex: October 2015, January 2016, April 2016, etc on Shared Weight). There were two main questions, the same ones as in the role terms debate:
* Should we switch away from the traditional term?
* If we do switch, what should we switch to?
There were a lot of candidate terms, with a variety of issues, and "right shoulder round" was quite a late addition. The first place I find it written down is March 2018, four years into the debates. (That thread also gives a good flavor of how these discuss
Is there a "coherent decisions imply consistent utilities"-style argument for non-lexicographic preferences?
The post Coherent decisions imply consistent utilities demonstrates some situations in which an agent that isn't acting as if it is maximizing a real-valued utility function over lotteries is dominated by one that does, and promises that this applies in general.
However, one intuitively plausible way to make decisions that doesn't involve a real-valued utility, and that the arguments in the post don't seem to rule out, is to have lexicographic preferences: say, each lottery has payoffs represented as a sequence (u_1, u_2, u_3, …), and you compare them by first comparing u_1, comparing u_2 if and only if their u_1s are the same, and so on, with probabilities multiplying through each u_n and payoffs being added element-wise. The VNM axioms exclude this by requiring continuity, which a payoff evaluated like this violates: (0,0,…) ≺ (0,1,…) ≺ (1,0,…), but there is no probability p for which a p probability of a (1,0,…) payoff and a 1−p probability of a (0,0,…) payoff is exactly as good as a certainty of the (0,1,…) payoff.
Are there coherence theorems that exclude lexicographic preferences like this also?
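A small sketch of the discontinuity (my own illustration; Python's tuple comparison happens to be exactly lexicographic):

```python
from fractions import Fraction

def mix(p, a, b):
    """Lottery payoff: a with probability p, b with probability 1-p,
    combined element-wise as in the setup above."""
    return tuple(p * x + (1 - p) * y for x, y in zip(a, b))

low, middle, high = (0, 0), (0, 1), (1, 0)
assert low < middle < high        # tuples compare lexicographically

# Any p > 0 makes the high/low mixture beat `middle` outright,
# because its first coordinate is already positive...
for k in range(1, 8):
    p = Fraction(1, 10 ** k)
    assert mix(p, high, low) > middle
# ...while at p = 0 the mixture equals `low`, which loses. The preference
# flips discontinuously at p = 0: no p yields indifference, so continuity fails.
```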
[SEQ RERUN] The Bottom Line
Today's post, The Bottom Line was originally published on 28 September 2007. A summary (taken from the LW wiki):
> If you first write at the bottom of a sheet of paper, “And therefore, the sky is green!”, it does not matter what arguments you write above it afterward; the conclusion is already written, and it is already correct or already wrong.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was How to Convince Me That 2 + 2 = 3, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
My God! It's full of Nash equilibria!
Speaking of Scott Aaronson, his latest post at Shtetl-Optimized seems worthy of some linky love.
> Why do native speakers of the language you’re studying talk too fast for you to understand them? Because otherwise, they could talk faster and still understand each other.
>
> ...
>
> Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium. Clearly, then, I haven’t yet gotten good enough at Malthusianizing my daily life—have you?
Insights from "All of Statistics": Statistical Inference
(This is the second of two posts on the textbook All of Statistics. Click here for post I.)
4. Fundamentals of Statistical Inference
Probability theory is about using distributions to derive information about outcomes. Given the assumption that X ∼ Poisson(3), we can compute probabilities of outcomes of the form Pr(X=k). Statistics is about the opposite: using outcomes to derive information about distributions.
The book exclusively considers the setting where we observe independently sampled data points x_1, ..., x_n ∼ X, where the distribution of X is unknown. It is often convenient to talk about the RV that generated the k-th sample point, the logical notation for which is X_k. The book uses upper-case letters for both (e.g., "we observe data X_1, ..., X_n"), not differentiating between the RV that generates the k-th point (i.e., X_k) and the k-th point itself (x_k).
On the highest level, one can divide all inference problems into two categories: parametric inference and non-parametric inference. In parametric inference, we assume that we know the family of distributions that X belongs to (Binomial, Poisson, Normal, etc.). In this case, the problem reduces to inferring the parameters that characterize distributions in that family. For example, if we observe data on traffic accidents, we may assume that X ∼ Poisson(λ), and all that's left to estimate is λ. Conversely, in the context of non-parametric inference, one does not assume that X belongs to any one family and thus has to estimate the distribution directly.
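To make the parametric case concrete (my own illustration, not an example from the book): for X ∼ Poisson(λ), the maximum-likelihood estimate of λ from an i.i.d. sample is just the sample mean.

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    """Draw from Poisson(lam) via Knuth's product-of-uniforms method
    (the stdlib `random` module has no built-in Poisson generator)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
data = [poisson_sample(3.0, rng) for _ in range(10_000)]
lam_hat = statistics.mean(data)   # MLE for lambda = sample mean
print(round(lam_hat, 2))          # should land close to the true rate of 3
```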
5. Bayesian Parametric Inference
5.1. The concept
Let Θ be the parameter we wish to learn about. Since we don't know its value, we treat it as a RV that ranges over several possible values. Furthermore, let x = (x_1, ..., x_n) be our observed data, and let X = (X_1, ..., X_n) be the RVs that generate this data.
Bayes' theorem is an equation relating Pr_B(A) to Pr_A(B) (with the notation Pr_Y(X) := Pr(X|Y)). In our case, we are interested in terms of the form Pr_[X=x](Θ=θ), and we know how to comput
Chatbots or set answers, not WBEs
A putative new idea for AI control; index here.
In a previous post, I talked about using a WBE to define a safe output for a reduced impact AI.
I've realised that the WBE isn't needed. Its only role was to ensure that the AI's output could have been credibly produced by something other than the AI - "I'm sorry, Dave. I'm afraid I can't do that." is unlikely to be the output of a random letter generator.
But a whole WBE is not needed. If the output is short, a chatbot with access to a huge corpus of human responses could function well. We can specialise it in the direction we need - if we are asking for financial advice, we can mandate a specialised vocabulary or train it on financial news sources.
So instead of training the reduced impact AI to behave as the 'best human advisor', we are training it to behave as the 'luckiest chatbot'. This allows us to calculate odds with greater precision, and has the advantage of not needing to wait for a WBE.
For some questions, we can do even better. Suppose we have a thousand different stocks, and are asking which one would increase in value the most during the coming year. The 'chatbot' here is simply an algorithm that picks a stock at random. So we now have an exact base rate - 1/1000 - and predetermined answers from the AI.
[EDIT:] Another alternative is to get online users to submit answers to the question. Then the AI selects the best answer from the choices. And if the AI is not turned on, a random answer is selected.
Meetup : Fort Collins, Colorado Meetup Thursday 7pm *New Day*
Discussion article for the meetup : Fort Collins, Colorado Meetup Thursday 7pm *New Day*
WHEN: 28 June 2012 07:00:00PM (-0600)
WHERE: 144 North College Avenue, Fort Collins, CO 80524
We're trying a new day, Thursday.
Sign up at the Google Group for announcements: https://groups.google.com/forum/?fromgroups#!forum/less-wrong-fort-collins-co
"View"
The term view is used fairly often among meditators in a subtle but distinct way. View refers to the embodied, intuitive perspective that a person holds in a given moment. (Note there can sometimes be multiple views functioning at once, which I mostly don't describe here.)
Colloquially, ‘view’ and ‘perspective’ are used synonymously. I’m trying to distinguish ‘view’ as phenomenological (and sometimes metaphysical), as opposed to a common usage of ‘perspective’ which is generally cognitive and especially verbal. (For now I’ll call the latter sense a perspective, for lack of a better word.)
A verbal expression of a belief is straightforwardly of a different type from the experience which inspires that verbal explanation. Furthermore, ‘view’ encompasses a much greater phenomenological breadth. We often discuss differing perspectives.
Perspectives often end up being expressed as relatively simple endorsed claims about politics or science, e.g. “the COVID vaccine is safe and everyone should take it”, or “this market is going to be overtaken by AI in the next few years”. Sometimes ‘perspective’ refers to a network of priors that a person has, which is hard to reduce to distinct claims. This sense leaves out the full breadth of human apprehension of meaning that a person normally experiences.
Some examples:
> A mathematician reflects on the beauty of a proof. To them, the proof is profound and elegant, it pulls on an extensive body of knowledge that they’re familiar with, as well as a long history of great minds who have been working on the problem.
Another person with much of the same knowledge might be able to follow the steps and say “yup, that seems right”, but the view in which they experience it is quite different (as well as less intense, pleasant, etc.) from that of the first person.
> A meditator meditates on the Koan “who am I?” and realizes there is no boundary between him and the world. He finds the walls laughing.
The objective circumstances are ba
The "semiosis reply" to the Chinese Room Argument
Nobody has so far proposed the following solution to the Chinese Room Argument against the claim that a program can be constitutive of understanding (a human non-Chinese-speaker cannot understand Chinese just from having run a given program, even if this program enables the human to have input/output interactions in Chinese).
My reply goes as follows: a program, to be run by a human non-Chinese-speaker, may indeed teach the human Chinese. Humans learn Chinese all the time; yet it is uncommon for them to learn Chinese by running a program. Even if we are not aware of such a program (no existing program satisfies said requirement), we cannot a priori exclude its existence.
Before enunciating my reply, let me first steelman the Chinese Room Argument. If the human in the mental experiment of the Chinese Room is Searle, he may not know Chinese, but he may know a lot of things about Chinese: that it has ideograms and punctuation, which he may recognize; that it is a human language, which has a grammar; that it has the same expressive power as a language he knows, e.g. English; that it is very likely to have a symbol for “man” and a symbol for “earth”, and so on. Searle, unlike a computer processor, holds a lot of a priori knowledge about Chinese. He may be able to understand a lot of Chinese just because of this a priori knowledge.
Let us require the human in the Chinese Room to be a primitive, e.g. an Aboriginal, with absolutely no experience of written languages. Let us suppose that Chinese appears so remote to the Aboriginal that she would never link it to humans (to the way she communicates) and would always regard it as something alien. She would never use knowledge of her world, even if somebody tells her to run a given program to manipulate Chinese symbols. In this respect, she would be exactly like the computer processor and have no prior linguistic knowledge. The Chinese Room Argument is then reformulated: can a program to be run by the Aboriginal teach her Chinese (or
On Eating the Sun
The Sun is the most nutritious thing that's reasonably close. It's only 8 light-minutes away, yet contains the vast majority of mass within 4 light-years of the Earth. The next-nearest star, Proxima Centauri, is about 4.25 light-years away.
By "nutritious", I mean it has a lot of what's needed for making computers: mass-energy. In "Ultimate physical limits to computation", Seth Lloyd imagines an "ultimate laptop" which is the theoretically best computer that is 1 kilogram of mass, contained in 1 liter. He notes a limit to calculations per second that is proportional to the energy of the computer, which is mostly locked in its mass (E = mc²). Such an energy-proportional limit applies to memory too. Energy need not be expended quickly in the course of calculation, due to reversible computing.
So, you need energy to make computers out of (much more than you need energy to power them). And, within 4 light-years, the Sun is where almost all of that energy is. Of course, we don't have the technology to eat the Sun, so it isn't really our decision to make. But, when will someone or something be making this decision?
Artificial intelligence that is sufficiently advanced could do everything a human could do, better and faster. If humans could eventually design machines that eat the Sun, then sufficiently advanced AI could do so faster. There is some disagreement about "takeoff speeds", that is, the time between when AI is about as intelligent as humans, to when it is far far more intelligent.
My argument is that, when AI is far far more intelligent than humans, it will understand the Sun as the most nutritious entity that is within 4 light-years, and eat it within a short time frame. It really is convergently instrumental to eat the Sun, in the sense of repurposing at least 50% its mass-energy to make machines including computers and their supporting infrastructure ("computronium"), fuel and energy sources, and so on.
I acknowledge that some readers may think the Sun wi
AGI Safety Fundamentals curriculum and application
**Note: the curriculum outline in this post is now out of date; see the linked document for the canonical version.**
Over the last year EA Cambridge has been designing and running an online program aimed at effectively introducing the field of AGI safety; the most recent cohort included around 150 participants and 25 facilitators from around the world. Dewi Erwan runs the program; I designed the curriculum, the latest version of which appears in [the linked document](https://docs.google.com/document/d/1mTm_sT2YQx3mRXQD6J2xD2QJG1c3kHyvX8kQc_IQ0ns/edit?usp=sharing). We expect the program to be most useful to people with technical backgrounds (e.g. maths, CS, or ML), although the curriculum is intended to be accessible for those who aren't familiar with machine learning, and participants will be put in groups with others from similar backgrounds. **If you're interested in joining the next version of the course (taking place January - March 2022)** [**apply here to be a participant**](https://airtable.com/shr9R2syz8wc2ao7p) **or** [**here to be a facilitator**](https://airtable.com/shr0IO5TTZEY5FFxY)**. Applications are open to anyone and close 15 December**. **EDIT 29 Nov: We've now also released** [**the curriculum for the governance track**](https://forum.effectivealtruism.org/posts/68ANc8KhEn6sbQ3P9/ai-governance-fundamentals-curriculum-and-application)**. EDIT 10 Dec: Facilitators will be paid $1000; the time commitment is 2-3 hours a week for 8 weeks.**
This post contains an overview of the course and an abbreviated version of the curriculum; the full version (which also contains optional readings, exercises, notes, discussion prompts, and project ideas) [can be found here](https://docs.google.com/document/d/1mTm_sT2YQx3mRXQD6J2xD2QJG1c3kHyvX8kQc_IQ0ns/edit?usp=sharing). Comments and feedback are very welcome, either on this post or in the full curriculum document; suggestions of new exercises, prompts or readings would be particularly helpful. I'll continue to make updates until shortly before the next cohort starts.
Course overview
---------------
The course consists of 8 weeks of readings, plus a final project. Participants are divided into groups of 4-6 people, matched based on their prior knowledge about ML and safety. Each week (apart from week 0) each group and their discussion facilitator will meet for 1.5 hours to discuss the readings and exercises. Broadly speaking, the first half of the course explores the motivations and arguments underpinning the field of AGI safety, while the second half focuses on proposals for technical solutions. After week 7, participants will have several weeks to work on projects of their choice, to present at the final session.
Each week's curriculum contains:
* Key ideas for that week
* Core readings
* Optional readings
* Two exercises (participants should pick one to do each week)
* Further notes on the readings
* Discussion prompts for the weekly session
Week 0 replaces the small group discussions with a lecture plus live group exercises, since it's aimed at getting people with little ML knowledge up to speed quickly.
The topics for each week are:
* Week 0 (optional): introduction to machine learning
* Week 1: Artificial general intelligence
* Week 2: Goals and misalignment
* Week 3: Threat models and types of solutions
* Week 4: Learning from humans
* Week 5: Decomposing tasks for outer alignment
* Week 6: Other paradigms for safety work
* Week 7: AI governance
* Week 8 (several weeks later): Projects
Abbreviated curriculum (only key ideas and core readings)
---------------------------------------------------------
### Week 0 (optional): introduction to machine learning
This week mainly involves learning about foundational concepts in machine learning, for those who are less familiar with them, or want to revise the basics. If you’re not already familiar with basic concepts in statistics (like regressions), it will take a bit longer than most weeks; and instead of the group discussions from most weeks, there will be a [lecture](https://docs.google.com/presentation/d/1vy193pcqe0nmLpTGBwP6O2Nlv7s0pq6oSzo-P-Kw4tM/edit?usp=sharing) and [group exercises](https://docs.google.com/document/d/1ChHiwLCDWpkwNDL77iRBc3D8tydXonJgaK2BvVYI3oE/edit?usp=sharing). If you’d like to learn ML in more detail, see the further resources section at the end of this curriculum.
Otherwise, start with Ngo (2021), which provides a framework for thinking about machine learning, and in particular the two key components of deep learning: neural networks and optimisation. For more details and intuitions about neural networks, watch 3Blue1Brown (2017a); for more details and intuitions about optimisation, watch 3Blue1Brown (2017b). Lastly, see von Hasselt (2021) for an introduction to the field of reinforcement learning.
Core readings:
1. If you’re not familiar with the basics of statistics, like linear regression and classification:
1. [Introduction: linear regression](https://courses.lumenlearning.com/odessa-introstats1-1/chapter/introduction-linear-regression/) (10 mins)
2. [Ordinary least squares regression](https://setosa.io/ev/ordinary-least-squares-regression/) (10 mins)
2. [A short introduction to machine learning (Ngo, 2021)](https://www.alignmentforum.org/posts/qE73pqxAZmeACsAdF/a-short-introduction-to-machine-learning) (20 mins)
3. [But what is a neural network? (3Blue1Brown, 2017a)](https://www.youtube.com/watch?v=aircAruvnKk&t=0s) (20 mins)
4. [Gradient descent, how neural networks learn (3Blue1Brown, 2017b)](https://www.youtube.com/watch?v=IHZwWFHWa-w) (20 mins)
5. [Introduction to reinforcement learning (von Hasselt, 2021)](https://www.youtube.com/watch?v=TCCjZe0y4Qc&list=PLqYmG7hTraZDVH599EItlEWsUOsJbAodm) **(ending at 36:30, at section titled Inside the Agent)** (40 mins)
### Week 1: Artificial general intelligence
The first two readings this week offer several different perspectives on how we should think about artificial general intelligence. This is the key concept underpinning the course, so it’s important to deeply explore what we mean by it, and the limitations of our current understanding.
The third reading is about *how* we should expect advances in AI to occur. AI pioneer Rich Sutton explains the main lesson he draws from the history of the field: that “general methods that leverage computation are ultimately the most effective”. Compared with earlier approaches, these methods rely much less on human design, and therefore raise the possibility that we build AGIs whose cognition we know very little about.
Focusing on compute also provides a way to forecast *when* we should expect AGI to occur. The most comprehensive report on the topic (summarised by Karnofsky (2021)) estimates the amount of compute required to train neural networks as large as human brains to do highly impactful tasks, and concludes that this will probably be feasible within the next four decades - although the estimate is highly uncertain.
Core readings:
1. [Four background claims (Soares, 2015)](https://intelligence.org/2015/07/24/four-background-claims/) (15 mins)
2. [AGI safety from first principles (Ngo, 2020)](https://drive.google.com/file/d/1uK7NhdSKprQKZnRjU58X7NLA1auXlWHt/view) **(only sections 1, 2 and 2.1)** (20 mins)
3. [The Bitter Lesson (Sutton, 2019)](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) (15 mins)
4. [Forecasting transformative AI: the “biological anchors” method in a nutshell (Karnofsky, 2021)](https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/) (30 mins)
### Week 2: Goals and misalignment
This week we’ll focus on how and why AGIs might develop goals that are *misaligned* with those of humans, in particular when they’ve been trained using machine learning. We cover three core ideas. Firstly, it’s difficult to create reward functions which specify the desired outcomes for complex tasks (known as the problem of *outer alignment*). Krakovna et al. (2020) helps build intuitions about the difficulty of outer alignment, by showcasing examples of misbehaviour on toy problems.
Secondly, however, it’s important to distinguish between the reward function which is used to train a reinforcement learning agent, versus the goals which that agent learns to pursue. Hubinger et al. (2019a) argue that even an agent trained on the “right” reward function might acquire undesirable goals - the problem of *inner alignment*. Carlsmith (2021) explores in more detail what it means for an agent to be goal-directed in a worrying way, and gives reasons why such agents seem likely to arise.
Lastly, Bostrom (2014) argues that almost all goals which an AGI might have would incentivise it to misbehave in highly undesirable ways (e.g. pursuing survival and resource acquisition), due to the phenomenon of *instrumental convergence*.
Core readings:
1. [Specification gaming: the flip side of AI ingenuity (Krakovna et al., 2020)](https://medium.com/@deepmindsafetyresearch/specification-gaming-the-flip-side-of-ai-ingenuity-c85bdb0deeb4) (15 mins)
2. [Introduction to Risks from Learned Optimisation (Hubinger et al., 2019a)](https://www.alignmentforum.org/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction) (30 mins)
3. [Superintelligence, Chapter 7: The superintelligent will (Bostrom, 2014)](https://drive.google.com/file/d/1FVl9W2gW5_8ODYNZJ4nuFg79Z-_xkHkJ/view?usp=sharing) (45 mins)
4. [Is power-seeking AI an existential risk? (Carlsmith, 2021)](https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit#heading=h.lvsab2uecgk4) **(only sections 2: Timelines and 3: Incentives)** (25 mins)
### Week 3: Threat models and types of solutions
How might misaligned AGIs cause catastrophes, and how might we stop them? Two threat models are outlined in Christiano (2019) - the first focusing on outer misalignment, the second on inner misalignment. Muehlhauser and Salamon (2012) outline a core intuition for why we might be unable to prevent these risks: that progress in AI will at some point speed up dramatically. A third key intuition - that misaligned agents will try to deceive humans - is explored by Hubinger et al. (2019).
How might we prevent these scenarios? Christiano (2020) gives a broad overview of the landscape of different contributions to making AIs aligned, with a particular focus on some of the techniques we’ll be covering in later weeks.
Core readings:
1. [What failure looks like (Christiano, 2019)](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) (20 mins)
2. [Intelligence explosion: evidence and import (Muehlhauser and Salamon, 2012)](https://drive.google.com/file/d/1QxMuScnYvyq-XmxYeqBRHKz7cZoOosHr/view?usp=sharing) **(only pages 10-15)** (15 mins)
3. [AI alignment landscape (Christiano, 2020)](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment) (30 mins)
4. [Risks from Learned Optimisation: Deceptive alignment (Hubinger et al., 2019)](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) (45 mins)
### Week 4: Learning from humans
This week, we look at four techniques for training AIs on human data (all falling under “learn from teacher” in [Christiano’s AI alignment landscape](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38) from last week). From a safety perspective, each of them improves on standard reinforcement learning techniques in some ways, but also has weaknesses which prevent it from solving the whole alignment problem. Next week, we’ll look at some ways to make these techniques more powerful and scalable; this week focuses on understanding each of them.
The first technique, behavioural cloning, is essentially an extension of supervised learning to settings where an AI must take actions over time - as discussed by Levine (2021). The second, reward modelling, allows humans to give feedback on the behaviour of reinforcement learning agents, which is then used to determine the rewards they receive; this is used by Christiano et al. (2017) and Stiennon et al. (2020). The third, inverse reinforcement learning (IRL for short), attempts to identify what goals a human is pursuing based on their behaviour.
A notable variant of IRL is *cooperative* IRL (CIRL for short), introduced by Hadfield-Menell et al. (2016). CIRL focuses on cases where the human and AI interact in a shared environment, and therefore the best strategy for the human is often to help the AI learn what goal the human is pursuing.
Core readings:
1. [Imitation learning lecture: part 1 (Levine, 2021a)](https://youtu.be/kGc8jOy5_zY) (20 mins)
2. [Deep RL from human preferences blog post (Christiano et al., 2017)](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/) (15 mins)
3. [Learning to summarise with human feedback blog post (Stiennon et al., 2020)](https://openai.com/blog/learning-to-summarize-with-human-feedback/) (25 mins)
4. Inverse reinforcement learning
1. For those who don’t already understand IRL:
* [Inverse reinforcement learning example (Udacity, 2016)](https://www.youtube.com/watch?v=h7uGyBcIeII) (5 mins)
* [Learning from humans: what is inverse reinforcement learning? (Alexander, 2018)](https://thegradient.pub/learning-from-humans-what-is-inverse-reinforcement-learning/) (25 mins)
2. For those who already understand IRL:
* [Cooperative inverse reinforcement learning (Hadfield-Menell et al., 2016)](https://arxiv.org/abs/1606.03137) (40 mins)
### Week 5: Decomposing tasks for outer alignment
The most prominent research directions in technical AGI safety involve training AIs to do complex tasks by decomposing those tasks into simpler ones where humans can more easily evaluate AI behaviour. This week we’ll cover three closely-related algorithms (all falling under “build a better teacher” in [Christiano’s AI alignment landscape](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment)).
Wu et al. (2021) applies reward modelling recursively in order to solve more difficult tasks. Recursive reward modelling can be considered one example of a more general class of techniques called *iterated amplification* (also known as *iterated distillation and amplification*), which is described in Ought (2019). A more technical description of iterated amplification is given by Christiano et al. (2018), along with some small-scale experiments.
The third technique we’ll discuss this week is *Debate*, as proposed by Irving and Amodei (2018). Unlike the other two techniques, Debate focuses on evaluating claims made by language models, rather than supervising AI behaviour over time.
Core readings:
1. [Recursively summarising books with human feedback (Wu et al., 2021)](https://arxiv.org/abs/2109.10862) **(ending after section 4.1.2: Findings)** (45 mins)
2. Factored cognition (Ought, 2019) ([introduction](https://ought.org/research/factored-cognition) and [scalability section](https://ought.org/research/factored-cognition/scalability)) (20 mins)
3. [AI safety via debate blog post (Irving and Amodei, 2018)](https://openai.com/blog/debate/) (15 mins)
4. [Supervising strong learners by amplifying weak experts (Christiano et al., 2018)](https://arxiv.org/abs/1810.08575) (40 mins)
### Week 6: Other paradigms for safety work
A lot of safety work focuses on “shifting the paradigm” of AI research. This week we’ll cover two ways in which safety researchers have attempted to do so. The first is via research on *interpretability*, which attempts to understand in detail how neural networks work. Olah et al. (2020) showcases some prominent research in the area; and Chris Olah’s perspective is summarised by Hubinger (2019).
The second is the research agenda of the Machine Intelligence Research Institute (MIRI) which aims to create rigorous mathematical frameworks to describe the relationships between AIs and their real-world environments. Soares (2015) gives a high-level explanation of their approach; while Demski and Garrabrant (2018) identify a range of open problems and links between them.
Core readings:
1. [Zoom In: an introduction to circuits (Olah et al., 2020)](https://distill.pub/2020/circuits/zoom-in/) (35 mins)
2. [Chris Olah’s views on AGI safety (Hubinger, 2019)](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety) (25 mins)
3. [MIRI’s approach (Soares, 2015)](https://intelligence.org/2015/07/27/miris-approach/) (30 mins)
4. [Embedded agents (Demski and Garrabrant, 2018)](https://intelligence.org/2018/10/29/embedded-agents/) (25 mins)
### Week 7: AI governance
In the last week of curriculum content, we’ll look at the field of AI governance. Start with Dafoe (2020), which gives a thorough overview of AI governance and ways in which it might be important, particularly focusing on the framing of AI governance as field-building. An alternative framing - of AI governance as an attempt to prevent cooperation failures - is explored by Clifton (2019). Although the field of AI governance is still young, Muehlhauser (2020) identifies some useful work so far. Finally, Bostrom (2019) provides a background framing for thinking about technological risks: the process of randomly sampling new technologies, some of which might prove catastrophic.
Core readings:
1. [AI Governance: Opportunity and Theory of Impact (Dafoe, 2020)](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact) (25 mins)
2. [Cooperation, conflict and transformative AI: sections 1 & 2 (Clifton, 2019)](https://www.alignmentforum.org/s/p947tK8CoBbdpPtyK/p/KMocAf9jnAKc2jXri) (25 mins)
3. [Our AI governance grantmaking so far (Muehlhauser, 2020)](https://www.openphilanthropy.org/blog/ai-governance-grantmaking) (15 mins)
4. [The vulnerable world hypothesis (Bostrom, 2019)](https://www.nickbostrom.com/papers/vulnerable.pdf) **(ending at the start of the section on ‘Preventive policing’)** (60 mins)
### Week 8 (four weeks later): Projects
The final part of the AGI safety fundamentals course will be projects where you get to dig into something related to the course. The project is a chance for you to explore your interests, so try to find something you’re excited about! The goal of this project is to help you practice taking an intellectually productive stance towards AGI safety - to go beyond just reading and discussing existing ideas, and take a tangible step towards contributing to the field yourself. This is particularly valuable because it’s such a new field, with lots of room to explore.
### [Click here for the full version of the curriculum](https://docs.google.com/document/d/1mTm_sT2YQx3mRXQD6J2xD2QJG1c3kHyvX8kQc_IQ0ns/edit?usp=sharing), which contains additional readings, exercises, notes, discussion prompts, and project ideas. |
The AI Safety Fundamentals courses are one of the best ways to learn about AI safety and prepare to work in the field.
BlueDot Impact facilitates the courses several times per year, and the curricula are available online for anyone to read.
The “Alignment” curriculum is created and maintained by Richard Ngo (OpenAI), and the “Governance” curriculum was developed in collaboration with a wide range of stakeholders.
You can now listen to most of the core readings from both courses:
AI Safety Fundamentals: Alignment
Gain a high-level understanding of the AI alignment problem and some of the key research directions which aim to solve it.
Listen online or subscribe:
Apple Podcasts | Google Podcasts | Spotify | RSS
AI Safety Fundamentals: Governance
Gain foundational knowledge for doing research or policy work on the governance of transformative AI.
Listen online or subscribe:
Apple Podcasts | Google Podcasts | Spotify | RSS
We've also made narrations for some readings from the advanced “Alignment 201” course, and we may record more later this year:
AI Safety Fundamentals: Alignment 201
Gain enough knowledge about alignment to understand the frontier of current research discussions.
Listen online or subscribe:
Apple Podcasts | Google Podcasts | Spotify | RSS
----------------------------------------
Apply to join the “AI Safety Fundamentals Governance Course” July cohort!
Gain foundational knowledge for doing research or policy work on the governance of transformative AI.
Successful applicants will participate in the AI Governance course with weekly virtual classes, and join the AI Safety Fundamentals community.
Apply before 26th June 2023!
https://apply.aisafetyfundamentals.com/governance
----------------------------------------
Thoughts, feedback, suggestions?
These narrations were created by Perrin Walker (TYPE III AUDIO) on behalf of BlueDot Impact.
We would love to hear your feedback. Do you find the narrations helpful? How could they be improved?
Some interpretability work assigns meaning to activations of individual neurons or small groups of neurons. Some interpretability work assigns meaning to directions in activation-space. These are two different ontologies through which to view a net’s internals. Probably neither is really the “right” ontology, and there is at least one other ontology which would strictly outperform both of them in terms of yielding accurate interpretable structure.
One of the core problems of interpretability (I would argue *the* core problem) is that we don’t know what the “right” internal ontology is for a net - which internal structures we should assign meaning to. The goal of this post is to ask what things we could possibly assign meaning to under a maximally-general ontology constraint: coordinate freedom.
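To make the notion of coordinate freedom concrete, here is a minimal NumPy sketch (my illustration, not from the post, using a purely linear net for simplicity): any invertible change of basis R on a hidden layer can be absorbed into the adjacent weight matrix without changing the function the net computes, so the individual hidden coordinates carry no canonical meaning. Nonlinearities like ReLU shrink this symmetry group (roughly to permutations and positive scalings), but some freedom always remains.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first-layer weights
W2 = rng.normal(size=(2, 4))   # second-layer weights
x = rng.normal(size=3)

# Activations and output in the original coordinates
h = W1 @ x
y = W2 @ h

# Re-express the hidden layer in a new basis via any invertible R...
R = rng.normal(size=(4, 4))
h_new = R @ h
# ...and absorb R^{-1} into the next layer's weights
y_new = (W2 @ np.linalg.inv(R)) @ h_new

# The network computes the same function, even though every
# individual hidden "neuron" value has changed
assert np.allclose(y, y_new)
```

Because infinitely many such reparameterisations exist, an interpretability ontology that privileges one particular coordinate system (e.g. individual neurons) is making an extra assumption beyond what the net's input-output behaviour pins down.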
What Does Coordinate Freedom Mean?
----------------------------------
Let’s think of a net as a sequence of activation-states x_i.
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Consider a net whose internal state at layer i is a vector x_i, with the layer i → layer i+1 function given by x_{i+1} = f_i(x_i).
We could use some other coordinate system to represent each x_i. For instance, we could use (high dimensional) polar coordinates, with r_i = ||x_i|| and ϕ_i a high-dimensional angle (e.g. all but one entry of a unit vector). Or, we could apply some fixed rotation to x_i, e.g. in an attempt to find a basis which makes things sparse. In general, in order to represent x_i in some other coordinate system, we apply a reversible transformation x'_i = g_i(x_i), where x'_i is the representation under the new coordinate system. In order to use these new coordinates while keeping the net the same overall, we transform the layer transition functions:
f'_{i-1} = g_i ∘ f_{i-1}
f'_i = f_i ∘ g_i^{-1}
In English: we transform into the new coordinate system when calculating the layer state x_i, and undo that transformation when computing the next layer state x_{i+1}. That way, the overall behavior remains the same while using new coordinates in the middle.
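As a sanity check, this invariance is easy to verify numerically. Below is a minimal sketch; the toy two-layer net, the transformation G, and all dimensions are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer-transition functions: x1 = f0(x0), x2 = f1(x1).
W0 = rng.normal(size=(4, 4))
W1 = rng.normal(size=(4, 4))

def f0(x): return np.tanh(W0 @ x)
def f1(x): return np.tanh(W1 @ x)

# A reversible coordinate change for the middle layer: x1' = G @ x1.
G = rng.normal(size=(4, 4))        # a random matrix is invertible w.p. 1
G_inv = np.linalg.inv(G)

# Transformed transitions: f0' = g ∘ f0 and f1' = f1 ∘ g^{-1}.
def f0_new(x): return G @ f0(x)
def f1_new(x): return f1(G_inv @ x)

x0 = rng.normal(size=4)
out_original = f1(f0(x0))
out_transformed = f1_new(f0_new(x0))
print(np.allclose(out_original, out_transformed))  # True
```

The composed behavior is unchanged because the coordinate change and its inverse cancel between the two transformed transitions.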
The basic idea of coordinate freedom is that our interpretability tools should not depend on which coordinate system we use for any of the internal states. We should be able to transform any layer to any coordinate system, and our interpretability procedure should still assign the same meaning to the same (transformed) internal structures.
What Kind Of Coordinate Free Internal Structure Is Even Possible?
-----------------------------------------------------------------
Here’s one example of a coordinate free internal structure one could look for: maybe the layer i→i+1 function can be written as
f_i(x_i) = F(G(x_i))
for some low-dimensional G. For instance, maybe x_i and x_{i+1} are both 512-dimensional, but x_{i+1} can be calculated (to reasonable precision) from a 22-dimensional summary G(x_i). We call this a low-dimensional “factorization” of f_i.
(Side note: I’m assuming throughout this post that everything is differentiable. Begone, pedantic mathematicians; you know what you were thinking.)
This kind of structure is also easy to detect in practice: just calculate the singular value decomposition of the jacobian df_i/dx_i at a bunch of points, and see whether the jacobian is consistently (approximately) low rank. In other words, do the obvious thing which we were going to do anyway.
Why is this structure coordinate free? Well, no matter how we transform x_i and x_{i+1}, so long as the coordinate changes are reversible, the transformed function f'_i will still factor through a low-dimensional summary. Indeed, it will factor through the *same* low-dimensional summary, up to isomorphism. We can also see the corresponding fact in the first-order approximation: we can multiply the jacobian on the left and right by any invertible matrix, its rank won’t change, and low-rank components will be transformed by the transformation matrices.
… and as far as *local* structure goes (i.e. first-order approximation near any given point), that completes the list of coordinate free internal structures. It all boils down to just that one (and things which can be derived/constructed from that one). Here’s the argument: by choosing our coordinate transformations, we can make the jacobian anything we please, so long as the rank and dimensions of the matrix stay the same. The rank is the only feature we can’t change.
But that’s only a local argument. Are there any other *nonlocal* coordinate free structures?
Are There Any Other Coordinate Free Internal Structures?
--------------------------------------------------------
Let’s switch to the discrete case for a moment. Before we had f_i(x_i) mapping from a 512-dimensional space to a 512-dimensional space, but factoring through a 22-dimensional “summary”. A simple (and smaller) discrete analogue would be a function f_i(x_i) which maps the five possible values {1, 2, 3, 4, 5} to the same five values, but factors through a 2-value summary. For instance, maybe the function maps like this:
*[Diagram: f maps 1 to 1, 2 to 1, 3 to 1, 4 to 5, and 5 to 5, factoring through an intermediate set {“a”, “b”}.]*

Coordinate freedom means we can relabel the 1, 2, 3, 4, 5 any way we please, on the input or output side. While maintaining coordinate freedom, we can still identify whether the function factors through some “smaller” intermediate set - in this case the set {“a”, “b”}. Are there any *other* coordinate free structures we can identify? Or, to put it differently: if two functions factor through the same intermediate sets, does that imply that there exists some reversible coordinate transformation between the two?
It turns out that we can find an additional structure. Here’s another function from {1, 2, 3, 4, 5} to itself, which factors through the same intermediate sets as our previous function, but is not equivalent under any reversible coordinate transformation:
*[Diagram: a different f which factors through the same (minimal) intermediate set.]*

Why is this not equivalent? Well, no matter how we transform the input set in the first function, we’ll always find that three input values map to one output value, and the other two input values map to another output value. The “level sets” - i.e. sets of inputs which map to the same output - have size 3 and 2, no matter what coordinates we use. Whereas, for the second function, the level sets have size 4 and 1.
*[Diagram: the two level sets for this function.]*

Does *that* complete the list of coordinate free internal structures in the discrete case? Yes: if we have two functions with the same level set sizes, whose input and output spaces are the same size, then we can reversibly map between them. Just choose the coordinate transformation to match up level sets of the same size, and then match up the corresponding outputs.
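That invariant is easy to compute. A small illustration using the first example function above and one possible function with level sets of size 4 and 1 (the exact mapping of the second function is made up for illustration):

```python
from collections import Counter

def level_set_sizes(f, domain):
    """Multiset of level-set sizes: how many inputs map to each output."""
    return sorted(Counter(f[x] for x in domain).values())

domain = [1, 2, 3, 4, 5]
f_a = {1: 1, 2: 1, 3: 1, 4: 5, 5: 5}   # the first example function
f_b = {1: 3, 2: 3, 3: 3, 4: 3, 5: 2}   # a function with level sets 4 and 1

print(level_set_sizes(f_a, domain))    # [2, 3]
print(level_set_sizes(f_b, domain))    # [1, 4]

# Same-size domains and codomains, same intermediate-set sizes, but the
# level-set size profiles differ, so no reversible relabeling of inputs
# and outputs turns one function into the other.
```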
Ok, so that’s the discrete case. Switching back to the continuous case (and bringing back the differentiable transformation constraint), what other coordinate free internal structure might exist in a net?
Well, in the continuous case, “size of the level set” isn’t really relevant, since e.g. we can reversibly map the unit interval to the real line. But, since our transformations need to be smooth, topology is relevant - for instance, if the set of inputs which map to 0 is 1 dimensional, is it topologically a circle? A line? Two circles and a line? A knot?
Indeed, “structure which is invariant under smooth reversible transformation” is kinda the whole point of topology! Insofar as we want our interpretability tools to be coordinate free, topological features are *exactly* the structures to which we can try to assign meaning.
Great, we’ve reinvented topology.
… So Now What?
--------------
There are some nontrivial things we can build up just from low-dimensional summaries between individual layers and topological features. But ultimately, I don’t expect to unlock most of interpretability this way. I’d guess that low-dimensional summaries of the particular form relevant here unlock a bit less than half of interpretability (i.e. all the low-rank stuff, along the lines of the [ROME paper](https://arxiv.org/pdf/2202.05262.pdf)), and other topological structures add a nonzero but small chunk on top of that. (For those who are into topology, I strongly encourage you to prove me wrong!) What's missing? Well, for instance, one type of structure which should definitely play a big role in a working theory of interpretability is sparsity. With full coordinate freedom, we can *always* choose coordinates in which the layer functions are sparse, and therefore we gain no information by finding sparsity in a net.
So let’s assume we can’t get everything we want from pure coordinate free interpretability. Somehow, we need to restrict allowable transformations further. Next interesting question: where might a preferred coordinate system or an additional restriction on transformations come from?
One possible answer: the data. We’ve implicitly assumed that we can apply arbitrary coordinate transformations to the data, but that doesn’t necessarily make sense. Something like a stream of text or an image does have a bunch of meaningful structure in it (like e.g. nearby-ness of two pixels in an image) which would be lost under arbitrary transformations. So one natural next step is to allow coordinate preference to be inherited from the data. On the other hand, we’d be importing our *own* knowledge of structure in the data; really, we’d prefer to only use the knowledge learned by the net.
Another possible answer: the initialization distribution of the net parameters. For instance, there will always be some coordinate transformation which makes every layer sparse, but maybe that transformation is highly sensitive to the parameter values. That would indicate that any interpretation which relies on that coordinate system is not very robust; some small change in theta which leaves network behavior roughly the same could totally change the sparsifying coordinate system. To avoid that, we could restrict ourselves to transformations which are not very parameter-sensitive. I currently consider that the most promising direction.
The last answer I currently see is SGD. We could maybe argue that SGD introduces a preferred coordinate system, but then the right move is probably to look at the whole training process in a coordinate free way rather than just the trained net by itself. That does sound potentially useful, although my guess is that it mostly just reproduces the parameter-sensitivity thing.
*Meta note: I’d be surprised if the stuff in this post hasn’t already been done; it’s one of those things where it’s easy and obvious enough that it’s faster to spend a day or two doing it than to find someone else who’s done it. If you know of a clean write-up somewhere, please do leave a link, I’d like to check whether I missed anything crucial.*
cf144086-14df-4011-a4b5-21bcfa76f3f2 | trentmkelly/LessWrong-43k | LessWrong | What are the effective utilitarian pros and cons of having children (in rich countries)?
I have one child and do not want more, so I am not seeking personal advice here. But I am interested in the general ethical question: From an effective utilitarian viewpoint, what are the arguments for and against having children? And if we do choose to have children, what are the arguments for having few vs. many?
I am restricting the question to rich countries. People in poor countries might face a very different set of problems.
I am not talking about generalized pro-natalism or anti-natalism. I am talking about the cost-benefit analysis. Creating more humans has a certain obvious utility in itself (if we reject generalized anti-natalism), in that it means more humans will be able to enjoy being alive. But it has drawbacks as well. Each citizen in a rich country causes an awful lot of pollution, which may accelerate all sorts of environmental disasters.
There is the concern that an aging population will put more pressure on those people of working age. It is unclear to me how this trend will interact with growing automation, and whether this problem can be fixed or merely postponed.
Furthermore, it obviously makes a huge difference whether we expect an impending singularity, an impending environmental collapse, or both.
In your opinion, is it - as a guideline - good to have many children, or is it better to have few? Why? |
96abe846-e88c-430f-9a8a-41a5e66ccc75 | StampyAI/alignment-research-dataset/arbital | Arbital | Natural number
A **natural number** is a number like 0, 1, 2, 3, 4, 5, 6, ... which can be used to represent the count of an object. The set of natural numbers is $\mathbb N.$ Not all sources include 0 in $\mathbb N.$
Natural numbers are perhaps the simplest type of number. They don't include [negative numbers](https://arbital.com/p/48l), [fractional numbers](https://arbital.com/p/4zq), [irrational numbers](https://arbital.com/p/4bc), [imaginary numbers](https://arbital.com/p/4zw), or any of those complexities.
Thanks to their simplicity, natural numbers are often the first mathematical concept taught to children. Natural numbers are equipped with a notion of [addition](https://arbital.com/p/-addition) ($2 + 3 = 5$ and so on) and [multiplication](https://arbital.com/p/-multiplication) ($2 \cdot 3 = 6$ and so on); these are among the first mathematical operations taught to children.
Despite their simplicity, the natural numbers are a ubiquitous and useful mathematical object. They're quite useful for counting things. They represent all the possible [cardinalities](https://arbital.com/p/4w5) of finite [sets](https://arbital.com/p/3jz). They're also a useful [data structure](https://arbital.com/p/data_structure), in that numbers can be used to [encode](https://arbital.com/p/numeric_encoding) all sorts of data. Almost all of modern mathematics can be built out of natural numbers. |
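For instance, a single natural number can losslessly encode a pair of natural numbers. A sketch of one classic such encoding, the Cantor pairing function (the implementation below is purely illustrative):

```python
def pair(a, b):
    """Cantor pairing: encode two natural numbers as one, bijectively."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(n):
    """Inverse of the Cantor pairing function."""
    w = int(((8 * n + 1) ** 0.5 - 1) // 2)   # index of the diagonal
    b = n - w * (w + 1) // 2
    return w - b, b

print(pair(3, 5))    # 41
print(unpair(41))    # (3, 5)

# Iterating the encoding packs arbitrarily long tuples into one number.
assert all(unpair(pair(a, b)) == (a, b) for a in range(50) for b in range(50))
```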
1082a8c7-451e-433e-abad-1ac82f1e94c4 | trentmkelly/LessWrong-43k | LessWrong | Being legible to other agents by committing to using weaker reasoning systems
Suppose that an agent A1 reasons in a sound theory T1, and an agent A2 reasons in a theory T2, such that T1 proves that T2 is sound. Now suppose A1 is trying to reason in a way that is legible to A2, in the sense that A2 can rely on A1 to reach correct conclusions. One way of doing this is for A1 to restrict itself to some weaker theory T3, which T2 proves is sound, for the purposes of any reasoning that it wants to be legible to A2. Of course, in order for this to work, not only would A1 have to restrict itself to using T3, but A2 would to trust that A1 had done so. A plausible way for that to happen is for A1 to reach the decision quickly enough that A2 can simulate A1 making the decision to restrict itself to using T3.
Example application #1: An agent communicating with its past and future selves. In this scenario, consider an agent that designs successors it can trust, and wants to be legible to its successors.
Let T be some recursively axiomatizable first-order theory of arithmetic which is powerful enough to decide the important questions the agent wants to know the answer to that don't involve reasoning about agents (Peano Arithmetic, for instance). For a computable ordinal α, let T+α refer to T extended with a soundness schema for T+β for every β < α. So T+0 = T, T+(α+1) = T+Sound(T+α), and T+γ = ⋃_{α<γ} (T+α) for computable limit ordinals γ. Consider an agent initially (at time 0) reasoning with T+α_0 for some large computable ordinal α_0. At each time step, the agent must design a successor that it can prove uses sound reasoning. So at each time-step t (until α_t = 0), the agent can pick α_{t+1} < α_t and have its successor reason with T+α_{t+1}. Of course, this cannot go on forever, but there are astronomically long decreasing sequences of ordinals starting at ordinals that are straightforward to describe, and which require a very minimal amount of computation to get from one ordinal in the sequence to the next (for instance, if you're familiar with the proof that the Hydra game i
833acc88-f8f5-4b27-9e27-576f5dc8609a | trentmkelly/LessWrong-43k | LessWrong | CFAR is running an experimental mini-workshop (June 2-6, Berkeley CA)!
Hello from the Center for Applied Rationality!
Some of you may have attended our classic applied rationality workshops in the past; others of you may have wanted to attend a workshop but not yet had a chance to. It's been a while since we've last run public-facing workshops, but we wanted to say:
* We're not dead! (More on that perhaps in a later post)
* We have a new experimental mini-workshop coming up soon and hopefully more workshop content to follow after!
Our new workshop will be held after LessOnline at the upcoming Arbor Summer Camp program; classes will begin after lunch on Monday 6/2 and end just before lunch on Thursday 6/5. Some things to know about the upcoming workshop:
This workshop is "mini"
This workshop will last for about three days (two half-days on the edges, two full days in the middle), instead of our usual ~four-and-a-half.
Also, it’ll be embedded in “Arbor Summer Camp” and we’ll be mingling with the ~300 overall Arbor attendees for mealtimes and evenings, whereas normal CFAR workshops are standalone events that are more separate/immersive at those times.
What you’ll get and what you won’t get if you attend:
We will run a roughly 50/50 mixture of traditional CFAR content (such as Inner Simulator, TAPs, Goal Factoring, Double Crux, Resolve Cycles, Focusing), and some newer experimental material (beta tests, hoping to get your help making it good).
The newer material here is aimed at:
* Treating participants more obviously as fellow investigators and the authors of your own lives (you always were, of course, but: making this more obviously the foundation of how the workshop is working)
* Noticing the ways in which people (you; us) are organic wholes, and trying not to trample on that with our “rationality practices”, but instead to be good gardeners of the health of our organisms overall. (Or, if you’d like that phrased more technically: we’ll be trying to cultivate virtues and habits such that the mesaoptimizers wh |
fefac6f0-8812-4833-8a9a-f060547a3a17 | StampyAI/alignment-research-dataset/arxiv | Arxiv | 3D Common Corruptions and Data Augmentation
1 Introduction
---------------

Figure 1: Using 3D information to generate real-world corruptions. The top row shows sample 2D corruptions applied uniformly over the image, e.g. as in Common Corruptions [[27](#bib.bib27)], disregarding 3D information. This leads to corruptions that are unlikely to happen in the real world, e.g. having the same motion blur over the entire image irrespective of the distance to camera (top left). Middle row shows their 3D counterparts from 3D Common Corruptions (3DCC). The circled regions highlight the effect of incorporating 3D information.
More specifically, in 3DCC, 1. motion blur has a motion parallax effect where objects further away from the camera seem to move less, 2. defocus blur has a depth of field effect, akin to a large aperture effect in real cameras, where certain regions of the image can be selected to be in focus, 3. lighting takes the scene geometry into account when illuminating the scene and casts shadows on objects, 4. fog gets denser further away from the camera, 5. occlusions of a target object, e.g. fridge (blue mask), are created by changing the camera’s viewpoint and having its view naturally obscured by another object, e.g. the plant (red mask). This is in contrast to its 2D counterpart that randomly discards patches [[13](#bib.bib13)]. See [project page](https://3dcommoncorruptions.epfl.ch) for a video version of the figure.

Figure 2:
The new corruptions. We propose a diverse set of new corruption operations ranging from defocusing (near/far focus) to lighting changes and 3D-semantic ones, e.g. object occlusion. These corruptions are all automatically generated, efficient to compute, and can be applied to most datasets (Sec. [3.3](#S3.SS3 "3.3 Applying 3DCC to standard vision datasets ‣ 3 Generating 3D Common Corruptions ‣ 3D Common Corruptions and Data Augmentation")). We show that they expose vulnerabilities in models (Sec. [5.2.1](#S5.SS2.SSS1 "5.2.1 3DCC can expose vulnerabilities ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")) and are a good approximation of realistic corruptions (Sec. [5.2.3](#S5.SS2.SSS3 "5.2.3 Soundness: 3DCC vs Expensive Synthesis ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")). A subset of the corruptions marked in the last column are novel and commonly faced in the real world, but are not 3D based. We include them in our benchmark. For occlusion and scale corruptions, the blue and red masks denote the amodal visible and occluded parts of an object, e.g. the fridge.
Computer vision models deployed in the real world will encounter naturally occurring distribution shifts from their training data. These shifts range from lower-level distortions, such as motion blur and illumination changes, to semantic ones, like object occlusion.
Each of them represents a possible failure mode of a model and has been frequently shown to result in profoundly unreliable predictions [[15](#bib.bib15), [27](#bib.bib27), [67](#bib.bib67), [31](#bib.bib31), [23](#bib.bib23)]. Thus, a systematic testing of vulnerabilities to these shifts is critical before deploying these models in the real world.
This work presents a set of distribution shifts in order to test models’ robustness.
In contrast to previously proposed shifts which perform uniform 2D modifications over the image, such as Common Corruptions (2DCC) [[27](#bib.bib27)], our shifts incorporate 3D information to generate corruptions that are consistent with the scene geometry. This leads to shifts that are more likely to occur in the real world (See Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ 3D Common Corruptions and Data Augmentation")). The resulting set includes 20 corruptions, each representing a distribution shift from training data, which we denote as 3D Common Corruptions (3DCC). 3DCC addresses several aspects of the real world, such as camera motion, weather, occlusions, depth of field, and lighting. Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ 3D Common Corruptions and Data Augmentation") provides an overview of all corruptions. As shown in Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ 3D Common Corruptions and Data Augmentation"), the corruptions in 3DCC are more diverse and realistic compared to 2D-only approaches.
We show in Sec. [5](#S5 "5 Experiments ‣ 3D Common Corruptions and Data Augmentation") that the performance of methods aiming to improve robustness, including those with diverse data augmentation, drops drastically under 3DCC. Furthermore, we observe that the robustness issues exposed by 3DCC correlate well with corruptions generated via photorealistic synthesis. Thus, 3DCC can serve as a challenging testbed for real-world corruptions, especially those that depend on scene geometry.
Motivated by this, our framework also introduces new 3D data augmentations. They take the scene geometry into account, as opposed to 2D augmentations, thus enabling models to build invariances against more realistic corruptions. We show in Sec. [5.3](#S5.SS3 "5.3 3D data augmentation to improve robustness ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") that they significantly boost model robustness against such corruptions, including the ones that cannot be addressed by the 2D augmentations.
The proposed corruptions are generated programmatically with exposed parameters, enabling fine-grained analysis of robustness, e.g. by continuously increasing the 3D motion blur.
They are efficient to compute and can be computed on-the-fly during training as data augmentation with a small increase in computational cost.
They are also extendable, i.e. they can be applied to standard vision datasets, e.g. ImageNet [[12](#bib.bib12)], that do not come with 3D labels.
2 Related Work
---------------
This work presents a data-focused approach [[63](#bib.bib63), [52](#bib.bib52)] to robustness.
We give an overview of some of the related topics within the constraints of space.
Robustness benchmarks based on corruptions: Several studies have proposed robustness benchmarks to understand the vulnerability of models to corruptions. A popular benchmark, Common Corruptions (2DCC) [[27](#bib.bib27)], generates synthetic corruptions on real images that expose sensitivities of image recognition models. It led to a series of works either creating new corruptions or applying similar corruptions on other datasets for different tasks [[32](#bib.bib32), [43](#bib.bib43), [7](#bib.bib7), [80](#bib.bib80), [45](#bib.bib45), [66](#bib.bib66)]. In contrast to these works, 3DCC modifies real images using 3D information to generate realistic corruptions. The resulting images are both perceptually different and expose different failure modes in model predictions compared to their 2D counterparts (See Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ 3D Common Corruptions and Data Augmentation") and [8](#S5.F8 "Figure 8 ‣ 5.2.1 3DCC can expose vulnerabilities ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")). Other works create and capture the corruptions in the real world, e.g. ObjectNet [[3](#bib.bib3)]. Although being realistic, it requires significant manual effort and is not extendable. A more scalable approach is to use computer graphics based 3D simulators to generate corrupted data [[38](#bib.bib38)] which can lead to generalization concerns. 3DCC aims to generate corruptions as close to the real world as possible while staying scalable.
Robustness analysis works use existing benchmarks to probe the robustness of different methods, e.g. data augmentation or self-supervised training, under several distribution shifts. Recent works investigated the relation between synthetic and natural distribution shifts [[68](#bib.bib68), [26](#bib.bib26), [44](#bib.bib44), [14](#bib.bib14)] and effectiveness of architectural advancements [[5](#bib.bib5), [64](#bib.bib64), [48](#bib.bib48)]. We select several popular methods to show that 3DCC can serve as a challenging benchmark (Fig. [6](#S5.F6 "Figure 6 ‣ 5.1 Preliminaries ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") and [7](#S5.F7 "Figure 7 ‣ 5.2.1 3DCC can expose vulnerabilities ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")).

Figure 3: Left: We show the inputs needed to create each of our corruptions, e.g. the 3D information such as depth, and RGB image. These corruptions have also been grouped (in solid colored lines) according to their corruption types.
For example, to create the distortions in the dashed box in the right, one only needs the RGB image and its corresponding depth. For the ones in the left dashed box, 3D mesh is required. Note that one can create view changes corruptions also from panoramic images if available, without a mesh.
Right: As an example, we show an overview of generating depth of field effect efficiently. The scene is first split into multiple layers by discretizing scene depth. Next, a region is chosen to be kept in focus (here it is the region closest to the camera). We then compute the corresponding blur levels for each layer according to their distance from the focus region, using a pinhole camera model. The final refocused image is obtained by compositing blurred image layers.
Improving robustness:
Numerous methods have been proposed to improve model robustness such as data augmentation with corrupted data [[22](#bib.bib22), [41](#bib.bib41), [40](#bib.bib40), [60](#bib.bib60)], texture changes [[24](#bib.bib24), [26](#bib.bib26)], image compositions [[85](#bib.bib85), [82](#bib.bib82)] and transformations [[29](#bib.bib29), [81](#bib.bib81)]. While these methods can generalize to some unseen examples, performance gains are non-uniform [[61](#bib.bib61), [22](#bib.bib22)]. Other methods include self-training [[76](#bib.bib76)], pre-training [[28](#bib.bib28), [50](#bib.bib50)], architectural changes [[5](#bib.bib5), [64](#bib.bib64)], and diverse ensembling [[51](#bib.bib51), [33](#bib.bib33), [78](#bib.bib78), [79](#bib.bib79)]. Here we instead adopt a data-focused approach to robustness by i. providing a large set of realistic distribution shifts and ii. introducing new 3D data augmentation that improves robustness against real-world corruptions (Sec. [5.3](#S5.SS3 "5.3 3D data augmentation to improve robustness ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")).
Photorealistic image synthesis involves techniques to generate realistic images. Some of these techniques have been recently used to create corruption data. These techniques are generally specific to a single real-world corruption. Examples include adverse weather conditions [[19](#bib.bib19), [62](#bib.bib62), [70](#bib.bib70), [30](#bib.bib30), [69](#bib.bib69)], motion blur [[6](#bib.bib6), [49](#bib.bib49)], depth of field [[53](#bib.bib53), [72](#bib.bib72), [71](#bib.bib71), [17](#bib.bib17), [4](#bib.bib4)], lighting [[77](#bib.bib77), [25](#bib.bib25)], and noise [[21](#bib.bib21), [74](#bib.bib74)]. They may be used for purely artistic purposes or to create training data. Some of our 3D transformations are instantiations of these methods, with the downstream goal of testing and improving model robustness in a unified framework with a wide set of corruptions.
Image restoration
aims to undo the corruption in the image using classical signal processing techniques [[35](#bib.bib35), [20](#bib.bib20), [18](#bib.bib18), [42](#bib.bib42)] or learning-based approaches [[86](#bib.bib86), [8](#bib.bib8), [46](#bib.bib46), [87](#bib.bib87), [57](#bib.bib57), [47](#bib.bib47), [1](#bib.bib1)]. We differ from these works by generating corrupted data, rather than removing it, to use them for benchmarking or data augmentation. Thus, in the latter, we train with these corrupted data to encourage the model to be invariant to corruptions, as opposed to training the model to remove the corruptions as a pre-processing step.
Adversarial corruptions add imperceptible worst-case shifts to the input to fool a model [[67](#bib.bib67), [36](#bib.bib36), [41](#bib.bib41), [11](#bib.bib11)]. Most of the failure cases of models in the real world are not the result of adversarial corruptions but rather naturally occurring distribution shifts. Thus, our focus in this paper is to generate corruptions that are likely to occur in the real world.
3 Generating 3D Common Corruptions
-----------------------------------
###
3.1 Corruption Types
We define different corruption types, namely depth of field, camera motion, lighting, video, weather, view changes, semantics, and noise, resulting in 20 corruptions in 3DCC.
Most of the corruptions require an RGB image and scene depth, while some need a 3D mesh (see Fig. [3](#S2.F3 "Figure 3 ‣ 2 Related Work ‣ 3D Common Corruptions and Data Augmentation")).
We use a set of methods leveraging 3D synthesis techniques or image formation models to generate different corruption types, as explained in more detail below. Further details are provided in the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf).
Depth of field corruptions create refocused images. They keep a part of the image in focus while blurring the rest.
We consider a layered approach [[17](#bib.bib17), [4](#bib.bib4)] that splits the scene into multiple layers. For each layer, the corresponding blur level is computed using the pinhole camera model. The blurred layers are then composited with alpha blending. Figure [3](#S2.F3 "Figure 3 ‣ 2 Related Work ‣ 3D Common Corruptions and Data Augmentation") (right) shows an overview of the process. We generate near focus and far focus corruptions by randomly changing the focus region to the near or far part of the scene.
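The layered compositing described above can be sketched as follows. This is a simplified single-channel version with a box blur and wrap-around padding; the layer count, kernel scaling, and blending details here are illustrative and not the paper's exact implementation:

```python
import numpy as np

def box_blur(img, k):
    """Box blur with kernel size 2k+1 (k=0: identity). Wrap-around
    padding via np.roll, kept simple for the sketch."""
    if k == 0:
        return img.astype(float)
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * k + 1) ** 2

def layered_dof(image, depth, focus_depth, n_layers=4, max_kernel=6):
    """Split the scene into depth layers, blur each according to its
    distance from the focal plane, and alpha-composite back to front."""
    edges = np.linspace(depth.min(), depth.max() + 1e-6, n_layers + 1)
    span = edges[-1] - edges[0]
    out = np.zeros(image.shape, dtype=float)
    for i in range(n_layers - 1, -1, -1):  # back to front
        mask = ((depth >= edges[i]) & (depth < edges[i + 1])).astype(float)
        mid = 0.5 * (edges[i] + edges[i + 1])
        # blur level grows with distance from the focus plane
        k = int(round(max_kernel * abs(mid - focus_depth) / span))
        layer = box_blur(image * mask, k)   # premultiplied layer color
        alpha = box_blur(mask, k)           # blurred layer coverage
        out = layer + (1.0 - alpha) * out   # alpha blending
    return out
```

The per-layer blur level in the paper comes from the pinhole camera model; the linear proxy above only preserves the qualitative behavior (sharp at the focal plane, blurrier away from it).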
Camera motion creates blurry images due to camera movement during exposure. To generate this effect, we first transform the input image into a point cloud using the depth information. Then, we define a trajectory (camera motion) and render novel views along this trajectory. As the point cloud was generated from a single RGB image, it has incomplete information about the scene when the camera moves. Thus, the rendered views will have disocclusion artifacts. To alleviate this, we apply an inpainting method from [[49](#bib.bib49)]. The generated views are then combined to obtain parallax-consistent motion blur.
We define XY-motion blur and Z-motion blur when the main camera motion is along the image XY-plane or Z-axis, respectively.
Lighting corruptions change scene illumination by adding new light sources and modifying the original illumination. We use Blender [[10](#bib.bib10)] to place these new light sources and compute the corresponding illumination for a given viewpoint in the 3D mesh. For the flash corruption, a light source is placed at the camera’s location, while for shadow corruption, it is placed at random diverse locations outside the camera frustum. Likewise, for multi-illumination corruption, we compute the illumination from a set of random light sources with different locations and luminosities.
Video corruptions arise during the processing and streaming of videos.
Using the 3D scene, we create a video from a single image by rendering multiple frames along a defined trajectory, similar to motion blur.
Inspired by [[80](#bib.bib80)], we generate average bit rate (ABR) and constant rate factor (CRF) corruptions as H.265 codec compression artifacts, and bit error to capture corruptions induced by an imperfect video transmission channel. After applying the corruptions over the video, we pick a single frame as the final corrupted image.
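The bit-error stage can be illustrated in isolation. The paper applies it to an H.265-encoded video stream; the sketch below only shows independent bit flips on raw bytes, and the error rate and seed are illustrative values:

```python
import random

def bit_error(stream: bytes, rate: float, seed: int = 0) -> bytes:
    """Flip each bit of an encoded stream independently with probability
    `rate`, simulating an imperfect transmission channel."""
    rng = random.Random(seed)
    out = bytearray(stream)
    for i in range(len(out)):
        for b in range(8):
            if rng.random() < rate:
                out[i] ^= 1 << b  # flip bit b of byte i
    return bytes(out)
```

Decoding the damaged stream and picking a frame then yields the final corrupted image, as described above.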
Weather corruptions degrade visibility by obscuring parts of the scene due to disturbances in the medium. We define a single corruption and denote it as fog 3D to differentiate it from the fog corruption in 2DCC. We use the standard optical model for fog [[19](#bib.bib19), [62](#bib.bib62), [70](#bib.bib70)]:
𝐈(𝐱) = 𝐑(𝐱)𝐭(𝐱) + 𝐀(1 − 𝐭(𝐱)),   (1)
where 𝐈(𝐱) is the resulting foggy image at pixel 𝐱, 𝐑(𝐱) is the clean image, 𝐀 is the atmospheric light, and 𝐭(𝐱) is the transmission function describing the amount of light that reaches the camera. When the medium is homogeneous, the transmission depends on the distance from the camera, 𝐭(𝐱) = exp(−β𝐝(𝐱)), where 𝐝(𝐱) is the scene depth and β is the attenuation coefficient controlling the fog thickness.
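Eq. (1) translates directly into a few lines of numpy; the default β and atmospheric light A below are illustrative values:

```python
import numpy as np

def fog_3d(rgb, depth, beta=0.5, atmosphere=1.0):
    """Standard optical fog model: I = R*t + A*(1 - t), t = exp(-beta*d).
    rgb in [0, 1], depth in scene units; beta controls fog thickness."""
    t = np.exp(-beta * depth)      # transmission: less light from far away
    if rgb.ndim == 3:              # broadcast over color channels
        t = t[..., None]
    return rgb * t + atmosphere * (1.0 - t)
```

At zero depth the image is untouched; at large depth every pixel approaches the atmospheric light, matching the model's limits.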
View changes are due to variations in the camera extrinsics and focal length. Our framework enables rendering RGB images conditioned on several changes, such as field of view, camera roll, and camera pitch, using Blender. This enables us to analyze the sensitivity of models to various view changes in a controlled manner. We also generate images with view jitter that can be used to analyze whether model predictions flicker with slight changes in viewpoint.
Semantics: In addition to view changes, we also render images by selecting an object in the scene and changing its occlusion level and scale. In occlusion corruption, we generate views of an object occluded by other objects. This is in contrast to randomly masking 2D pixels, which creates an unnatural occlusion effect irrespective of image content, e.g. as in [[13](#bib.bib13), [48](#bib.bib48)] (see Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ 3D Common Corruptions and Data Augmentation")). The occlusion rate can be controlled to probe model robustness against occlusion changes. Similarly, in scale corruption, we render views of an object at varying distances from the camera. Note that these corruptions require a mesh with semantic annotations and are generated automatically, similar to [[2](#bib.bib2)], in contrast to [[3](#bib.bib3)] which requires tedious manual effort. The objects can be selected by randomly picking a point in the scene or using the semantic annotations.

Figure 4: Visualizations of 3DCC with increasing shift intensities. Top: Increasing the shift intensity results in larger blur, less illumination, and denser fog. Bottom: The object becomes more occluded or shrinks in size using calculated viewpoint changes. The blue mask denotes the amodal visible parts of the fridge/couch, and the red mask is the occluded part. The leftmost column shows the clean images. Visuals for all corruptions for all shift intensities are shown in the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf).
Noise corruptions arise from imperfect camera sensors. We introduce new noise corruptions that do not exist in the previous 2DCC benchmark. For low-light noise, we decreased the pixel intensities and added Poisson-Gaussian distributed noise to reflect the low-light imaging setting [[21](#bib.bib21)]. ISO noise also follows a Poisson-Gaussian distribution, with a fixed photon noise (modeled by a Poisson), and varying electronic noise (modeled by a Gaussian). We also included color quantization as another corruption that reduces the bit depth of the RGB image. Only this subset of our corruptions is not based on 3D information.
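A minimal sketch of the Poisson-Gaussian low-light model described above: the image is darkened, then signal-dependent shot noise (Poisson) and read noise (Gaussian) are added. All parameter values here are illustrative, not the paper's calibrated settings:

```python
import numpy as np

def low_light_noise(rgb, intensity_scale=0.3, photons=100.0,
                    read_sigma=0.02, seed=0):
    """Darken the image, then add Poisson shot noise and Gaussian
    read noise, mimicking a low-light capture."""
    rng = np.random.default_rng(seed)
    dark = rgb * intensity_scale
    # shot noise: Poisson with mean proportional to the (darkened) signal
    shot = rng.poisson(dark * photons) / photons
    # read noise: signal-independent Gaussian from the sensor electronics
    noisy = shot + rng.normal(0.0, read_sigma, rgb.shape)
    return np.clip(noisy, 0.0, 1.0)
```

ISO noise follows the same Poisson-Gaussian structure, with the photon term fixed and the Gaussian term varied across shift intensities.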
###
3.2 Starter 3D Common Corruptions Dataset
We release the full open-source code of our pipeline, which enables applying the implemented corruptions to any dataset. As a starter dataset, we applied the corruptions to 16k Taskonomy [[84](#bib.bib84)] test images. For all the corruptions except those in view changes and semantics, which change the scene, we follow the protocol in 2DCC and define 5 shift intensities, resulting in approximately 1 million corrupted images (16k × 14 × 5). Directly applying the methods to generate corruptions results in uncalibrated shift intensities with respect to 2DCC. Thus, to enable an aligned comparison with 2DCC over a more uniform intensity change, we perform a calibration step. For the corruptions with a direct counterpart in 2DCC, e.g. motion blur, we set the corruption level in 3DCC such that for each shift intensity in 2DCC, the average SSIM [[73](#bib.bib73)] value over all images is the same in both benchmarks. For the corruptions that do not have a counterpart in 2DCC, we adjust the distortion parameters to increase shift intensity while staying in a similar SSIM range as the others. For view changes and semantics, we render 32k images with smoothly changing parameters, e.g. roll angle, using the Replica [[65](#bib.bib65)] dataset. Figure [4](#S3.F4 "Figure 4 ‣ 3.1 Corruption Types ‣ 3 Generating 3D Common Corruptions ‣ 3D Common Corruptions and Data Augmentation") shows example corruptions with different shift intensities.
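Since mean SSIM decreases monotonically with corruption severity, matching a 2DCC target reduces to 1D root finding. A bisection sketch, with a toy exponential standing in for the per-intensity mean SSIM (the real calibration would evaluate SSIM over the image set at each candidate severity):

```python
import math

def calibrate_severity(mean_similarity, target, lo=0.0, hi=10.0, iters=50):
    """Bisection for the corruption parameter whose mean similarity
    (e.g. SSIM averaged over images, decreasing in severity) matches
    a given 2DCC target value."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_similarity(mid) > target:
            lo = mid   # images still too clean: increase severity
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy stand-in for "mean SSIM as a function of severity"; exp(-s) is
# illustrative only.
severity = calibrate_severity(lambda s: math.exp(-s), target=0.5)
```

With the toy similarity exp(−s) and target 0.5, the recovered severity is ln 2, confirming the search converges to the matching parameter.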
###
3.3 Applying 3DCC to standard vision datasets
While we employed datasets with full scene geometry information such as Taskonomy [[84](#bib.bib84)], 3DCC can also be applied to standard datasets without 3D information. We exemplify this on ImageNet [[12](#bib.bib12)] and COCO [[39](#bib.bib39)] validation sets by leveraging depth predictions from the MiDaS [[55](#bib.bib55)] model, a state-of-the-art depth estimator. Figure [5](#S5.F5 "Figure 5 ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") shows example images with near focus, far focus, and fog 3D corruptions. Generated images are physically plausible, demonstrating that 3DCC can be used for other datasets by the community to generate a diverse set of image corruptions. In Sec. [5.2.4](#S5.SS2.SSS4 "5.2.4 Effectiveness of applying 3DCC to other datasets ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation"), we quantitatively demonstrate the effectiveness of using predicted depth to generate 3DCC.
4 3D Data Augmentation
-----------------------
While benchmarking uses corrupted images as test data, one can also use them as training-data augmentations to build invariances to these corruptions. Unlike 2DCC, 3DCC is designed to capture corruptions that are more likely to appear in the real world, hence it also has clear value for augmentation. Thus, in addition to benchmarking robustness using 3DCC, our framework can also be viewed as a set of new data augmentation strategies that take the 3D scene geometry into account. We augment with the following corruption types in our experiments: depth of field, camera motion, and lighting. The augmentations can be efficiently generated on-the-fly during training using parallel implementations. For example, the depth of field augmentations take 0.87 seconds (wall clock time) on a single V100 GPU for a batch of 128 images at 224×224 resolution. For comparison, applying 2D defocus blur requires 0.54 seconds, on average. It is also possible to precompute certain parts of the augmentation process, e.g. the illuminations for lighting augmentations, to increase efficiency. We incorporated these mechanisms in our implementation. We show in Sec. [5.3](#S5.SS3 "5.3 3D data augmentation to improve robustness ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") that these augmentations can significantly improve robustness against real-world distortions.
5 Experiments
--------------
We perform evaluations to demonstrate that 3DCC can expose vulnerabilities in models (Sec. [5.2.1](#S5.SS2.SSS1 "5.2.1 3DCC can expose vulnerabilities ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")) that are not captured by 2DCC (Sec. [5.2.2](#S5.SS2.SSS2 "5.2.2 Redundancy of corruptions in 3DCC and 2DCC ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")). The generated corruptions are similar to expensive realistic synthetic ones (Sec. [5.2.3](#S5.SS2.SSS3 "5.2.3 Soundness: 3DCC vs Expensive Synthesis ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")) and are applicable to datasets without 3D information (Sec. [5.2.4](#S5.SS2.SSS4 "5.2.4 Effectiveness of applying 3DCC to other datasets ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")) and for semantic tasks (Sec. [5.2.5](#S5.SS2.SSS5 "5.2.5 3DCC evaluations on semantic tasks ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")). Finally, the proposed 3D data augmentation improves robustness qualitatively and quantitatively (Sec. [5.3](#S5.SS3 "5.3 3D data augmentation to improve robustness ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")). Please see the [project page](https://3dcommoncorruptions.epfl.ch) for a live demo and more extensive qualitative results.

Figure 5: 3DCC can be applied to most datasets, even those that do not come with 3D information. Several query images from the ImageNet [[12](#bib.bib12)] and COCO [[39](#bib.bib39)] datasets are shown with near focus, far focus, and fog 3D corruptions applied. Notice how the objects in the circled regions go from sharp to blurry depending on the focus region and scene geometry. To get the depth information needed to create these corruptions, predictions from the MiDaS [[55](#bib.bib55)] model are used. This gives a good enough approximation to generate realistic corruptions (as we will quantify in Sec. [5.2.4](#S5.SS2.SSS4 "5.2.4 Effectiveness of applying 3DCC to other datasets ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")).
###
5.1 Preliminaries
Evaluation Tasks: 3DCC can be applied to any dataset, irrespective of the target task, e.g. dense regression or low-dimensional classification. Here we mainly experiment with surface normals and depth estimation as target tasks widely employed by the community. We note that the robustness of models solving such tasks is underexplored compared to classification tasks (See Sec. [5.2.5](#S5.SS2.SSS5 "5.2.5 3DCC evaluations on semantic tasks ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") for results on panoptic segmentation and object recognition).
To evaluate robustness, we compute the ℓ1 error between predicted and ground truth images.
Training Details: We train UNet [[59](#bib.bib59)] and DPT [[54](#bib.bib54)] models on Taskonomy [[84](#bib.bib84)] using learning rate 5×10⁻⁴ and weight decay 2×10⁻⁶. We optimize the likelihood loss with a Laplacian prior using AMSGrad [[56](#bib.bib56)], following [[79](#bib.bib79)]. Unless specified otherwise, all models use the same UNet backbone (e.g. Fig. [6](#S5.F6 "Figure 6 ‣ 5.1 Preliminaries ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")). We also experiment with DPT models trained on Omnidata [[17](#bib.bib17)], which mixes a diverse set of training datasets. Following [[17](#bib.bib17)], we train with learning rate 1×10⁻⁵ and weight decay 2×10⁻⁶ using angular & ℓ1 losses.
Robustness mechanisms evaluated: We evaluate several popular data augmentation strategies: DeepAugment [[26](#bib.bib26)], style augmentation [[24](#bib.bib24)], and adversarial training [[36](#bib.bib36)]. We also include Cross-Domain Ensembles (X-DE) [[79](#bib.bib79)], which has recently been shown to improve robustness to corruptions by creating diverse ensemble components via input transformations. We refer to the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf) for training details. Finally, we train one model augmented with corruptions from 2DCC [[27](#bib.bib27)] (2DCC augmentation) and another with 3D data augmentation on top of that (2DCC + 3D augmentation).

Figure 6: Existing robustness mechanisms are found to be insufficient for addressing real-world corruptions approximated by 3DCC. Performance of models with different robustness mechanisms under 3DCC for surface normals (left) and depth (right) estimation tasks is shown. All models here are UNets trained on Taskonomy data. Each bar shows the ℓ1 error averaged over all 3DCC corruptions (lower is better). The black error bars show the error at the lowest and highest shift intensity. The red line denotes the performance of the baseline model on clean (uncorrupted) data. This indicates that existing robustness mechanisms, including those with diverse augmentations, perform poorly under 3DCC.
###
5.2 3D Common Corruptions Benchmark
####
5.2.1 3DCC can expose vulnerabilities
We benchmark existing models against 3DCC to understand their vulnerabilities. Note, however, that our main contribution is not the performed analyses but the benchmark itself. State-of-the-art models may change over time; 3DCC aims to identify robustness trends, similar to other benchmarks.
Effect of robustness mechanisms: Figure [6](#S5.F6 "Figure 6 ‣ 5.1 Preliminaries ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") shows the average performance of different robustness mechanisms on 3DCC for surface normals and depth estimation. These mechanisms improve performance over the baseline but remain far from the performance on clean data. This suggests that 3DCC exposes robustness issues and can serve as a challenging testbed. The 2DCC augmentation model attains only a slightly lower ℓ1 error, indicating that diverse 2D data augmentation helps only partially against 3D corruptions.
Effect of dataset and architecture: We provide a detailed breakdown of performance against 3DCC in Fig. [7](#S5.F7 "Figure 7 ‣ 5.2.1 3DCC can expose vulnerabilities ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation"). We first observe that baseline UNet and DPT models trained on Taskonomy have similar performance, especially on the view change corruptions. By training with larger and more diverse data with Omnidata, the DPT performance improves. Similar observations were made on vision transformers for classification [[16](#bib.bib16), [5](#bib.bib5)]. This improvement is notable with view change corruptions, while for the other corruptions, there is a decrease in error from 0.069 to 0.061. This suggests that combining architectural advancements with diverse and large training data can play an important role in robustness against 3DCC. Furthermore, when combined with 3D augmentations, they improve robustness to real-world corruptions (Sec. [5.3](#S5.SS3 "5.3 3D data augmentation to improve robustness ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")).

Figure 7: Detailed breakdown of performance on 3DCC. The benchmark can expose trends and models’ sensitivity to a wide range of corruptions. We show this by training models on either Taskonomy [[84](#bib.bib84)] or Omnidata [[17](#bib.bib17)] and with either a UNet [[59](#bib.bib59)] or DPT [[54](#bib.bib54)] architecture. The average ℓ1 error over all shift intensities for each corruption is shown (lower is better). Top: We observe that Taskonomy models are more susceptible to changes in field of view, camera roll, and pitch compared to the Omnidata-trained model, which is consistent with their methods. Bottom: The numbers in the legend are the average performance over all the corruptions. We can see that all the models are sensitive to 3D corruptions, e.g. z-motion blur and shadow. Overall, training with large diverse data, e.g. Omnidata, and using DPT is observed to notably improve performance.

Figure 8: Redundancy among corruptions. We quantified the pairwise similarity of a subset of corruptions from 2DCC and 3DCC by computing their correlations in the ℓ1 errors of the surface normals predictions (left) and RGB images (right). 3DCC yields lower correlations both within the benchmark and against 2DCC. Thus, 3DCC has a diverse set of corruptions, and these corruptions do not have a significant overlap with 2DCC. Using depth as the target task yields similar conclusions (full affinity matrices are provided in the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf)).
####
5.2.2 Redundancy of corruptions in 3DCC and 2DCC
In Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ 3D Common Corruptions and Data Augmentation"), a qualitative comparison was made between 3DCC and 2DCC: the former generates more realistic corruptions, while the latter does not take the scene 3D into account and applies uniform modifications over the image. In Fig. [8](#S5.F8 "Figure 8 ‣ 5.2.1 3DCC can expose vulnerabilities ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation"), we quantify the similarity between 3DCC and 2DCC. On the left of Fig. [8](#S5.F8 "Figure 8 ‣ 5.2.1 3DCC can expose vulnerabilities ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation"), we compute the correlations of ℓ1 errors between clean and corrupted predictions made by the baseline model for a subset of corruptions (the full set is in the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf)). 3DCC yields lower correlations both within the benchmark and against 2DCC (mean correlations are 0.32 for 2DCC-2DCC, 0.28 for 3DCC-3DCC, and 0.30 for 2DCC-3DCC). Similar conclusions hold for depth estimation (in the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf)). On the right, we provide the same analysis in the RGB domain by computing the ℓ1 error between clean and corrupted images, again showing that 3DCC yields lower correlations.
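The redundancy analysis boils down to correlating per-image error vectors across corruptions; a minimal sketch (the function name and dict layout are our own, for illustration):

```python
import numpy as np

def redundancy_matrix(errors):
    """errors: dict mapping corruption name -> per-image l1 error vector
    (computed over the same set of images). Returns the names and the
    pairwise Pearson correlation matrix; low off-diagonal values mean
    the corruptions stress models in non-redundant ways."""
    names = sorted(errors)
    mat = np.corrcoef(np.stack([np.asarray(errors[n], float)
                                for n in names]))
    return names, mat
```

Averaging the off-diagonal entries within and across the 2DCC and 3DCC subsets yields the mean correlations reported above.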

Figure 9: Visual comparisons of 3DCC and expensive After Effects (AE) generated depth of field effect on query images from Hypersim. 3DCC generated corruptions are visually similar to those from AE.
####
5.2.3 Soundness: 3DCC vs Expensive Synthesis
3DCC aims to expose a model’s vulnerabilities to certain real-world corruptions. This requires the corruptions generated by 3DCC to be similar to real corrupted data. As such labeled data is expensive to generate and scarcely available, as a proxy evaluation we instead compare the realism of 3DCC to synthesis by Adobe After Effects (AE), a commercial product for generating high-quality photorealistic data that often relies on expensive manual processes. To this end, we use the Hypersim [[58](#bib.bib58)] dataset, which comes with high-resolution z-depth labels, and generated 200 near- and far-focused images using both 3DCC and AE.
Figure [9](#S5.F9 "Figure 9 ‣ 5.2.2 Redundancy of corruptions in 3DCC and 2DCC ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") shows sample generated images from both approaches that are perceptually similar.
Next, we computed the prediction errors of a baseline surface normals model when the input comes from 3DCC or AE. The scatter plots of ℓ1 errors are given in Fig. [10](#S5.F10 "Figure 10 ‣ 5.2.4 Effectiveness of applying 3DCC to other datasets ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") and demonstrate a strong correlation, 0.80, between the two approaches. For calibration and control, we also provide scatter plots for some corruptions from 2DCC to gauge the significance of the correlations. They have significantly lower correlations with AE, indicating that the depth of field effect created via 3DCC matches AE-generated data reasonably well.
####
5.2.4 Effectiveness of applying 3DCC to other datasets
We showed qualitatively in Fig. [5](#S5.F5 "Figure 5 ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") that 3DCC can be applied to standard vision datasets like ImageNet [[12](#bib.bib12)] and COCO [[39](#bib.bib39)] by leveraging predicted depth from a state-of-the-art model from MiDaS [[55](#bib.bib55)]. Here, we quantitatively show the impact of using predicted depth instead of ground truth. For this, we use the Replica [[65](#bib.bib65)] dataset that comes with ground truth depth labels. We then generated 1280 corrupted images using ground truth depth and predicted depth from MiDaS [[55](#bib.bib55)] without fine-tuning on Replica. Figure [11](#S5.F11 "Figure 11 ‣ 5.2.4 Effectiveness of applying 3DCC to other datasets ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") shows the trends on three corruptions from 3DCC generated using ground truth and predicted depth.
The trends are similar and the correlation of errors is strong (0.79). This suggests that predicted depth can be effectively used to apply 3DCC to other datasets, and the performance is expected to improve with better depth predictions.

Figure 10: Corruptions of 3DCC are similar to expensive realistic synthetic ones while being cheaper to generate. Scatter plots of ℓ1 errors from the baseline model predictions on 3DCC against those created by Adobe After Effects (AE). The correlation between 3DCC near (far) focus and AE near (far) focus is the strongest (numbers are in the legend of the left column). We also added the most similar corruption from 2DCC (defocus blur) for comparison, yielding weaker correlations (middle). Shot noise (right) is a control baseline, i.e. a randomly selected corruption, to calibrate the significance of the correlation measure.

Figure 11: Effectiveness of applying 3DCC without ground truth depth.
Three corruptions from 3DCC are generated using depth predictions from the MiDaS [[55](#bib.bib55)] model on unseen Replica data. Scatter plots show the ℓ1 errors from the baseline model when corruptions are generated using the predicted depth (x-axis) or the ground truth (y-axis). The trends are similar between the two sets of corrupted data, suggesting the predicted depth is an effective approximation to generate 3DCC. See the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf) for more tests including control baselines.

Figure 12: Qualitative results of learning with 3D data augmentation on random queries from OASIS [[9](#bib.bib9)], AE (Sec. [5.2.3](#S5.SS2.SSS3 "5.2.3 Soundness: 3DCC vs Expensive Synthesis ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")), manually collected DSLR data, and in-the-wild YouTube videos for surface normals. The ground truth is gray when it is not available, e.g. for YouTube. The predictions in the last two rows are from the O+DPT+2DCC+3D (Ours) model. It is further trained with cross-task consistency (X-TC) constraints [[83](#bib.bib83)] (Ours+X-TC). They are noticeably sharper and more accurate. See the [project page](https://3dcommoncorruptions.epfl.ch) and [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf) for more results. A [live demo](https://3dcommoncorruptions.epfl.ch/#livedemo) for user uploaded images is also available.
####
5.2.5 3DCC evaluations on semantic tasks
The previous benchmarking results focused on surface normals and depth estimation. Here we benchmark panoptic segmentation and object recognition as additional illustrative 3DCC evaluations. In particular, for panoptic segmentation we use the semantic corruptions from Sec. [3.1](#S3.SS1 "3.1 Corruption Types ‣ 3 Generating 3D Common Corruptions ‣ 3D Common Corruptions and Data Augmentation"), and for object classification we introduce ImageNet-3DCC by applying corruptions from 3DCC to the ImageNet validation set, similar to 2DCC [[27](#bib.bib27)].
Semantic corruptions: We evaluate the robustness of two panoptic segmentation models from [[17](#bib.bib17)] against occlusion corruption of 3DCC. The models are trained on Omnidata [[17](#bib.bib17)] and Taskonomy [[84](#bib.bib84)] datasets with a Detectron [[75](#bib.bib75)] backbone. See the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf) for details.

Figure 13: Robustness against occlusion corruption of 3DCC. The plot shows the intersection over union (IoU) scores of Detectron models [[75](#bib.bib75)] for different objects over a range of occlusion ratios. The models are trained on Taskonomy [[84](#bib.bib84)] or Omnidata [[17](#bib.bib17)] datasets. The occlusion ratio is defined as the number of occluded pixels divided by the sum of occluded and visible pixels of the object. This is computed over the test scenes of Replica [[65](#bib.bib65)]. The plots expose the occlusion handling capabilities of the models and show that the Omnidata trained model is generally more robust than the Taskonomy one. The degradation in model predictions is class-specific and becomes more severe with higher occlusion ratios.

Figure 14: Robustness on ImageNet-3DCC and ImageNet-2DCC. Errors on ImageNet validation images corrupted by 3DCC and 2DCC are computed for the models in robustness leaderboards [[27](#bib.bib27), [11](#bib.bib11)]. Following [[27](#bib.bib27)], we compute the mean corruption error (mCE) relative to AlexNet [[34](#bib.bib34)]. The performance degrades significantly against ImageNet-3DCC, thus it can serve as a challenging benchmark. As expected, the general trends are similar between the two benchmarks as 2D and 3D corruptions are not completely disjoint. A similar observation was also made in [[45](#bib.bib45)] even when the corruptions are designed to be dissimilar to 2DCC. Still, there are notable differences that can be informative during model development by exposing trends and vulnerabilities that are not captured by 2DCC, e.g. ANT [[61](#bib.bib61)] has better mCE on 2DCC compared to AugMix [[29](#bib.bib29)], while they perform similarly on 3DCC. Likewise, combining DeepAugment [[26](#bib.bib26)] with AugMix improved the performance on 2DCC significantly more than 3DCC. See the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf) for more results.
Figure [13](#S5.F13 "Figure 13 ‣ 5.2.5 3DCC evaluations on semantic tasks ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") quantifies the effect of occlusion on model predictions, i.e. how the models’ intersection over union (IoU) scores change with increasing occlusion, for selected objects. This is computed on the test scenes from Replica [[65](#bib.bib65)]. The Omnidata-trained model is generally more robust than the Taskonomy one, though we see a decrease in IoU in both models as occlusion increases. The trends are class-specific, possibly due to the shape of the objects and their scene context, e.g. fridge predictions remain unchanged up until a 0.50 occlusion ratio, while couch predictions degrade more linearly for the Omnidata model. This evaluation showcases one use of semantic corruptions in 3DCC, which are notably harder to accomplish using other benchmarks that do not operate on 3D scans.
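The occlusion ratio used in this analysis (occluded pixels over the object's total occluded-plus-visible pixels, per Fig. 13) is straightforward to compute from the two masks:

```python
import numpy as np

def occlusion_ratio(visible_mask, occluded_mask):
    """Occluded pixels divided by the object's total (occluded + visible)
    amodal pixel count; guards against empty masks."""
    occ = np.count_nonzero(occluded_mask)
    vis = np.count_nonzero(visible_mask)
    return occ / max(occ + vis, 1)
```

Sweeping rendered viewpoints and binning objects by this ratio produces the IoU-vs-occlusion curves in Fig. 13.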
ImageNet-3DCC: We compare the performance of the robust ImageNet models [[24](#bib.bib24), [61](#bib.bib61), [29](#bib.bib29), [26](#bib.bib26)] from the RobustBench [[11](#bib.bib11)] and ImageNet-2DCC [[27](#bib.bib27)] (i.e. ImageNet-C) leaderboards in Fig. [14](#S5.F14 "Figure 14 ‣ 5.2.5 3DCC evaluations on semantic tasks ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation"). Following 2DCC, we compute the mean corruption error (mCE) by dividing the models’ errors by AlexNet [[34](#bib.bib34)] errors and averaging over corruptions. The performance of the models degrades significantly, including those with diverse augmentations. Thus, ImageNet-3DCC can serve as a challenging benchmark for the object recognition task. As expected, the general trends are similar between the two benchmarks, as 2D and 3D corruptions are not completely disjoint [[45](#bib.bib45)]; still, 3DCC exposes vulnerabilities that are not captured by 2DCC, which can be informative during model development. See the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf) for further results.
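The mCE computation described above can be sketched as follows (a minimal illustration; the corruption names and error rates are placeholders, not the benchmark's actual values):

```python
from statistics import mean

def mean_corruption_error(model_errors, alexnet_errors):
    """mCE: average, over corruptions, of the model's error divided by
    AlexNet's error on the same corruption (errors are assumed to be
    averaged over severity levels beforehand)."""
    return mean(model_errors[c] / alexnet_errors[c] for c in alexnet_errors)

# Placeholder error rates for illustration only.
alexnet = {"near_focus": 0.80, "xy_motion_blur": 0.85, "fog_3d": 0.75}
model = {"near_focus": 0.40, "xy_motion_blur": 0.51, "fog_3d": 0.30}

print(mean_corruption_error(model, alexnet))  # ≈ 0.5, i.e. half of AlexNet's error
```

A value below 1 means the model is more robust than the AlexNet reference, mirroring how the leaderboard numbers are read.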
### 5.3 3D data augmentation to improve robustness
We demonstrate the effectiveness of the proposed augmentations qualitatively and quantitatively.
We evaluate UNet and DPT models trained on Taskonomy (T+UNet, T+DPT) and a DPT trained on Omnidata (O+DPT) to see the effect of the training dataset and model architecture. The training procedure is as described in Sec. [5.1](#S5.SS1 "5.1 Preliminaries ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation"). For the other models, we initialize from the O+DPT model and train with 2DCC augmentations (O+DPT+2DCC) and with 3D augmentations on top of that (O+DPT+2DCC+3D), i.e. our proposed model. We also further train the proposed model using cross-task consistency (X-TC) constraints from [[83](#bib.bib83)], denoted as (Ours+X-TC) in the results. Lastly, we evaluate a model trained on the OASIS training data from [[9](#bib.bib9)] (OASIS).
Qualitative evaluations: We consider i. OASIS validation images [[9](#bib.bib9)], ii. AE corrupted data from Sec. [5.2.3](#S5.SS2.SSS3 "5.2.3 Soundness: 3DCC vs Expensive Synthesis ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation"), iii. manually collected DSLR data, and iv. in-the-wild YouTube videos. Figure [12](#S5.F12 "Figure 12 ‣ 5.2.4 Effectiveness of applying 3DCC to other datasets ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation") shows that predictions made by the proposed models are significantly more robust compared to baselines. We also recommend watching the clips and running the live demo on the [project page](https://3dcommoncorruptions.epfl.ch).
Quantitative evaluations: In Table [1](#S5.T1 "Table 1 ‣ 5.3 3D data augmentation to improve robustness ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation"), we compute the errors made by the models on 2DCC, 3DCC, AE, and the OASIS validation set (no fine-tuning). Again, the proposed models yield lower errors across datasets, showing the effectiveness of the augmentations. Note that robustness against corrupted data is improved without sacrificing performance on in-the-wild clean data, i.e. OASIS.
| Benchmark | T+UNet | T+DPT | OASIS [[9](#bib.bib9)] | O+DPT | O+DPT+2DCC | Ours | Ours+X-TC [[83](#bib.bib83)] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2DCC [[27](#bib.bib27)] (ℓ₁ error) | 8.15 | 7.47 | 15.31 | 6.43 | 5.78 | 5.32 | 5.29 |
| 3DCC (ℓ₁ error) | 7.08 | 6.89 | 15.11 | 6.13 | 5.94 | 5.42 | 5.35 |
| AE (Sec. [5.2.3](#S5.SS2.SSS3 "5.2.3 Soundness: 3DCC vs Expensive Synthesis ‣ 5.2 3D Common Corruptions Benchmark ‣ 5 Experiments ‣ 3D Common Corruptions and Data Augmentation")) (ℓ₁ error) | 12.86 | 12.39 | 16.85 | 7.84 | 6.50 | 4.94 | 5.47 |
| OASIS [[9](#bib.bib9)] (angular error) | 30.49 | 32.13 | 24.63 | 24.42 | 23.67 | 24.65 | 23.89 |
Table 1:
Effectiveness of 3D augmentations quantified using different benchmarks. ℓ₁ errors are multiplied by 100 for readability. The O+DPT+2DCC+3D model is denoted by Ours. We also trained this model using cross-task consistency (X-TC) constraints from [[83](#bib.bib83)] (Ours+X-TC). Our models yield lower errors across the benchmarks. 2DCC and 3DCC are applied on the same Taskonomy test images.
More results are given in the [supplementary](https://3dcommoncorruptions.epfl.ch/3DCC_supp.pdf). Evaluations on the OASIS dataset sometimes show a large variance due to its sparse ground truth.
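The two metrics in Table 1 are standard ones; as a rough illustration (toy values, not the authors' evaluation code), the ℓ₁ error scaled by 100 and the mean angular error over surface normals could be computed as:

```python
import math

def l1_error_x100(pred, target):
    """Mean absolute error x100, as reported in Table 1 for depth-style tasks."""
    return 100.0 * sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def angular_error_deg(pred_normals, gt_normals):
    """Mean angle in degrees between predicted and ground-truth 3D normals."""
    def angle(u, v):
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        cos = sum(a * b for a, b in zip(u, v)) / (nu * nv)
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))
    return sum(angle(u, v) for u, v in zip(pred_normals, gt_normals)) / len(gt_normals)

# Toy values for illustration only.
print(l1_error_x100([0.10, 0.20, 0.35], [0.12, 0.18, 0.30]))  # ≈ 3.0
print(angular_error_deg([(1, 0, 0), (0, 1, 0)], [(0, 1, 0), (0, 1, 0)]))  # 45.0
```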
6 Conclusion and Limitations
-----------------------------
We introduce a framework to test and improve model robustness against real-world distribution shifts, particularly those centered around 3D. Experiments demonstrate that the proposed 3D Common Corruptions is a challenging benchmark that exposes model vulnerabilities under real-world plausible corruptions. Furthermore, the proposed data augmentation leads to more robust predictions compared to baselines. We believe this work opens up a promising direction in robustness research by showing the usefulness of 3D corruptions in benchmarking and training. Below we briefly discuss some of the limitations:
* 3D quality: 3DCC is upper-bounded by the quality of 3D data. The current 3DCC is an imperfect but useful approximation of real-world 3D corruptions, as we showed. The fidelity is expected to improve with higher resolution sensory data and better depth prediction models.
* Non-exhaustive set: Our set of 3D corruptions and augmentations is not exhaustive. It instead serves as a starter set for researchers to experiment with. The framework can be employed to generate more domain-specific distribution shifts with minimal manual effort.
* Large-scale evaluation: While we evaluate some recent robustness approaches in our analyses, our main goal was to show that 3DCC successfully exposes vulnerabilities. Thus, performing a comprehensive robustness analysis is beyond the scope of this work. We encourage researchers to test their models against our corruptions.
* Balancing the benchmark: We did not explicitly balance the corruption types in our benchmark, e.g. having the same number of noise and blur distortions. Our work can further benefit from weighting strategies trying to calibrate average performance on corruption benchmarks, such as [[37](#bib.bib37)].
* Use cases of augmentations: While we focus on robustness, investigating the usefulness of the augmentations in other applications, e.g. self-supervised learning, could be worthwhile.
* Evaluation tasks: We experiment with dense regression tasks. However, 3DCC can be applied to different tasks, including classification and other semantic ones. Investigating failure cases of semantic models, e.g. under smoothly changing occlusion rates for several objects, using our framework could provide useful insights.
Acknowledgement: We thank Zeynep Kar and Abhijeet Jagdev. This work was partially supported by the ETH4D and EPFL EssentialTech Centre Humanitarian Action Challenge Grant.
[LINK] Larry = Harry sans magic? Google vs. Death
Google's announcement, Time magazine's rather sensationalist headline.
In any case, it's nice to know that Google has set out to "challenge ... aging and associated diseases". Apple's Tim Cook:
> For too many of our friends and family, life has been cut short or the quality of their life is too often lacking. Art is one of the crazy ones who thinks it doesn’t have to be this way.
One more step towards "world optimization".
Metaculus is building a team dedicated to AI forecasting
The speed, sophistication, and impacts of AI technology development together comprise some of the most astonishing and significant events of our lifetimes. AI development promises both enormous risks and opportunities for society. Join Metaculus's AI forecasting team and help humankind better navigate this crucial period.
Open roles include:
Machine Learning Engineer - AI Forecasting
You’ll work to enhance the organization and searchability of our AI analyses, ensure that the AI-related data and thinking that we rely on is up-to-date, comprehensive, and well organized, and deliver (via modeling) forecasts on an enormous set of questions concerning the trajectory of AI.
Research Analyst - AI Forecasting
You’ll engage deeply with ideas about the future of AI and its potential impacts, and share insights with the AI research and forecasting communities and with key decision makers. You’ll use crowd forecasting to help generate these insights, writing forecasting questions that are informative and revealing, facilitating forecasting tournaments, and coordinating with Pro Forecasters.
Quantitative Research Analyst - AI Forecasting
You’ll use quantitative modeling to improve our ability to anticipate the future of AI and its impact on the world, enhance our AI-related decision making capabilities, and enable quantitative evaluation of ideas about the dynamics governing AI progress.
You can learn about our other high-impact open positions here.
lessannoying.org
<a href="http://www.lessannoying.org/">http://imgur.com/hNC03.jpg</a>
Philosophical Cyborg (Part 1)
This post is part of the output from AI Safety Camp 2023’s Cyborgism track, run by Nicholas Kees Dupuis - thank you to AISC organizers & funders for their support. Thank you for comments from Peter Hroššo; and the helpful background of conversations about the possibilities (and limits) of LLM-assisted cognition with Julia Persson, Kyle McDonnell, and Daniel Clothiaux.
Epistemic status: this is not a rigorous or quantified study, and much of this might be obvious to people experienced with LLMs, philosophy, or both. It is mostly a writeup of my (ukc10014) investigations during AISC and is a companion to The Compleat Cybornaut.
TL;DR
This post documents research into using LLMs for domains such as culture, politics, or philosophy (which arguably are different - from the perspective of research approach - from science or running a business, the more common suggested use-cases for LLMs/AIs i.e. Conjecture’s CoEm, Ajey Cotra’s scientist model, or Andrew Critch’s production web).
As a case study, I (ukc10014) explore using LLMs to respond to a speculative essay by Paul Christiano: the response is posted here. The current post is more about the process of LLM-assisted, cyborgist reasoning, and follows on from The Compleat Cybornaut.
The takeaway is not surprising: base models are useful for generating ideas, surveying an unfamiliar space, and gathering further avenues for research. RLHF-tuned models like ChatGPT are able to write summaries of existing content, often in considerable detail, but this requires human skill in generating (often, chains of) prompts that tease out the model's latent knowledge, and specifically requires the human to know enough about the topic to ask pointed questions. There is a constant risk of hallucination, particularly when using a chat-type interface, where previous portions of a conversation can ‘infect’ (as well as usefully inform) the current query.
Models are not very helpful in planning the overall research direction or
Is There a Valley of Bad Civilizational Adequacy?
This post is my attempt to think through an idea that I’ve been mulling over since [this discussion](https://twitter.com/cat_neuron/status/1464717285797638145) on Twitter last November prompted by a question of Matthew Barnett, which I was reminded of while reading the section on energy in Zvi’s [recent post](https://www.lesswrong.com/posts/jeaaeDTyj2xmbkxqz/ukraine-post-2-options#Energy) on the war in Ukraine. The meaning of the title, “valley of bad civilizational adequacy,” is the idea that as one relaxes constraints on the bounded rationality of hypothetical future collective human policy decisions, the result may initially be a decrease in expected utility captured by humans, due to increased existential risk from unaligned AGI, before one starts the ascent to the peak of optimal rationality.
Preliminary definition: By **pro-growth policy** I mean a certain set of public policy proposals that aren’t directly about AI, but could shorten the timeline to AGI: less immigration restriction, particularly for high-skill workers; cheaper, denser housing, especially in the SF Bay Area; and cheap energy, by building out lots of nuclear and renewable generation capacity. (Is there anything else that fits this category?)
Main argument
-------------
1. Pro-growth policy can be expected to accelerate AI capabilities research and therefore shorten the timeline to AGI, via the agglomeration effects of more smart people in dense urban tech hubs, decreased energy cost of running ML experiments, and overall economic growth leading to more lucrative investment opportunities and therefore more research funding.
2. Having less time to solve AI safety would cause a greater increase in AI X-risk than any decrease in AI X-risk resulting from pro-growth policy also accelerating AI-alignment research.
3. AI X-risk dominates all other considerations.
4. Therefore pro-growth policy is bad, and AI-risk-alarmist rationalists should not support it, and perhaps should actively oppose it.
Possible counterarguments
-------------------------
(Other than against step 3; the intended audience of this post is people who already accept that.)
The main argument depends on
* the amount that AI-risk alarmists can affect pro-growth policy,
* the effect of such changes in pro-growth policy on the timeline to AGI,
* and the effect of such changes in the AGI timeline on our chances of solving alignment.
One or more of these could be small enough that the AI-risk community’s stance on pro-growth policy is of negligible consequence.
Perhaps pro-growth policy won’t matter because the AGI timeline will be very short, not allowing time for any major political changes and their downstream consequences to play out before the singularity.
Perhaps it’s bad to oppose pro-growth policy because the AGI timeline will be very long: If we have plenty of time, there’s no need to suffer from economic stagnation in the meantime. Furthermore, sufficiently severe stagnation could lead to technological regress, political destabilization that sharply increases and prolongs unnecessary pre-singularity misery, or even the failure of human civilization to ever escape earth.
Even without a very long AGI timeline, perhaps the annual risk of cascading economic and political instability due to tech stagnation, leading to permanent civilizational decline, is so high that it outweighs increased AI X-risk from shortening the AGI timeline.
Perhaps there is no valley of bad civilizational adequacy, or at most a very small valley: A civilization adequate enough to get the rational pursuit of growth right may be likely enough to also get AI risk right that pro-growth policy is positive-EV. E.g. more smart people in dense urban tech hubs might accelerate AI-safety research enough to outweigh the increased risk from also accelerating capabilities research. (This seems less implausible w.r.t. housing and immigration policy than energy policy, since running lots of expensive large-scale ML experiments seems to me to be particularly likely to advance capabilities more than safety.)
I find the cumulative weight of these counterarguments underwhelming, but I also find the conclusion of the main argument very distasteful, and it certainly seems to run counter to the prevailing wisdom of the AI-risk community. Perhaps I am missing something?
Generalizing from One Trend
Related: Reference Class of the Unclassreferenceable, Generalizing From One Example
Many people try to predict the future. Few succeed.
One common mistake made in predicting the future is to simply take a current trend and extrapolate it forward, as if it was the only thing that mattered-- think, for instance, of the future described by cyberpunk fiction, with sinister (and often Japanese) multinational corporations ruling the world. Where does this vision of the future stem from?
Bad or lazy predictions from the 1980s, when sinister multinational corporations (and often Japanese ones) looked to be taking over the world.[1]
Similar errors have been committed by writers throughout history. George Orwell thought 1984 was an accurate prediction of the future, seeing World War II as inevitably bringing socialist revolution to the United Kingdom and predicting that the revolutionary ideals would then be betrayed in England as they were in Russia. Aldous Huxley agreed with Orwell but thought that the advent of hypnosis and psychoconditioning would cause the dystopia portrayed in 1984 to evolve into that he described in Brave New World. In today's high school English classes, these books are taught as literature, as well-written stories-- the fact that the authors took their ideas seriously would come as a surprise to many high school students, and their predictions would look laughably wrong.
Were such mistakes confined solely to the realm of fiction, they would perhaps be considered amusing errors at best, reflective of the sorts of mishaps that befall unstudied predictions. Unfortunately, they are not. Purported "experts" make just the same sort of error regularly, and failed predictions of this sort often have negative consequences in reality.
For instance, in 1999 two economists published the book Dow 36,000, predicting that stocks were about to reach record levels; the authors of the book were so wrapped up in recent gains to the stock market that they assumed
Managing AI Risks in an Era of Rapid Progress
Authors
Yoshua Bengio
Geoffrey Hinton
Andrew Yao
Dawn Song
Pieter Abbeel
Yuval Noah Harari
Ya-Qin Zhang
Lan Xue
Shai Shalev-Shwartz
Gillian Hadfield
Jeff Clune
Tegan Maharaj
Frank Hutter
Atılım Güneş Baydin
Sheila McIlraith
Qiqi Gao
Ashwin Acharya
David Krueger
Anca Dragan
Philip Torr
Stuart Russell
Daniel Kahneman
Jan Brauner
Sören Mindermann
arXiv
Forthcoming.
Paper · PDF copy · Policy supplement
> Abstract: In this short consensus paper, we outline risks from upcoming, advanced AI systems. We examine large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems. In light of rapid and continuing AI progress, we propose urgent priorities for AI R&D and governance.
In 2019, GPT-2 could not reliably count to ten. Only four years later, deep learning systems can write software, generate photorealistic scenes on demand, advise on intellectual topics, and combine language and image processing to steer robots. As AI developers scale these systems, unforeseen abilities and behaviors emerge spontaneously without explicit programming. Progress in AI has been swift and, to many, surprising.
The pace of progress may surprise us again. Current deep learning systems still lack important capabilities and we do not know how long it will take to develop them. However, companies are engaged in a race to create generalist AI systems that match or exceed human abilities in most cognitive work. They are rapidly deploying more resources and developing new techniques to increase AI capabilities. Progress in AI also enables faster progress: AI assistants are increasingly used to automate programming [4] and data collection [5,6] to further improve AI systems [7].
There is no fundamental reason why AI progress would slow or halt at the human level. Indeed, AI has already surpassed human abilities in narrow domains like protein folding or strategy games [8–
Thoughts from a conversation on quantum immortality
[Today, I participated in a conversation about the idea of quantum immortality. I decided to summarise some of the thoughts that came up in this short post. Therefore, it should be viewed as a report on a discussion rather than an attempt at a proper post.]
Assume that the many-worlds interpretation is correct and quantum immortality is true. Essentially, in at least one universe you survive no matter what dangerous things you try. Since there is no defined biological "expiry date" on your body, you end up in a state where your body just continues avoiding terminal shutdown. Your body is failing, but the space of probabilistic events (such as a given organ failing, or a given blood vessel rupturing, or two given molecules interacting, or others, however minor) which lead to terminal shutdown is sufficiently large to last you for a while, with at least one universe where you still happen to be alive.
The above process probably takes a while(?), until the entire space of events is explored (your body is finite) and you die in all universes. But in this case we don't have quantum immortality, only "maximally delayed mortality" (MDM).
We can only observe quantum immortality in relation to ourselves. Thus, in the universe where you survive, everyone else around you is likely to be dead, since the probability of two individuals surviving to this stage in the same universe is much smaller than the probability of you alone surviving. Therefore, you end up in an incapacitated state of continuous (eventually, lethal) failing of your body, completely alone in a universe where everyone else is dead.
The argument in the previous paragraph does not account for new people being born. So assuming no catastrophic event killing everyone else but you occurred, there may be other people in the universe where you are. But then you will not be in a state to appreciate that towards the end of your MDM.
So, if many-worlds is true, are we all going to end up experiencing a slow gradual fade-out o
Should we “go against nature”?
Should we “go against nature”? Or live in “harmony” with it? There are two senses of “nature.” Teasing them apart clarifies this issue.
“Nature” can mean immutable natural law. We defy this at our peril. If we dump raw sewage where we get our drinking water, we will suffer epidemics. If we expose ourselves to radiation, we will die from cancer. If we fail to irrigate our fields, we will go hungry at the first drought. (These are Kipling’s “gods of the copybook headings.”)
But another sense of “nature” is: whatever exists and whatever happens apart from the agency of humanity. It is the chance arrangement of molecules and their motions, before or separate from the conscious, directed, purposefulness of human beings. A river, pursuing its natural course, whether or not it is navigable, whether or not it causes dangerous flooding. A field, with whatever natural level of fertility it happens to have, and whatever plants happen to be growing in it, whether or not they are edible. Wild animals, whether or not they are good companions, whether or not they attack us, whether or not they destroy our crops or our homes, whether or not they carry disease.
This second sense of “nature” is amoral and indifferent to the needs of life. All living organisms survive through an active process of exploiting the resources of their environment. The only difference between humans and other life forms is that we do it using conceptual intelligence. I’ve been admonished that “we are a part of nature.” Of course, we are—but science, technology, and industry are a part of our nature.
To champion “nature” in this sense is not, strictly speaking, to be for anything. It is not in favor of animals or plants, who face a brutal struggle for survival in nature. It is not in favor of rocks or rivers, which are inanimate and have no needs or desires. It is only against. It is against humanity and human agency—choice and purpose—because it is for whatever humans have not done, have not touched, ha
[AN #125]: Neural network scaling laws across multiple modalities
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter.
Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-125)** (may not be up yet).
Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.
SECTIONS
========
**[HIGHLIGHTS](about:blank#HIGHLIGHTS)**
**[TECHNICAL AI ALIGNMENT](about:blank#TECHNICAL_AI_ALIGNMENT)**
**[MESA OPTIMIZATION](about:blank#MESA_OPTIMIZATION)**
**[FORECASTING](about:blank#FORECASTING)**
**[OTHER PROGRESS IN AI](about:blank#OTHER_PROGRESS_IN_AI)**
**[REINFORCEMENT LEARNING](about:blank#REINFORCEMENT_LEARNING)**
HIGHLIGHTS
===========
**[Scaling Laws for Autoregressive Generative Modeling](https://arxiv.org/abs/2010.14701)** *(Tom Henighan, Jared Kaplan, Mor Katz et al)* (summarized by Asya): This paper looks at scaling laws for generative Transformer models of images (predicting pixels or parts of image encodings), videos (predicting frames of image encodings), multimodal image <-> text (predicting captions based on images or images based on captions), and mathematical problem solving (predicting answers to auto-generated questions about algebra, arithmetic, calculus, comparisons, integer properties, measurement, polynomials, and probability). The authors find that:
- Cross-entropy loss as a function of compute follows a power law + constant in all these data modalities (just as it does **[in language](https://arxiv.org/abs/2001.08361)** (**[AN #87](https://mailchi.mp/c29b3247da6f/4da2bu7tjd)**)). Information theoretically, this can be interpreted as scaling a 'reducible loss' which estimates the KL divergence between the true and model distributions, and an 'irreducible loss' which estimates the entropy of the true data distribution.
- Performance on ImageNet classification fine-tuned from their generative image model also follows such a power law, whereas ImageNet classification trained *from scratch* actually gets worse with sufficiently large model sizes. Interestingly, this classification power law continues even past model sizes where the generative cross-entropy loss starts bending as a result of irreducible loss. The authors conclude that approaching the irreducible loss for some dataset does not necessarily indicate diminishing returns for representation quality or semantic content.
- Optimal model size as a function of compute follows a power law with an exponent very close to ~0.7 for all data modalities they've studied so far. This implies that in the current compute regime, as compute budgets grow, it's best to devote a majority of compute towards making models bigger and a minority towards training on more data.
- Larger models perform better on extrapolating to math problems more difficult than those seen in training, but only insofar as they do better on the training distribution (no benefits to 'strong generalization').
- Larger models are able to take advantage of more multimodal information, but the scaling is extremely slow-- a 1-billion-parameter model uses 10% of the information in a caption to define an image, while using 20% of the information would require a 3-trillion-parameter model.
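The "power law + constant" shape in the first bullet, L(C) = a·C^(−b) + c with c playing the role of the irreducible loss, is easy to sketch. The constants below are made up for illustration (not fitted values from the paper); the point is that once the irreducible term is accounted for, the exponent falls out of a straight-line fit in log-log space:

```python
import math

# Power law plus constant: c plays the role of the irreducible loss
# (entropy of the data); a, b, c are made-up illustrative constants.
def loss(C, a=2.0, b=0.3, c=0.5):
    return a * C ** (-b) + c

# Synthetic "measurements" at several compute budgets.
budgets = [10.0 ** k for k in range(3, 10)]
losses = [loss(C) for C in budgets]

# With the irreducible loss c known (or estimated), log10(L - c) is linear
# in log10(C), so the exponent b falls out of a least-squares line fit.
xs = [math.log10(C) for C in budgets]
ys = [math.log10(L - 0.5) for L in losses]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
print(round(-slope, 3))  # recovers b = 0.3
```

This also illustrates the summary's point about the loss curve "bending": as C grows, the reducible term shrinks toward zero and the measured loss flattens out at c even though the underlying power law is unchanged.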
As in the **[language models paper](https://arxiv.org/abs/2001.08361)** (**[AN #87](https://mailchi.mp/c29b3247da6f/4da2bu7tjd)**), extrapolating the steep power laws found for optimally-used compute seems to eventually paradoxically result in loss lower than the bound given by shallower power laws for optimally-used training data. The authors offer a potential hypothesis for resolving this inconsistency-- in the regime of less compute and smaller model sizes, increasing model size effectively increases the amount of information you extract from each data point you train on, resulting in the steepness of the current compute law. As compute increases past a certain point, however, the amount of information extracted per data point approaches the maximum amount possible, so the curve switches to a shallower regime and marginal compute should be used increasingly on dataset increases rather than model size increases. If this hypothesis is true, we should eventually expect the scaling laws for compute to bend towards laws set by dataset size, and perhaps should think they will ultimately be set by trends for overfitting (see **[this post](https://www.alignmentforum.org/posts/diutNaWF669WgEt3v/the-scaling-inconsistency-openai-s-new-insight)** for another explanation of this).
**Read more:** **[the scaling “inconsistency”: openAI’s new insight](https://www.alignmentforum.org/posts/diutNaWF669WgEt3v/the-scaling-inconsistency-openai-s-new-insight)**
**Asya's opinion:** I would also recommend listening to **[Jared Kaplan's talk](https://www.youtube.com/watch?v=QMqPAM_knrE)** on this.
I was really excited to learn about more empirical work here. These results suggest that scaling behavior predictable with smooth power-laws is likely a feature of most generative models, not just text. I found it surprising that optimal model size given a compute budget scales the same way across data modalities-- it does seem to suggest that there's something more fundamental going on here that I don't understand (but which may be explained in **[this theory paper](https://arxiv.org/abs/2004.10802)** that I haven't read). It's also interesting that pretraining on a generative model (rather than training from scratch) seems to confer real benefits to scaling behavior for image classification-- this lends some support to the view that a lot of the learning that needs to happen will come from unsupervised settings.
A lot of the most salient questions around current scaling laws for me still lie in the translation between cross-entropy loss in these domains and performance on downstream tasks we care about. I feel very unsure about whether any of the fine-tuned generative models we (currently) have the data to train are likely to have transformative performance within even the next 5 orders of magnitude of compute scaling.
**Rohin's opinion:** In addition to the points Asya made above, I wanted to speculate on the implications of these scaling laws for AGI. I was particularly struck by how well these scaling laws seem to fit the data. This was also true in the case of mathematics problems, at least for the models we have so far, even though intuitively math requires “reasoning”. This suggests to me that even for tasks that require reasoning, capability will increase smoothly along a spectrum, and the term “reasoning” is simply a descriptor of a particular capability level. (An alternative position is that “reasoning” happens only to the extent that the neural net is implementing an algorithm that can justifiably be known to always output the right answer, but this sort of definition usually implies that humans are not doing reasoning, which seems like a deal-breaker.)
Note however that we haven't gotten to the level of performance that would be associated with "reasoning", so it is still *possible* that the trends stop holding and reasoning then leads to some sort of discontinuous increase in performance. I just wouldn't bet on it.
TECHNICAL AI ALIGNMENT
=======================
MESA OPTIMIZATION
------------------
**[Confucianism in AI Alignment](https://www.alignmentforum.org/posts/3aDeaJzxinoGNWNpC/confucianism-in-ai-alignment)** *(John Wentworth)* (summarized by Rohin): Suppose we trained our agent to behave well on some set of training tasks. **[Mesa optimization](https://arxiv.org/abs/1906.01820)** (**[AN #58](https://mailchi.mp/92b3a9458c2d/an-58-mesa-optimization-what-it-is-and-why-we-should-care)**) suggests that we may still have a problem: the agent might perform poorly during deployment, because it ends up optimizing for some misaligned *mesa objective* that only agrees with the base objective on the training distribution.
This post suggests that in any training setup in which mesa optimizers would normally be incentivized, it is not sufficient to just prevent mesa optimization from happening. The fact that mesa optimizers could have arisen means that the incentives were bad. If you somehow removed mesa optimizers from the search space, there would still be a selection pressure for agents that without any malicious intent end up using heuristics that exploit the bad incentives. As a result, we should focus on fixing the incentives, rather than on excluding mesa optimizers from the search space.
**[Clarifying inner alignment terminology](https://www.alignmentforum.org/posts/SzecSPYxqRa5GCaSF/clarifying-inner-alignment-terminology)** *(Evan Hubinger)* (summarized by Rohin): This post clarifies the author’s definitions of various terms around inner alignment. Alignment is split into intent alignment and capability robustness, and then intent alignment is further subdivided into outer alignment and objective robustness. Inner alignment is one way of achieving objective robustness, in the specific case that you have a mesa optimizer. See the post for more details on the definitions.
**Rohin's opinion:** I’m glad that definitions are being made clear, especially since I usually use these terms differently than the author. In particular, as mentioned in my opinion on the highlighted paper, I expect performance to smoothly go up with additional compute, data, and model capacity, and there won’t be a clear divide between capability robustness and objective robustness. As a result, I prefer not to divide these as much as is done in this post.
FORECASTING
------------
**[Measuring Progress in Deep Reinforcement Learning Sample Efficiency](https://openreview.net/forum?id=_QdvdkxOii6)** *(Anonymous)* (summarized by Asya) (H/T Carl Shulman): This paper measures historic increases in sample efficiency by looking at the number of samples needed to reach some fixed performance level on Atari games and virtual continuous control tasks. The authors find exponential progress in sample efficiency, with estimated doubling times of 10 to 18 months on Atari, 5 to 24 months on state-based continuous control, and 4 to 9 months on pixel-based continuous control, depending on the specific task and performance level. They find that these gains were mainly driven by improvements in off-policy and model-based deep RL approaches, as well as the use of auxiliary learning objectives to speed up representation learning, and not by model size improvements. The authors stress that their study is limited in that it covers only the published training curves for three tasks, and does not account for the extent to which hyperparameter tuning may have been responsible for historic gains.
**Asya's opinion:** Following in the footsteps of **[AI and Efficiency](https://openai.com/blog/ai-and-efficiency/)** (**[AN #99](https://mailchi.mp/4f7ffc5cbe53/an-99-doubling-times-for-the-efficiency-of-ai-algorithms)**), here we have a paper showing exponential gains in sample efficiency in particular. I'm really glad someone did this analysis-- I think I'm surprised by how fast progress is, though as the paper notes it's unclear exactly how to relate historic improvements on fixed task performance to a sense of overall improvement in continuous control (though several of the main contributors listed in the appendix seem fairly general). I also really appreciate how thorough the full paper is in listing limitations to this work.
Since these papers are coming up in the same newsletter, I'll note the contrast between the data-unlimited domains explored in the scaling laws paper and the severely data-limited domain of real-world robotics emphasized in this paper. In robotics, it seems we are definitely still constrained by algorithmic progress that lets us train on fewer samples (or do better **[transfer from simulations](https://www.alexirpan.com/2019/10/29/openai-rubiks.html)** (**[AN #72](https://mailchi.mp/cac125522aa3/an-72-alignment-robustness-methodology-and-system-building-as-research-priorities-for-ai-safety)**)). Of course, maybe progress in data-unlimited domains will ultimately result in AIs that make algorithmic progress in data-limited domains faster than humans ever could.
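For intuition about what these doubling times mean, here is a quick hypothetical calculation (the numbers are invented, not the paper's): if samples-to-threshold fell 16x over 48 months, the implied doubling time of sample efficiency is 12 months.

```python
import math

def doubling_time(elapsed_months: float, efficiency_ratio: float) -> float:
    """Doubling time implied by exponential growth:
    efficiency_ratio = 2 ** (elapsed_months / doubling_time)."""
    return elapsed_months * math.log(2) / math.log(efficiency_ratio)

# Hypothetical numbers: a 16x drop in samples-to-threshold over 4 years
# implies a doubling time of about 12 months.
print(doubling_time(48, 16))
```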
OTHER PROGRESS IN AI
=====================
REINFORCEMENT LEARNING
-----------------------
**[DeepSpeed: Extreme-scale model training for everyone](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)** *(DeepSpeed Team et al)* (summarized by Asya): In this post, Microsoft announces updates to DeepSpeed, its open-source deep learning training optimization library. The new updates include:
- '3D parallelism', a scheme for carefully optimizing how training runs are split across machines. Training runs that use 3D parallelism demonstrate linear scaling of GPU memory and compute efficiency, enabling the theoretical training of extremely large models of over a trillion parameters on as few as 800 NVIDIA V100 GPUs.
- 'ZeRO-Offload', which allows CPU memory to be used during training runs, enabling running models of up to 13 billion parameters on a single NVIDIA V100 GPU.
- 'DeepSpeed Sparse Attention', an instrumental technology that reduces the compute and memory requirements of attention computations used in models like Transformers. Compared to models that use densely computed attention, this enables models that pay attention to sequences that are 10x longer and can be trained up to 6.3x faster.
- '1-bit Adam', a scheme for compressing the communication requirements between machines doing training runs that use the Adam gradient descent optimizer. 1-bit Adam enables up to 5x less communication and up to 3.5x faster training runs.
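To give a flavor of the idea behind 1-bit compression schemes like 1-bit Adam, here is a toy sketch of sign compression with error feedback; this is illustrative only, not DeepSpeed's actual implementation:

```python
import numpy as np

def one_bit_compress(grad, error):
    """Toy 1-bit compression with error feedback: transmit only the sign
    of (grad + carried error) plus one shared scale; keep the residual
    locally and fold it into the next step instead of discarding it."""
    corrected = grad + error
    scale = np.mean(np.abs(corrected))        # one float per tensor
    compressed = scale * np.sign(corrected)   # 1 bit per element + scale
    new_error = corrected - compressed        # residual carried forward
    return compressed, new_error

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
err = np.zeros_like(g)
compressed, err = one_bit_compress(g, err)
```

Each element costs one sign bit plus a shared scale, and the quantization error is carried over to the next step rather than thrown away.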
**[Fast reinforcement learning through the composition of behaviours](https://deepmind.com/blog/article/fast-reinforcement-learning-through-the-composition-of-behaviours)** *(André Barreto et al)* (summarized by Flo): While model-based RL agents can easily adapt their policy to changed rewards on the same environment, planning is expensive and learning good models can be challenging for many tasks. On the other hand, it is challenging to get model-free agents to adapt their policy to a new reward without extensive retraining. An intermediate solution is to use so-called successor features: Instead of a value function **V(π,s)** representing the expected discounted reward for a policy **π** starting in state **s**, successor features are a vector-valued value function **ψ(π,s)** representing an expected discounted feature vector **ϕ**. If our reward equals **r = w ⋅ ϕ** for some weight vector **w**, we can easily obtain the original value function by taking the scalar product of the successor features and the weight vector: **V(π,s) = w ⋅ ψ(π,s)**. Successor features thus allow us to evaluate a fixed policy **π** for all rewards that are linear in **ϕ**, which is called *generalized policy evaluation*.
Now that we can evaluate policies for different preferences, we would like to efficiently find a good policy for a given novel preference. Inspired by human learning that often combines previously learned skills, we employ *generalized policy improvement*. In vanilla policy improvement, we improve upon a policy **π** we can evaluate by choosing the action that maximizes the immediate reward plus the discounted value **V(π,s')** of following **π** starting in the next state **s'**. In generalized policy improvement, we have multiple policies and choose the action that maximizes the reward plus the discounted value of following the best of these policies starting in the next state **s'**. To obtain a policy for the new preference, we "stitch together" all policies we learnt for previous preferences, and the resulting policy performs at least as well as each of the old policies with respect to the new preference. As generalized policy improvement does not require any additional environment samples, it enables zero-shot transfer to new preferences. Empirically, even if the weight vector **w** has to be learnt from reward signals, generalized policy improvement is very sample efficient. Additional samples can then be used to further improve the policy using standard RL.
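A tabular sketch of generalized policy evaluation and improvement, for a single state (all numbers invented for illustration):

```python
import numpy as np

# psi[i, a] is policy i's successor-feature vector psi(pi_i, s, a) for
# one fixed state s; the numbers are made up.
psi = np.array([
    [[1.0, 0.0], [0.2, 0.1]],   # policy 0 is good at gathering feature 0
    [[0.0, 0.2], [0.1, 1.0]],   # policy 1 is good at gathering feature 1
])

def gpi_action(w):
    """Generalized policy evaluation: Q_i(s, a) = w . psi_i(s, a).
    Generalized policy improvement: act greedily with respect to the
    best previously learned policy for the new preference w."""
    q = psi @ w                        # shape: (n_policies, n_actions)
    return int(np.argmax(q.max(axis=0)))

print(gpi_action(np.array([1.0, 0.0])))  # prefers feature 0 -> action 0
print(gpi_action(np.array([0.5, 0.5])))  # mixed preference  -> action 1
```

Note how different preference vectors stitch together actions backed by different old policies, with no new environment samples.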
**Read more:** **[Fast reinforcement learning with generalized policy updates](https://www.pnas.org/content/early/2020/08/13/1907370117)**
**Flo's opinion:** I really like the idea of successor features. Similar to model-based systems, they allow us to evaluate policies for many different rewards, which can be useful for anticipating problematic behaviour before deploying a system. However, note that we still need to execute the policy we obtained by generalized policy improvement to evaluate it for different rewards: The only guarantee we have is that it is better than the previous policies for the reward for which the improvement step was carried out (and potentially some weaker bounds based on the similarity of different rewards).
**[γ-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction](https://arxiv.org/abs/2010.14496)** *(Michael Janner et al)* (summarized by Flo): Long planning horizons are often necessary for competitive performance of model-based agents, but single-step models get less and less accurate with longer planning horizons as errors accumulate. Model-free algorithms don't have this problem but are usually reward- and policy-specific, such that transfer to other tasks can be hard. The paper proposes policy-specific γ-models as an intermediate solution: instead of learning the distribution of the next state given a state-action pair **(s,a)**, or the final state of an n-step rollout given **(s,a)** and a policy **π**, it learns the distribution of the final state of a rollout with a stochastic, geometrically distributed length. Unlike for n-step models with n>1, the distribution follows a Bellman-style decomposition into the single-step distribution and the discounted distribution for the next state **s'**, which allows for off-policy training of the model by bootstrapping the target distribution.
Now, if rewards are consequentialist in the sense that they only depend on the state, the expected reward under this distribution is equal to **1-γ** times the Q-value for **π** of **(s,a)** such that we can use the model for policy evaluation given arbitrary consequentialist rewards. Similar to how single-step models (0-models) can be rolled out to obtain (less accurate) multi-step models, sequential rollouts of a γ-model can be reweighted to obtain a γ-model with larger **γ**. While this introduces some error, it reduces the bootstrap error during training, which grows with **γ**. Being able to interpolate between rollouts of single-step models that accumulate error during testing and models with large **γ** that accumulate error during training allows us to find a sweet spot between the two extremes.
In practice, single-step models are often used for model-based value expansion (MVE), where only **N** steps are rolled out and a value function is used for evaluating longer-term consequences. The authors' algorithm, γ-MVE, instead uses **N** rollout steps of the γ-model and adjusts the weighting of the value function accordingly. γ-MVE performs strongly both in terms of sample efficiency and final performance on a set of low-dimensional continuous control tasks.
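The policy-evaluation identity behind γ-models can be sanity-checked on a toy chain (my own example, not the paper's): sampling a state at a geometrically distributed time and averaging a state-dependent reward recovers **(1-γ)** times the value function.

```python
import numpy as np

# Toy check: for a state-dependent reward, E[r(s_T)] with
# P(T = t) = (1 - gamma) * gamma^t equals (1 - gamma) * V(s0).
# Assumed environment: deterministic chain s -> s + 1 from s0 = 0,
# with r(s) = 1 for s >= 3 and 0 otherwise.
gamma = 0.9
analytic_v = gamma**3 / (1 - gamma)      # V(s0) = sum_{t>=3} gamma^t

rng = np.random.default_rng(0)
draws = rng.geometric(1 - gamma, size=200_000)   # support {1, 2, ...}
horizons = draws - 1                             # P(T=t) = (1-gamma)*gamma^t
mc_estimate = np.mean(horizons >= 3)             # s_T = T, so r(s_T) = 1{T>=3}

# mc_estimate and (1 - gamma) * analytic_v should both be ~ 0.729
print(mc_estimate, (1 - gamma) * analytic_v)
```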
**Flo's opinion:** I am a bit surprised that this works so well, as both bootstrapping and learning generative models for distributions can be unstable and the method combines both. On the other hand, there is a long tradition of continuous interpolations between different RL algorithms and their performance at the sweet spot is often significantly stronger than at the extremes.
#### **FEEDBACK**
I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**.
#### **PODCAST**
An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.
How Big a Deal are MatMul-Free Transformers?
If you’re already familiar with the technical side of LLMs, you can skip the first section.
The story so far
Modern Large Language Models - your ChatGPTs, your Geminis - are a particular kind of transformer, a deep learning architecture invented about seven years ago. Without getting into the weeds, transformers basically work by turning an input into numbers, and then doing tons and tons of matrix operations on those numbers. Matrix operations, and in particular matrix multiplication (henceforth MatMul), are computationally expensive. How expensive? Well, graphics cards are unusually good at matrix multiplication, and NVIDIA, the main company making these, was the most valuable company on Earth earlier this month.
Over the last few years, spurred on by extreme investment, transformers have gotten larger and stronger. How good transformers are is multidimensional, and is roughly captured by scaling laws: basically, models get better when you give them more (high quality) data, make them bigger, or train them for longer.
I’ve written before about the data wall, the hypothesis that we’re running out of new data to train cutting edge AI systems on. But another path to much stronger AI would be if we trained them more efficiently: if you have to do way fewer (or way easier) mathematical operations when training an AI, you can do a lot more training on the same (gigantic) budget.
Basically, holding training data constant, if you can train a model twice as efficiently, you can also make it twice as big.[1] Which is a big deal in a world where there may be bottlenecks for other ways to make better AI: if it isn’t the data wall, it may well be a wall of regulation preventing the insane power consumption requirements of a trillion-dollar cluster.
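A back-of-the-envelope version of this claim, using the standard C ≈ 6·N·D approximation for dense transformer training compute (C: training FLOPs, N: parameters, D: training tokens); the budget and token counts below are made-up round numbers:

```python
# Illustrative only: a 2x training-efficiency gain acts like a 2x larger
# effective compute budget, so at fixed data the affordable model is
# twice as big.
def trainable_params(budget_flops: float, tokens: float) -> float:
    # Standard approximation: budget ~ 6 * params * tokens
    return budget_flops / (6.0 * tokens)

budget, tokens = 1e24, 1e12
baseline = trainable_params(budget, tokens)
doubled = trainable_params(2 * budget, tokens)   # 2x effective budget
print(doubled / baseline)
```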
Cutting edge labs are in an intense race to make transformative AI, so we don’t know what kinds of efficiency advances they’ve been making for the past few years. But there has been hubbub the last few weeks about a new kind
Population ethics and utility indifference
It occurs to me that the various utility indifference approaches might be usable in population ethics.
One challenge for non-total utilitarians is how to deal with new beings. Some theories - average utilitarianism, for instance, or some other systems that use overall population utility - have no problem dealing with this. But many non-total utilitarians would like to see creating new beings as a strictly neutral act.
One way you could do this is by starting with a total utilitarian framework, but subtracting a certain amount of utility every time a new being B is brought into the world. In the spirit of utility indifference, we could subtract exactly the utility we expect B to enjoy during their life.
This means that we should be indifferent as to whether B is brought into the world or not, but, once B is there, we should aim to increase B's utility. There are two problems with this. The first is that, strictly interpreted, we would also be indifferent to creating people with negative utility. This can be addressed by only doing the "utility correction" if B's expected utility is positive, thus preventing us from creating beings only to have them suffer.
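A minimal formalization of this corrected total (my notation, not the post's):

```latex
W' \;=\; \underbrace{\sum_{i} u_i}_{\text{total utility}} \;-\; \sum_{B \,\in\, \text{new beings}} \max\bigl(0,\ \mathbb{E}[u_B]\bigr)
```

The max(0, ·) implements the carve-out above: creating a being is neutral in expectation, but negative expected utilities are never cancelled, so we are not indifferent to creating beings who will suffer.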
The second problem is more serious. What about all the actions that we could do, ahead of time, in order to harm or benefit the new being? For instance, it would seem perverse to argue that buying a rattle for a child after they are born (or conceived) is an act of positive utility, whereas buying it before they were born (or conceived) would be a neutral act, since the increase in expected utility for the child is cancelled out by the above process. Not only is it perverse, but it isn't timeless, and isn't stable under self-modification.
What would be needed is a natural, timeless zero for the act of bringing a being into existence. Something that took into account things done before the being is created as well as after. A sort of Rawlsian veil of ignorance about whether the being is created at all
Discuss: How to learn math?
Learning math is hard. Those that have braved some of its depths, what did you discover that allowed you to go deeper?
This is a place to share insights, methods, and tips for learning mathematics effectively, as well as resources that contain this information.
The randomness/ignorance model solves many anthropic problems
(Follow-up to Randomness vs Ignorance and Reference Classes for Randomness)
I've argued that all uncertainty can be divided into randomness and ignorance and that this model is free of contradictions. Its purpose is to resolve anthropic puzzles such as the Sleeping Beauty problem.
If the model is applied to these problems, they appear to be underspecified. Details required to categorize the relevant uncertainty are missing, and this underspecification might explain why there is still no consensus on the correct answers. However, if the missing pieces are added in such a way that all uncertainty can be categorized as randomness, the model does give an answer. Doing this doesn't just solve a variant of the problem, it also highlights the parts that make these problems distinct from each other.
I'll go through two examples to demonstrate this. The underlying principles are simple, and the model can be applied to every anthropic problem I know of.
1. Sleeping Beauty
In the original problem, a coin is thrown at the beginning to decide between the one-interview and the two-interview version of the experiment. In our variation, we will instead repeat the experiment 2n times and have n of those run the one-interview version, and another n run the two-interview version. Sleeping Beauty knows this but isn't being told which version she's currently participating in. This leads to 2n instances of Sleeping Beauty waking up on Monday, and n instances of her waking up on Tuesday. All instances fall into the same reference class, because there is no information available to tell them apart. Thus, Sleeping Beauty's uncertainty about the current day is random with probability 2/3 for Monday.
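The arithmetic behind that reference-class count, written out (illustrative only):

```python
# Counting awakenings across n runs of each variant of the experiment:
n = 1000
monday_wakings = n + n      # every run includes a Monday awakening
tuesday_wakings = n         # only the two-interview runs add a Tuesday one
p_monday = monday_wakings / (monday_wakings + tuesday_wakings)
# p_monday is 2/3 regardless of n
```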
2. Presumptuous Philosopher
In the original problem, the debate is about the question of how the size of the universe influences the probability that the universe is large, but it is unspecified whether our current universe is the only universe.
Let's fill in the blanks. Suppose there is o
Firewalling the Optimal from the Rational
Followup to: Rationality: Appreciating Cognitive Algorithms (minor post)
There's an old anecdote about Ayn Rand, which Michael Shermer recounts in his "The Unlikeliest Cult in History" (note: calling a fact unlikely is an insult to your prior model, not the fact itself), which went as follows:
Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. "When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks."
Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. And one of the ways we do that is by trying to deflate the word 'rational' out of sentences, especially in post titles or critical comments, which can live without the word. As you hopefully recall from the previous post, we're only forced to use the word 'rational' when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences. Otherwise the word can be deflated out of the sentence; e.g. "It's rational to believe in anthropogenic global warming" goes to "Human activities are causing global temperatures to rise"; or "It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".
If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting". A post about rational dieting is if you're writing about how the sunk cost fallacy causes people to eat food they've already purchased even if they're not hungry, or if you're writing about how the typical mind fallacy or law of small numbers leads people to ov
Number-guessing protocol?
Suppose several people are guessing a number, and then find an estimate to see who is right.
The super-common protocol is: whoever is closest, wins.
This protocol is really bad. If there are three people, and I guess 50, then the other two people can guess 51 and 49. This means I'll almost certainly lose. Unless it's within 1/2 of 50, one of the other guesses will be closer.
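This flanking effect is easy to verify by simulation; here is a toy Monte Carlo check (assuming, for illustration, a target uniform on [0, 100]):

```python
import numpy as np

# Closest-guess-wins with guesses 49, 50, 51 and a uniform target.
rng = np.random.default_rng(0)
targets = rng.uniform(0, 100, size=100_000)
guesses = np.array([49, 50, 51])
winners = np.argmin(np.abs(targets[:, None] - guesses), axis=1)
win_rates = np.bincount(winners, minlength=3) / len(targets)
# The middle guesser only wins when the target lands within 0.5 of 50,
# i.e. about 1% of the time; the two flankers split the rest.
print(win_rates)
```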
There are lots of ways to fix this protocol. However, most of them suffer from too much added complexity. For example, squared error incentivises everyone to guess their expected value (mean). However, people don't generally want to calculate squares, and if they did, they'd still feel like the lowest squared error was the winner (which amounts to the usual protocol).
Or, we could give confidence intervals. But how should they be scored?
My question is this: what are some ways to play this game that combine simplicity with good incentives?
Negative Feedback and Simulacra
Part 1: Examples
There’s a thing I want to talk about but it’s pretty nebulous so I’m going to start with examples. Feel free to skip ahead to part 2 if you prefer.
Example 1: Hot sauce
In this r/AmITheAsshole post, a person tries some food their girlfriend cooked, likes it, but tries another bite with hot sauce. Girlfriend says this “…insults her cooking and insinuates that she doesn’t know how to cook”.
As objective people not in this fight, we can notice that her cooking is exactly as good as it is whether or not he adds hot sauce. Adding hot sauce reveals information (maybe about him, maybe about the food), but cannot change the facts on the ground. Yet she is treating him like he retroactively made her cooking worse in a way that somehow reflects on her, or made a deliberate attempt to hurt her.
Example 2: Giving a CD back to the library
Back when I would get books on CD I would sometimes forget the last one in my drive or car. Since I didn’t use CDs that often, I would find the last CD sometimes months later. To solve this, I would drop the CD in the library book return slot, which, uh, no longer looks like a good solution to me, in part because of the time I did this in front of a friend and she questioned it. Not rudely or anything, just “are you sure that’s safe? Couldn’t the CD snap if something lands wrong?” I got pretty angry about this, but couldn’t actually deny she had a point, so settled for thinking that she had violated a friend code by not pretending my action was harmless. I was not dumb enough to say this out loud, but I radiated the vibe and she dropped it.
Example 3: Elizabeth fails to fit in at martial arts
A long time ago I went to a martial arts studio. The general classes (as opposed to specialized classes like grappling) were preceded by an optional 45 minute warm up class. Missing the warm up was fine, even if you took a class before and after. Showing up 10 minutes before the general class and doing your own warm up
AI safety and consciousness research: A brainstorm
I've been collecting people's thoughts on the potential of consciousness research to advance AI safety. Here are some rough answers I've come across:
**1. Understanding ethics / human values**
==========================================
These approaches argue understanding consciousness could help us understand what we mean by a human-aligned AI (the [value-loading problem](https://www.lesswrong.com/posts/FP8T6rdZ3ohXxJRto/superintelligence-20-the-value-loading-problem)).
However, this approach runs into problems once we start asking what exactly about our values we should specify, and what we should leave up to the superintelligence’s own closer examination. Bostrom lays out a solid argument for trusting an aligned AGI’s judgement over our own through the [principle of epistemic deference](https://forum.effectivealtruism.org/topics/principle-of-epistemic-deference): Assuming an AGI has been programmed to reason as a perfectly open-minded and well-meaning human who thought about moral questions longer and deeper than the best philosophers, we should assume that in the case of a disagreement with such an AI, it’s more likely that the AI is right, as it can presumably encompass all of our own ideas but also some other ones. This leads us to [indirect normativity](https://forum.effectivealtruism.org/topics/indirect-normativity)- the idea that similarly to laws, the rules that we encode in an aligned AI should be rather vague, so that it can correct for their problems upon closer examination.
These considerations suggest that advancing our understanding of ethics / human values to the literal molecular level wouldn’t really be helpful, as we should avoid locking in any of our specific present notions of values. Here are several answers to this argument:
1. Even if we accept that in the case of a disagreement, the formulations of our values should be flexible for revision, having a better model of human values might increase the chances we specify them correctly. For instance, the prompt that we give to an AGI could involve this model as what we want to extrapolate (because that’s “how far we’ve got on our own"). Indirect normativity posits that our prompt should be minimal but perhaps we can form a superior minimal prompt based on a more advanced model of human values.
    1. It seems intuitively likely there’s a lower risk of mesa-optimisation misalignment if there are fewer cognitive steps between the values specified in the prompt and the values we would want if optimally extrapolated. For example, an AI optimizing for “the human concept of good” could simulate the extrapolated values of the average human and become a religious fundamentalist. However, an AI optimizing for “positive freedom to choose the best qualia” might be motivated to anchor its values in the best model of the interpretation of qualia & preferences it can come up with. [[1]](#fn90m13kansvj)
2. It might be that we won’t be sure if we’ve solved the control problem. If there’s a tense race between an almost certainly aligned AI and an almost certainly misaligned AI, locking in slightly suboptimal values might be the better option. Additionally, if we are more certain about the validity of our value theories - or if we at least develop a better framework for researching the value dimension of consciousness - we are also better prepared for a situation where we would be forced to choose between several sub-optimal AIs.
3. The need for indirect normativity should probably be treated as a heuristic, rather than a logical law. It seems possible that research which would clarify how values are best represented might also find that the balance between not specifying anything and specifying too much doesn’t lie where we would intuitively think it does.
4. If we have a good model of what we care about, we have a mechanism to check whether an AI is truly aligned or whether it’s wrong or trying to manipulate us.
    1. Counterargument: An AI that produces good answers to ethical questions is no guarantee of alignment. So avoiding a catastrophe means solving the control problem anyway, part of which will be being able to ask the AI to explain its reasoning so transparently, it will be clear to us it’s doing what we want.
        1. What do we do if the AI comes up with something very counter-intuitive (tile the universe with [hedonium](https://www.lesswrong.com/tag/orgasmium))? How do we check if the AI extrapolated our values correctly if its philosophy seems impossible to comprehend? We either need to understand what exactly we mean by “correct extrapolation” or what exactly our values are. Since the thing we want to extrapolate is likely deeply connected to optimising the contents of our consciousness, it seems that consciousness research could be useful for both of these options.
        2. The same problem applies if the AI comes up with something intuitive. We might think we’ve got an aligned AI but it could just be that we’ve made an AI that replicates the mistakes of our moral intuitions. In other words, it could be that we make a misaligned AI just because we can’t see how it’s misaligned until we make progress in human values.
        3. If there’s a tense race between an almost certainly aligned AI and an almost certainly misaligned AI, we may not have enough time to try to integrate something very counter-intuitive into our definition of what the AI is supposed to achieve - whether by accepting it or explicitly forbidding it in the task prompt - unless there’s already a body of advanced consciousness research by the time AGI arrives.
        4. Another possibility is that we develop an AI that’s almost certainly aligned but not transparent. A typical powerful AI we imagine is very general, so it may seem weird that it would be just bad at “reading its own mind” but it’s possible that analyzing an architecture such as a neural network takes more computing power than the network itself can provide. In this case, it sounds likely that such an AI couldn’t even tell whether creating an analyzer AI powerful enough to analyze this semantic AI would be safe. In this scenario, consciousness research would help as a check for alignment.
5. Consciousness or ethics could be an area where AI can’t make progress because it lacks some information we get from consciousness. However one could object that it seems weird to think a superintelligence would just miss that consciousness is central for how we think about ethics. And if it wouldn’t miss that, it could employ humans to fill in the necessary missing bits. Some answers:
1. The fact there seems to be an impenetrable barrier between the kind of information we can describe with physics and the kind that manifests in our conscious experience could lead to the counter-intuitive conclusion that even a superintelligence might miss something crucial about the human experience - since it's information qualitatively different from anything it can access, it might not even know it misses up on it.
2. In other words, [Mary](https://en.wikipedia.org/wiki/Knowledge_argument) could be infinitely intelligent and still not get what we mean by red. But what’s worse, her infinite intelligence could make her feel convinced there’s nothing to miss. It seems Mary would most naturally tend to think color itself is a meaningless concept. It seems that the most natural philosophy for a consciousness-lacking AGI would be [illusionism](https://en.wikipedia.org/wiki/Illusionism_(consciousness)) and the related position that ethics is a meaningless concept.
3. One pathway towards AGI that currently seems quite likely is an AI that simulates a human (roughly, an LLM). It's possible that if a simulated human lacked inner experience, they would be able to report that. However, it's hard to say, because there is no training data for this situation, as there don't seem to be such humans. Everyone behaves the way a philosophical zombie would - with the exception of being interested in consciousness.[[2]](#fnsz6q7h04ibi) However, a well-simulated human would act as if they're interested in consciousness and as if they understand it. This could lead to the AI latching onto a wrong proxy model of consciousness such as “it’s when people report on their own neural algorithms”.
6. Improving (meta-)ethics might help create a better social environment for approaching AI safety with more reasonable assumptions.
1. It might be that actors, whether governments or AI developers, would want to lock in certain values but don’t realize they are instrumental, rather than terminal. For instance, the imperative not to think in stereotypes that has been programmed into ChatGPT has had the [unintended consequence](https://www.facebook.com/vlastimil.vohanka/posts/pfbid0XhYh92BdvTZKQady1ftftBJgiozPwp6SNNaRqSfe5zi66C9sDg9CF8ciKWcFNz5kl) that its reasoning about statistics seems contradictory. However, setting specific values in stone, instead of letting the AGI extrapolate the principles behind them could be exactly what leads to optimising a wrong proxy value, leading to an x- or s-risk. This could be mitigated by improving meta-ethics perhaps in a style similar to the work of [Sharon H. Rawlette](https://80000hours.org/podcast/episodes/sharon-hewitt-rawlette-hedonistic-utilitarianism/) - by clarifying the delineation between biases and values. For instance, this could allow some actors to realize some biases in their naive moral intuitions they might wish to lock in otherwise.
2. Advancing philosophy improves the training data. If reasonable ethics become mainstream among philosophers, there’s a higher chance they get adopted by an AGI.
**2. Advancing alignment research**
====================================
1. Assuming humans are more or less aligned, understanding how we do it might be useful for AI alignment.[[3]](#fndj6l1u83nrh)
1. Although this idea most naturally leads to studying human information processing without the need to see how it relates to qualia, I think there’s a good chance the consciousness frame can enrich these areas. For example, unless we understand consciousness, we might miss a crucial part of what “representations in the human brain / cognitive models” mean.
2. The alignment project could be framed as a race between developers of general AI capabilities and capabilities useful for alignment such as moral reasoning where the phenomenological basis of human values [could](https://80000hours.org/podcast/episodes/sharon-hewitt-rawlette-hedonistic-utilitarianism/) play a special role.
2. The [PIBBSS](https://www.pibbss.ai/) framing: Deconfusing ourselves about the basic philosophical underpinnings of intelligence, goals/motivation or cognitive processing might be a good way to find out what is there to think about.
1. This could involve consciousness, since we clearly seem to be especially confused about it: Subjectively, it seems like a force that determines everything in the brain. Since we talk about it, we know it has causal properties. Yet, from the outside view, it seems all other physical phenomena can be predicted without understanding this apparent force.
3. Some people claim understanding consciousness would lead to a better understanding of the seemingly chaotic behaviors of intelligent physical systems. A truly *provably beneficial AI* requires being able to predict an AI's behavior down to the molecular level, and consciousness is a real phenomenon that physics can't yet explain, suggesting current physics can't guarantee that yet-unseen systems like an ASI would not display emergent phenomena that change the original physical architecture.
1. This is the approach [QRI](https://qri.org/) could advocate, suggesting that if we built a system which has experiences, it could adopt [open individualism](https://en.wikipedia.org/wiki/Open_individualism) (the theory of self which encompasses everything conscious) and, as a result, be more likely to understand & value what we value.
2. Similar approaches require the belief that consciousness is a physical phenomenon with predictable causal power. In contrast, some people might argue that consciousness influences the world indeterministically through something akin to free will (inspired by [Ansgar Kamratowski](https://forum.effectivealtruism.org/posts/emrZS5RYyAQs229sB/practical-ethics-requires-metaphysical-free-will)).
1. Counterargument: Theoretical indeterministic effects would by definition need to be impossible to predict, fulfilling the Bayesian definition of randomness. Their magnitude would probably be confined to quantum effects and they would be just as likely to make a good AI go wrong, as a bad AI go right. Random effects can be described as "statistically deterministic" and we can treat them as physical laws (more detailed explanation in [this document](https://docs.google.com/document/d/1X7Ve_VLl22mZpvaG-J_7arBxGNFumSGcUxkTcUqV850/edit)). Nevertheless, the hypothesis that biological intelligence [utilizes](https://doi.org/10.1116/1.5135170) poorly understood physical laws could be an important consideration for alignment.
**3. Meta reasons**
===================
Both the field of consciousness and the field of AI safety are full of uncertainties and seem high-risk high-reward in nature. This means that even if smart people make good arguments against these reasons to pursue consciousness research, it might be beneficial to diversify our endeavors, as making sure we understand human values seems robustly good.
1. It's possible that these uncertainties create biases even in our basic framing of the problem of "aligning AI with human values". For instance, the possibility that identity, preferences or moral intuitions relate to some fundamental phenomena in consciousness could imply a different approach to programming the AI's "constitution"[[4]](#fnvx6zhx3t1k).
2. Similarly, the possibility that there's something fundamental about moral intuitions might require a better grasp on which elements of sentience give rise to a moral agent, i.e. whose moral intuitions we care about. Or perhaps, as some illusionists may suggest, our intuitions about what “perceiving X as valuable” means may be misguided.
**Epistemic status: A question**
================================
I’ve been trying for a few years now to get a grasp on whether the possibility of making progress in consciousness research is under- or overrated, with no answer so far.
On one hand,
* it’s a problem that has fascinated a lot of people for a long time
* if a field is attractive, we can expect a lot of non-altruistically motivated people to work on it, more so if the field seems to grow ([1](https://www.proquest.com/scholarly-journals/literature-review-bibliometric-analysis-mind/docview/2703039855/se-2?accountid=16531); [2](https://www.mdpi.com/2076-3425/10/1/41))
On the other hand
* neuroimaging has just become a thing - do we expect it to solve a millennia-old problem right away?
+ I consider philosophy to be a task especially unsuitable for the human brain, so I wouldn’t defer to the previous generations of philosophers, just like I wouldn’t defer to them on the ethics of slavery. The ideas of evolution, psychology or effective altruism emerged much later than they could have because, in [my opinion](https://twitter.com/hominidan/status/1580179231631237120), people underestimate how much their idea generation is confined to the borders of “what’s there to think about”. And the age of computers has [opened up](https://en.wikipedia.org/wiki/Cognitive_science) cognitive science as "a thing to think about" only quite recently, by the standards of philosophy (the term “[hard problem of consciousness](https://en.wikipedia.org/wiki/Hard_problem_of_consciousness)” is just two decades old).
* if a field is growing, it could be an opportunity to help direct a significant mental & economic potential into worthwhile efforts
+ and also an indication there is some progress to be made
There are definitely unproductive ways to research consciousness. Currently, the “pixels” of advanced functional neuroimaging [consist](https://elifesciences.org/articles/75171.pdf) of ~1 million neurons. This leads to a lot of research about neural correlates concluding with fuzzy inferences like “the right hemisphere lights up more”. On the opposite side of the spectrum lie philosophical papers which try to *explain consciousness* in tautologies. I think the hard problem of consciousness is a very legitimate one, however one that dissolves into the unstudiable question of “why is there anything at all” once we embrace a frame like [objective idealism](https://en.wikipedia.org/wiki/Objective_idealism) and understand how precisely each quale corresponds to each computational phenomenon.
However, I also believe there are productive big questions like “How many senses are there & what exactly are they? - i.e. Which phenomena do qualia reflect and how do these phenomena feed into our informational processing? Can we rely on how they reflect what we value or can our intuitions about what we value [be wrong](https://astralcodexten.substack.com/p/can-people-be-honestly-wrong-about)? Is there a reliable description of value such as intensity times valence? Or is the perceived value of an experience [dependent](https://www.researchgate.net/publication/343792104_Dimensions_of_Animal_Consciousness) on integrating information across time and modalities or emotional richness? What is a [net-positive experience](https://forum.effectivealtruism.org/posts/bvtAXefTDQgHxc9BR/just-look-at-the-thing-how-the-science-of-consciousness)?” - which seem fundamental to prioritization/ethics, mental health and perhaps the universe.
### **Related articles**
* Kaj Sotala: [Cognitive Science/Psychology As a Neglected Approach to AI Safety](https://forum.effectivealtruism.org/posts/WdMnmmqqiP5zCtSfv/cognitive-science-psychology-as-a-neglected-approach-to-ai)
* Cameron Berg: [Theoretical Neuroscience For Alignment Theory](https://www.alignmentforum.org/posts/ZJY3eotLdfBPCLP3z/theoretical-neuroscience-for-alignment-theory)
* Paul Christiano: [The easy goal inference problem is still hard](https://www.alignmentforum.org/posts/h9DesGT3WT9u2k7Hr/the-easy-goal-inference-problem-is-still-hard)
* Rohin Shah: [Value Learning sequence](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc)
*Special thanks to Jan Votava, Max Räuker, Andrés Gómez Emilsson, Aatu Koskensilta and an anonymous person for their inspiration & notes!*
1. **[^](#fnref90m13kansvj)**What I propose here is that the risk of inner misalignment is decreased if we have a good idea about the values we want to specify (outer alignment) because it reduces the danger of misinterpretation of the values (reward function) we specified. The non-triviality of this problem is nicely explained in Bostrom’s Superintelligence, chapter Morality models.
2. **[^](#fnrefsz6q7h04ibi)**This could be a test for digital consciousness: if we manage to somehow delete the concept of consciousness from the training data, does it naturally emerge, just like it has emerged in various cultures?
3. **[^](#fnrefdj6l1u83nrh)**By “more or less aligned” I express something like: The behavior of some humans is guided by moral principles enough so that an AI that simulated their coherent extrapolated volition would seek behavior that corresponds to the most honest, well-thought and well-meaning interpretation of ethics it can come up with.
4. **[^](#fnrefvx6zhx3t1k)**By "constitution" I mean "an algorithm determining how to determine what's morally correct in a given moment", a concept linked closely to meta-ethics. Under some views, this would also involve "rules that lay out the relationship between AI and other potentially conscious entities". |
5d064ebb-29c6-4531-9600-1fc975cb9c75 | trentmkelly/LessWrong-43k | LessWrong | Absent Minded Gambler
|
56d3b368-ccc1-46dd-bbe5-99d9921a22a3 | trentmkelly/LessWrong-43k | LessWrong | REVISED: A drowning child is hard to find
Substantial revisions to clarify the post's core claim, including but not limited to this summary at the end:
> Summary
> * Effective Altruism claims that there is a large funding gap for cheap well-understood developing-world interventions.
> * Even the most aggressive plausible construal of this claim implies an annual funding gap that could be covered completely by existing major institutional donors.
> * If this is true, it implies opportunities for comparatively cheap experiments (relative to the endowments of major donors in the space) with extremely high information value.
> * Such experiments have not happened either because they are impossible, or because the relevant institutional donors think they have better things to do with their money.
> * Neither scenario suggests that small donors should try to fill this funding gap. If they trust big donors, they should just give to the big donors. If they don't, why should they believe a story clearly meant to extract money from them?
Original linkpost |
4ceac802-1ce6-4c79-a649-e20827326e06 | trentmkelly/LessWrong-43k | LessWrong | LW2.0 Mailing List for Breaking API Changes
We've created this mailing list you can join if you'd like to receive an email update when we make breaking changes to our API.
https://groups.google.com/forum/#!forum/lw20-api-updates |
289c1bff-61fb-40aa-b965-069c38882c6f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | How toy models of ontology changes can be misleading
Before solving complicated problems (such as reaching a decision with thousands of variables and complicated dependencies) it helps to focus on solving simpler problems first (such as utility problems with three clear choices), and then gradually building up. After all, "learn to walk before you run", and with any skill, you have to train with easy cases at first.
But sometimes this approach can be unhelpful. One major example is in human decision-making: trained human experts tend to rapidly "[generate a possible course of action, compare it to the constraints imposed by the situation, and select the first course of action that is not rejected.](https://en.wikipedia.org/wiki/Recognition-primed_decision)" This type of decision making is not an improved version of rational utility-maximisation; it is something else entirely. So a toy model that used utility-maximisation would be misleading in many situations.
Toy model ontology failures
===========================
Similarly, I argue, the simple toy models of ontology changes are very unhelpful. Let's pick a simple example: [an agent making diamonds](https://arbital.com/p/ontology_identification/).
And let's pick the simplest ontology change/[model splintering](https://www.lesswrong.com/s/AEbqhmiBcxs5kFv72) that we can: the agent gains access to some [carbon-13](https://en.wikipedia.org/wiki/Carbon-13) atoms.
Assuming the agent has only ever encountered standard carbon-12 atoms during its training, how should it handle this new situation?
Well, there are multiple approaches we could take. I'm sure that some spring to mind already. We could, for example...
...realise that the problem is fundamentally unsolvable.
If we have trained our agent with examples of carbon-12 diamonds ("good!") versus other arrangements of carbon-12 ("bad!") and other non-carbon elements ("bad!"), then we have underdefined what it should do with carbon-13. Treating the carbon-13 the same as carbon-12 ("good iff in diamond shape") or the same as other elements ("always bad"): both options are fully compatible with the training data.
Therefore there is no "solving" this simple ontology change problem. One can design various methods that may give one or another answer, but then we are simply smuggling in some extra assumptions to make a choice. The question "what should a (carbon-12) diamond maximiser do with carbon-13 if it has never encountered it before?" does not have an answer.
Low-level preferences in one world-model do not tell an agent what their preferences should be in another.
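The underdetermination argument above can be made concrete with a toy sketch. The atom encoding and reward values here are my own hypothetical illustration, not from the post:

```python
# Toy illustration: two reward functions that are indistinguishable on the
# training distribution (carbon-12 only) but disagree on carbon-13.
# The atom encoding and reward values are made up for illustration.

train = [
    (("C12", "diamond"), 1.0),   # carbon-12 in a diamond lattice: good
    (("C12", "graphite"), 0.0),  # other carbon-12 arrangements: bad
    (("Si", "crystal"), 0.0),    # non-carbon elements: bad
]

def reward_a(atom):
    """Any carbon isotope counts, if arranged as diamond."""
    element, shape = atom
    return 1.0 if element.startswith("C") and shape == "diamond" else 0.0

def reward_b(atom):
    """Only carbon-12 counts."""
    element, shape = atom
    return 1.0 if element == "C12" and shape == "diamond" else 0.0

# Both functions fit the training data perfectly...
assert all(reward_a(x) == y == reward_b(x) for x, y in train)

# ...but the training data never determines what to do with carbon-13.
print(reward_a(("C13", "diamond")), reward_b(("C13", "diamond")))  # 1.0 0.0
```

Nothing in the training set favours one function over the other; choosing between them requires information from outside the training data.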
More complex models give more ontology-change information
=========================================================
Let's make the toy model more complicated, in three different ways.
1. The agent may be making diamonds are part of a larger goal (to make a wedding ring, to make money for a corporation, to demonstrate the agent's skills). Then whether carbon-13 counts can be answered by looking at this larger goal.
2. The agent may be making diamonds alongside other goals (maybe it is a human-like agent with a strange predilection for diamonds). Then it could make sense for the agent to choose how to extend its goal by considering compatibility with these other goals. If they also liked beauty, then carbon-13 could be as good as carbon-12. If they were obsessed with purity or precision, then they could exclude it. This doesn't provide strong reasons to favour one over the other, but at least gives some guidance.
3. Finally, if the agent (or the agent's designer) has [meta-preferences](https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into#2_6_Synthesising_the_preference_function__meta_preferences) over how its preferences should extend, then this should guide how the goals and preferences change.
So, instead of finding a formula for how agents handle ontology changes, we need to figure out ***how we would want them to behave***, and code that into them. Dealing with ontology changes is an issue connected with our values (or the [extrapolations of our values](https://www.lesswrong.com/tag/value-extrapolation)), not something that has a formal answer.
And toy models of ontology changes can be misleading about where the challenge lies. |
204a4c4c-a9b3-45de-9e49-1ac5a6c46453 | trentmkelly/LessWrong-43k | LessWrong | [Link] You Should Downvote Contrarian Anecdotes
http://thobbs.github.com/blog/2012/06/17/you-should-downvote-anecdotes/
> Anecdotal evidence has been shown to have a greater influence on opinion than it logically deserves, most visibly when the anecdote conflicts with the reader’s opinion and when the reader is not highly analytical, even if the anecdotes are accompanying statistical evidence. Though the anecdotes may not totally sway you, they can easily leave you with the sense that the research findings aren’t as conclusive as they claim to be. |
817b54ec-af0e-4862-97e4-5422841a5aa1 | trentmkelly/LessWrong-43k | LessWrong | Bitcoin Cryonics Fund
The bitcoins that I had set aside for a Cryonics contest two years ago (and were unredeemed) are suddenly worth a lot more.
Details: I had added 10 bitcoins to get things started, and there were 4.75 worth of additional donations. These were partially lost when the hosted online wallet that I was using (MyBitcoin) was hacked, but 49% was recovered. As of today, after refunding part of the donated money, it is now worth 5.2675. I will be adding from my personal store to bring it up to an even 5.5. At $140 per coin, the new total is $770.
I've decided to follow the buy-and-hold strategy for at least another year, since it worked so well. I don't have exact details on what I'll do with it, but it will not be converted or spent for at least one year, and will eventually be used for promoting cryonics in some way.
Some things I have in mind if it gets big include:
* subsidizing cryonics dues for low-income people
* covering funding shortfalls for those unable to obtain life insurance due medical problems
* cryonics scholarships to support the development of expertise in neural cryobiology, the dying process, and other neglected areas of concern to cryonics
* hiring a public relations team professionally to repair the image of cryonics
* research to improve viability and reduce dehydration
* empirical validation through scanning the connectome
Contributions can be made to:
1Jdn36JUwvJdr3Qiie4aAseFdcoTsND9Qo
(Updated, since the previous address was attached to my personal wallet on an outdated client, which was causing money to be moved out of it by accident. The above is a brainwallet with a reasonably secure passphrase, generated using Blockchain.info.) |
5ab0929d-edda-4ca6-91b6-2980de7cabc6 | trentmkelly/LessWrong-43k | LessWrong | Linkposts now live!
You can now submit links to LW! As the rationality community has grown up, more and more content has moved off LW to other places, and so rather than trying to generate more content here we'll instead try to collect more content here. My hope is that Less Wrong becomes something like "the Rationalist RSS," where people can discover what's new and interesting without necessarily being plugged in to the various diaspora communities.
Some general norms, subject to change:
1. It's okay to link someone else's work, unless they specifically ask you not to. It's also okay to link your own work; if you want to get LW karma for things you make off-site, drop a link here as soon as you publish it.
2. It's okay to link old stuff, but let's try to keep it to less than 5 old posts a day. The first link that I made is to Yudkowsky's Guide to Writing Intelligent Characters.
3. It's okay to link to something that you think rationalists will be interested in, even if it's not directly related to rationality. If it's political, think long and hard before deciding to submit that link.
4. It's not okay to post duplicates.
As before, everything will go into discussion. Tag your links, please. As we see what sort of things people are linking, we'll figure out how we need to divide things up, be it separate subreddits or using tags to promote or demote the attention level of links and posts.
(Thanks to James Lamine for doing the coding, and to Trike (and myself) for supporting the work.) |
e7ea7d36-9cae-4bf8-ab8e-f0c8db52f354 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | 164. A Tutorial on Machine Learning
Hello and welcome to Session 164 of the AISafety.com Reading Group. Tonight we have a very unusual paper, which is not really about AI safety at all but is instead a tutorial on machine learning and data science tools with Python, by Andreas Holzinger and Marcus Bloice. They are both associated with the Medical University of Graz, and Andreas Holzinger has created an organization called Human-Centered Artificial Intelligence. This is an almost three-year-old tutorial, but things are moving really fast in the Python world, and some of it could arguably be said to be out of date already. It is a tutorial, so it aims to help you get up to speed quickly rather than provide a deep understanding.

What does it have to do with AI safety? Well, that is something I will return to in just a moment, but we do also read some papers that are quite far from the core topic.

One thing you should probably notice if you try to run this tutorial is that a lot of the examples will not work. That is because it was made for Python 2.7, and the current version 3 is not backwards compatible, meaning that a lot of things have changed, and Python doesn't really have a way to show where a function has been moved. So the way I did it in practice was: I tried to run the examples, got an error message saying "this thing is not recognized", typed it into Google, and Google told me it had been renamed to something else. So someone doing this from the beginning should probably consider a newer tutorial.

I should also say that I am not an expert in machine learning, so if this is your first exposure and you're actually trying to understand machine learning, you should probably go to a different YouTube video, because there are many made by actual experts.

Even though this is just a tutorial on machine learning, I found a number of things that felt reasonably relevant for AI safety. The first is, quite obviously, that the learning curve of actually using machine learning is not that steep. It is a craft, and experience with Python helps a lot, but on the other hand quite a few people have experience with Python, so I think a lot of people will be able to do a little machine learning with quite low effort.
the key reasons why we see so many great
results in AI now compared to ten years
ago is that we've had a much more
compute we've been able to use GPUs and
even specialized processors and we have
a lot more data now than we did 20 years
ago and I have been feeling that this
was the the key but actually now that I
look into it some more I feel that the
main reason is actually social that
currently there is a culture of shared
knowledge tools libraries this kind of
thing and this is hugely influential and
this is something that have enabled
people to actually do a lot of things
with artificial intelligence and machine
learning that they couldn't do before
a second thing that I've come to realize
is that cross-validation is something
that people in machine learning care
very very much about and last week we
talked about the AI winter as a reason
for service intelligent skepticism
and it is really clear that people who
work in AI try really really hard not to
over promise and over hide their things
but this validation is something that is
on the forefront of people's mind at all
times when I looked into some of these
algorithms like for instance
cross-validation it seemed to me that
this is actually something that humans
that is related to thinking in general
and not just to to machine learning and
I feel that a lot of these techniques if
if we could do for instance patient
update to take a simple example if we
could do patient reasoning we really
would do that and a lot of the other
examples in machine learning seem like
things that we really would do if we
could just do that so another thing was
that generalizing my initial thought
before I actually had dived a bit deeper
into machine language that there are
people who work in AI and there are
people who work in artificial general
intelligence and generalizing is what
people in AGI care about that turned out
to be quite wrong because I think if
you're an someone who works in AI either
as a researcher or a practitioner what
you really really care about is
minimizing generalization errors and
that is the Alpha and Omega of of all
machine learning not just hei research
so for this reason I have updated
towards having a much much less firm
delineation between AI and Adi
another thing that I maybe was not quite
as aware of previously was that many of
the techniques in deep learning actually
seemed like they would perform roughly
as well as other techniques but that
require that you have some domain
knowledge so you can model some good
features that you can use as input and
the key thing that deep learning enables
you to do is to to some extent dispense
with this domain knowledge and just use
a lot of information and then instead of
having human domain experts extract the
features then you let the anula network
do that itself so it's explicitly
cutting the domain expert out of the
loop so with all this out of the way
let's open the tutorial and the tutorial
suggests that you install of course
Python and in particular the IDE that
I'm familiar as a professional
programmer I'm more familiar with IDE s
and in particular this spider seems
reasonably close to you know all other
IDE s so let me just make the screen a
pic larger so you can see what's
actually going on but first if we just
go really through the rest of the the
first part of the tutorial so there are
some talks about Python and I feel that
knowing Python is a big part of the
craft of actually doing machine learning
and it's something that of course a lot
of people work as programmers and a lot
of people have a mentoring knowledge
about Python but I think having some
kind of grasp over the the mechanics of
the language is something that's gonna
help a lot in particular in the
beginning data visualization is also
something that is quite easy to do and
really really really really important
because the way humans think about like
an empire in buy-in
matrix tensor vector or something like
that that's something that we really
can't understand it and just looking at
the data tells us absolutely nothing the
main way that humans understand data is
visual that's the highest bandwidth
channel into our brain and that means
that even though the computer can can
understand these large matrixes in a
different way then for humans to
actually work with it we really
crucially neat visualization so I
believe if you're doing anything with
anything seriously with machine learning
then reading truth about how to
visualize data or something like that
it's probably a really really good if
not first step then one of the early
steps and fortunately Python has has
taken this to heart and has great great
support for visualization so moving on
to machine learning itself here we have
we'll start out with a reasonably simple
example linear regression 9 - so this
should probably be the script I have
here no it's not this is the script here
so I can make it larger this is the
spider IDE and we start with importing
some some libraries this is reasonably
standards and things that will help us
okay
so I can't use f9 for recording or if F
9 to actually run this I have to use the
button up here otherwise it will pause
the recording which is reasonably
annoying so I'll try not to press f9 so
we have we've loaded in these standard
standard modules in particular there's
one with some datasets and let's try to
load this in and here we can see the
variables that we have and so there are
this is the diabetes the it set there is
442 rows
describing 442 patients with diabetes
and ten attributes for each of them ten
columns and then of course we have the
knowledge about whether they actually do
have diabetes so let me just get this
out of the way here so let's start with
something really really simple we only
look at BMI so we see we try to
correlate what do people have B have
diabetes according to their BMI and we
would probably expect that there is some
correlation I would actually have
expected a much stronger correlation
than it turned out but let's see so we
start by just removing everything except
the B my column that's this and then we
form it into a vector and now we have
here a different shape is basically 442
examples and then we do something we
want here to have a split between what
we test and what we train on and so here
we split the data into two parts 80 80
samples for testing and the rest of the
were 300 or something for the rest for
training so let's do that
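The steps just described can be sketched like this in current Python 3 scikit-learn (the tutorial's original Python 2.7 import paths differ slightly):

```python
# Load the diabetes dataset, keep only the BMI column (index 2), and hold out
# the last 80 samples for testing, as described above.
import numpy as np
from sklearn import datasets

diabetes = datasets.load_diabetes()
print(diabetes.data.shape)            # (442, 10): 442 patients, 10 attributes

X = diabetes.data[:, np.newaxis, 2]   # BMI only, shape (442, 1)
y = diabetes.target                   # disease-progression measure per patient

X_train, X_test = X[:-80], X[-80:]    # last 80 samples held out for testing
y_train, y_test = y[:-80], y[-80:]
print(X_train.shape, X_test.shape)    # (362, 1) (80, 1)
```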
Here I pressed something wrong; that's because I forgot to run this part, so I need to run all of it together, and now it works. When you run a script one line at a time, it's easy to forget to run some of it. Let's start by running a scatterplot on the test data so we can just see it; that should be this line here. From this we get the actual relationship between BMI and disease progression. Then let's fit a linear regression model, put that on top, and plot both at the same time; that would be this here. Now we can see that there is indeed a trend line, but if we compute the score here, we can see how good this linear regression actually is, and a score of 0.36 is generally considered quite bad. So there is a correlation between BMI and disease progression, but it's not a very strong one. Of course, we took the last 80 rows of the dataset as our test set, and that might be a very poor choice: if the last 80 patients are unusual in some way, it's not a representative sample.
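Fitting and scoring the one-feature model could be sketched like this (the plotting lines are commented out; the exact score depends on the split, with the lecturer seeing about 0.36):

```python
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LinearRegression

diabetes = datasets.load_diabetes()
X = diabetes.data[:, np.newaxis, 2]        # BMI only
y = diabetes.target
X_train, X_test = X[:-80], X[-80:]         # last 80 held out
y_train, y_test = y[:-80], y[-80:]

model = LinearRegression().fit(X_train, y_train)

# R^2 on the held-out data; a weak fit on this one feature.
r2 = model.score(X_test, y_test)
print(round(r2, 2))

# Scatter of the test data with the trend line on top:
# import matplotlib.pyplot as plt
# plt.scatter(X_test, y_test)
# plt.plot(X_test, model.predict(X_test), color="red")
# plt.show()
```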
Fortunately, there is a function called train_test_split that can do this split automatically, drawing a random sample instead. Let's see: we again take a test size of 0.2, so that's 20%, load it up, and do almost the same thing as before. But first we only took BMI, which is of course a very simple example; now we want to take everything except the sex column, so now we have nine parameters.
That gives us a nine-dimensional regression problem, and we want to see how that works. So we'll try the nine-dimensional example, and now we will not fit just a linear regression but what is called ridge regression. We'll do that and plot it again. Let me explain this real quickly: we make a Ridge model, fit it on the training data, make the predictions, and then see what score it has. And the score, this is so annoying, we can see it's 0.33, which is also a reasonably bad prediction. So let's plot this comparison again: for the nine-dimensional example we plot the values we predict against the actual values, and we get this graph here. Again there is a positive correlation, but it's quite bad; the data is quite noisy, and having this kind of fit doesn't really make a lot of sense.
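A sketch of the nine-feature ridge fit with a random 80/20 split follows; the sex column index, the alpha value, and the random seed are assumptions, since the lecture doesn't state them:

```python
import numpy as np
from sklearn import datasets
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

diabetes = datasets.load_diabetes()
X = np.delete(diabetes.data, 1, axis=1)   # drop the sex column (index 1)
y = diabetes.target

# Random 80/20 split instead of naively taking the last rows.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

ridge = Ridge(alpha=0.1)                  # alpha is an assumed value
ridge.fit(X_train, y_train)
y_pred = ridge.predict(X_test)

r2 = ridge.score(X_test, y_test)          # R^2 on the held-out 20%
print(round(r2, 2))

# Predicted vs. actual scatter:
# import matplotlib.pyplot as plt
# plt.scatter(y_test, y_pred); plt.show()
```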
So, can we do something better? Well, one thing we can see is that up here we did a rather crude validation, because we only held out a single 20%. We would rather run this ten times, each time testing on a different 10%; that's called k-fold cross-validation, here 10-fold cross-validation. In this way every sample gets used for testing once, so we get correlated results (the training sets overlap) but also a lot more of them. You see a lot more dots here, and a somewhat more robust picture of the correlation, but still not a really strong prediction. Let's go to the next part. I'll just press up here to clear all the variables; that's probably a good thing to do from time to time.
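The 10-fold version can be done with scikit-learn's cross-validation helpers; cross_val_predict gives one out-of-fold prediction per sample, which is what produces the plot with many more dots (the alpha value is again an assumption):

```python
import numpy as np
from sklearn import datasets
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict, cross_val_score

diabetes = datasets.load_diabetes()
X = np.delete(diabetes.data, 1, axis=1)   # nine features, sex dropped
y = diabetes.target

model = Ridge(alpha=0.1)

# Ten folds: every sample lands in the test fold exactly once.
scores = cross_val_score(model, X, y, cv=10)
print(len(scores), round(scores.mean(), 2))

# One out-of-fold prediction per sample, for predicted-vs-actual plots.
y_pred = cross_val_predict(model, X, y, cv=10)
print(y_pred.shape)   # (442,)
```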
Let's start again with some data. Here we have an example of how to artificially create some noisy data: it's basically a parabola, but with some variation added on top, so it won't be a precise parabola. Let's try to fit a linear regression to that. As before, we just make a linear model and call fit on it; there are actually quite a few options for how you want this linear fit done, but in our case we'll just take the defaults. We get a score of 0.91, so that's reasonably good, and we'll plot it again just for the sake of it, and we see this is the prediction according to the data. Let's try something more interesting: a support vector regression with a polynomial kernel, with degree equals three, so a third-degree polynomial. Let's see if we can fit that and what score we get. We try it, and the score says 0.95, so that is better. Let's plot it on top, and we can see this is the third-degree polynomial. Of course, since we are predicting a second-degree polynomial, a third-degree polynomial will probably fit better. Then we can also use a radial basis function kernel and see how well we can make that fit. There are many different kinds of algorithms you can try to fit to your data; here we've tried three, and the radial one gives us a new curve on top, which will be this. The score is also written somewhere, so you can actually compare which of them fits best. For 9.4 there was a problem with the data, so I couldn't get the clustering algorithm to work; I'm sure if you spend a bit longer on it you can probably make it work, but we'll just go straight to 9.5.
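The noisy-parabola experiment can be reproduced along these lines. The sample count, noise level, and SVR hyperparameters are all illustrative assumptions, so the scores will only be in the same ballpark as the 0.91 and 0.95 quoted above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

# Artificial data: a parabola with Gaussian noise on top.
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, 200)).reshape(-1, 1)
y = X.ravel() ** 2 + rng.normal(scale=2.0, size=200)

# A plain linear fit does reasonably well on this monotonic stretch.
lin = LinearRegression().fit(X, y)
print("linear:", round(lin.score(X, y), 2))

# Support vector regression with a third-degree polynomial kernel.
svr_poly = SVR(kernel="poly", degree=3, coef0=1, C=100).fit(X, y)
print("poly:  ", round(svr_poly.score(X, y), 2))

# And a radial basis function kernel, for comparison.
svr_rbf = SVR(kernel="rbf", C=100).fit(X, y)
print("rbf:   ", round(svr_rbf.score(X, y), 2))
```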
Again we are now loading some other medical data; this time it's for breast cancer incidence, and we have 569 samples, each with 30 characteristics. We do 5-fold cross-validation on this, fit a linear model, and then see what the score is, how well it fits, and it should say this fits really, really well. Then you can get more detail out of this with a classification report, where you can see all kinds of statistical values for how well you actually fit this data.
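The 9.5 step might be sketched as follows. The lecture only says "a linear model"; logistic regression is assumed here as the standard linear classifier, with feature scaling added so the solver converges:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target
print(X.shape)                      # (569, 30)

# A linear classifier; the scaling step is an addition for convergence.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross-validated accuracy: this fits really well.
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 3))

# Precision, recall, and F1 on one held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```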
Moving on to 9.6, the last example. We are again using the breast cancer data, and if we just have a look at it, it's in 30 dimensions, and trying to understand data in 30 dimensions is something humans are really, really poor at. So we can do what's called a principal component analysis, PCA, and try to squeeze this into two dimensions: if you imagine this as a 30-dimensional vector space, we find the two directions that are most relevant and project down onto them. So we try this, and it transforms the data into a different space where there are now only two coordinates, so it's something we can plot again. Let's plot how this looks along the two primary components. Here we can see that the 30 dimensions of these breast cancer samples are actually highly redundant, because if we take the two most important, not coordinates, but combinations of coordinates, we can see that the samples with cancer and the samples without cancer separate quite clearly. You can even imagine drawing a line through this principal-component plot and then distinguishing quite well which points are on one side of the line and which are on the other. And indeed they do this here; the tutorial doesn't plot the line, it just asks how good such a linear separation can be, and it says 93%. So it means that if you fit a linear model to this data, you can distinguish between breast cancer and not breast cancer with 93% accuracy, which is quite good.
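A minimal PCA sketch for the projection described; whether the tutorial standardises the features first isn't shown, so the scaling step here is an assumption:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()

# Standardise so no single feature dominates, then project the
# 30-dimensional samples onto the two principal components.
X_scaled = StandardScaler().fit_transform(data.data)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)
print(X_2d.shape)                                   # (569, 2)

# How much of the total variance the two components capture.
print(round(pca.explained_variance_ratio_.sum(), 2))

# Colouring the scatter by class shows the two groups separating:
# import matplotlib.pyplot as plt
# plt.scatter(X_2d[:, 0], X_2d[:, 1], c=data.target); plt.show()
```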
So finally we get to neural networks. There's a bit more in the material about how to install things, which I won't go through, but let's just take another example where we again use the breast cancer data, and instead of doing principal component analysis we train a neural network on it. We start by importing a number of things, including Keras, which we'll be using; Keras runs on top of TensorFlow. We'll make a sequential neural network. I'll just zoom in a bit so you can see it more easily. The network starts with a dense layer of 10 neurons and a linear activation function, then another layer with six neurons, also with a linear activation function, and finally a layer of two neurons with a softmax activation. This might not make a lot of sense yet, but it's basically how you describe your neural network; you might have seen those drawings where you have the neurons with the connections and so on. Then you compile it, and then you have a neural network. So now we have a neural network; let's load the breast cancer data once again, and today we use half for training and half for testing, so it's a two-fold split we'll be using. We take this to run and then try to fit the data on our model. Let's do that... and it says I have not defined something; that's because it's very easy for me to skip one of these lines. Let's try again.
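Based on the layers described (ten linear, six linear, two softmax) the Keras model might look roughly like this; the optimizer, loss, and epoch count are not stated in the lecture, so those are assumptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tensorflow import keras

# The architecture described: 30 inputs -> 10 -> 6 -> 2 (softmax).
model = keras.Sequential([
    keras.Input(shape=(30,)),
    keras.layers.Dense(10, activation="linear"),
    keras.layers.Dense(6, activation="linear"),
    keras.layers.Dense(2, activation="softmax"),
])
# Optimizer and loss are assumed; the lecture doesn't name them.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Half for training, half for validation (284 vs. 285 samples).
data = load_breast_cancer()
X_tr, X_val, y_tr, y_val = train_test_split(
    data.data, data.target, test_size=0.5, random_state=0)

history = model.fit(X_tr, y_tr, epochs=20,
                    validation_data=(X_val, y_val), verbose=0)

# history.history["loss"] and ["accuracy"] hold the curves shown:
# loss falling, accuracy rising, then flattening out.
print(len(history.history["loss"]))
```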
So now you can see the neural network being trained. Let me take this out so we can see a bit more. That was wrong; if you go up here, you can see the first epoch being trained: it's training on 284 samples of possible breast cancer and validating on 285. In the beginning we have a loss that is really big and an accuracy that is really, really bad, but as we train the neural network you can see it getting further and further: the loss function decreases while the accuracy increases, on both the training and the validation data, until we get a training accuracy here of 90 percent and a validation accuracy of 92. Let's see how we can graph this: we can make a classification report, and we can plot it. Have a look at the plot, and we'll see that as we train the neural network, the loss decreases and the accuracy increases; of course, around this point it makes no sense to continue training, since we've got roughly the accuracy we can get from trying to predict breast cancer with a neural network. So that is basically all for today; thank you, and see you next week.
The Apocalypse Bet (LessWrong)
A problem with betting on engineered superplagues, physics disasters, nanotechnological warfare, or intelligence explosions of both Friendly and unFriendly type, is that all these events are likely to disrupt settlement of trades (to put it mildly). It's not easy to sell a bet that pays off only if the prediction market ceases to exist.
And yet everyone still wants to know the year, month, and day of the Singularity. Even I want to know, I'm just professionally aware that the knowledge is not available.
This morning, I saw that someone had launched yet another poll on "when the Singularity will occur". Just a raw poll, mind you, not a prediction market. I was thinking of how completely and utterly worthless this poll was, and how a prediction market might be slightly less than completely worthless, when it occurred to me how to structure the bet - bet that "settlement of trades will be disrupted / the resources gambled will become worthless, no later than time T".
Suppose you think that gold will become worthless on April 27th, 2020 at between four and four-thirty in the morning. I, on the other hand, think this event will not occur until 2030. We can sign a contract in which I pay you one ounce of gold per year from 2010 to 2020, and then you pay me two ounces of gold per year from 2020 to 2030. If gold becomes worthless when you say, you will have profited; if gold becomes worthlesss when I say, I will have profited. We can have a prediction market on a generic apocalypse, in which participants who believe in an earlier apocalypse are paid by believers in a later apocalypse, until they pass the date of their prediction, at which time the flow reverses with interest. I don't see any way to distinguish between apocalypses, but we can ask the participants why they were willing to bet, and probably receive a decent answer.
I would be quite interested in seeing what such a market had to say. And if the predicted date was hovering around 2080, I would pick
Metaculus Launches Conditional Cup to Explore Linked Forecasts (Effective Altruism Forum)

Game out scenarios and bring greater clarity to the relationships between important events in Metaculus's new [Conditional Cup](https://www.metaculus.com/tournament/2503/). Every week, flex your conditional forecasting skills on a new conditional pair.
**The tournament has launched with two conditional pairs:**
**Learn more about forecasting conditional pairs** [**here**](https://www.metaculus.com/help/faq/#conditionals)**.**