id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
4971ec0f-1ec0-404e-81d4-f8440fe31b6c | trentmkelly/LessWrong-43k | LessWrong | 🇫🇷 Announcing CeSIA: The French Center for AI Safety
Executive Summary
The Centre pour la Sécurité de l'IA[1] (CeSIA, pronounced "seez-ya") or French Center for AI Safety is a new Paris-based organization dedicated to fostering a culture of AI safety at the French and European level. Our mission is to reduce AI risks through education and information about potential risks and their solutions.
Our activities span four main areas:
* Academic field-building: We began our work by creating the ML4Good bootcamps, which have been replicated a dozen times worldwide. We also offer accredited courses at ENS Ulm—France's most prestigious university for mathematics and computer science—and the MVA master program, the only and first accredited AI Safety courses in France[2]. We are currently adapting and improving the course into an AI safety textbook.
* AI governance: We are actively engaged in informing AI policy through our work on the EU General-Purpose AI Code of Practice and contributions to the French AI Action summit. We advise the government on establishing the French National AI Evaluation Lab (the French AISI). Recently, we organized the Paris AI Safety symposium featuring keynote speakers Yoshua Bengio, Stuart Russell, and Dan Nechita.
* Technical R&D: CeSIA's current technical research focuses on creating benchmarks for AI safeguards systems, establishing the first framework of its kind evaluating the monitoring systems (which is different from evaluating the model itself).
* Public outreach: We advised prominent French YouTubers, with one resulting video reaching three million views. We published an Op-Ed with Yoshua Bengio.
Our team consists of 8 employees, 3 freelancers, and numerous volunteers.
Figure: An overview of how our current projects fit within a broader theory of change (in blue: CeSIA and its projects, in green: different actors and events, and what they do if we intervene on them, in red our four main focus).
By pursuing high-impact opportunities across multiple domains, we create synergistic |
f026b91a-21ab-49ee-8012-e5ba2bbc3a54 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Conditioning Generative Models with Restrictions
*This is a followup to* [*Conditioning Generative Models*](https://www.alignmentforum.org/posts/nXeLPcT9uhfG3TMPS/conditioning-generative-models) *based on further discussions with Evan Hubinger, Nicholas Schiefer, Abram Demski, Curtis Huebner, Hoagy Cunningham, Derek Shiller, and James Lucassen, as well as broader conversations with many different people at the recent ARC/ELK retreat. For more background on this general direction see Johannes Treutlein’s “*[*Training goals for large language models*](https://www.alignmentforum.org/posts/dWJNFHnC4bkdbovug/training-goals-for-large-language-models)*”.*
Background
==========
Previously, I wrote about ways we could use a generative language model to produce alignment research. There I proposed two approaches:
1. Simulate a superhuman alignment researcher by conditioning on them generating superhuman machine-verifiable results in a variety of related domains. Try to align this superhuman agent by crafting scenarios where a misaligned agent has an incentive to defect ("Honeypots").
2. Simulate humans performing alignment research for a long time, by conditioning on observations spaced many years apart ("Patient Research").
I think that both approaches are an improvement over doing nothing, but I suspect that no amount of honeypot construction actually ensures an aligned agent, and so it seems that simulating humans ("Patient Research") is the more promising direction.
Overview
========
If we truly have a perfect generative model I think the “Patient Research” approach really does well. We can set up a world where machine learning practice is strictly banned and where researchers spend thousands of years constructing a machine-verifiable scheme for AI alignment. The ban prevents that world from containing malign superintelligences, and the time-scale means that if alignment is something humans can solve, the model should produce that solution.
The problem is that we won’t have perfect generative models, so we’ll have to accept some limitations.
In this post I’ll explore what changes if we cannot condition on worlds too different from our own. The idea is to understand what happens if our generative model has trouble generalizing too far away from its training data.
Failure Modes
=============
The main *dangerous* failure mode with using generative models to produce alignment research is that we accidentally ask for a future that contains a deceptive AGI.
If we get such a future, the AGI might spoof observations to trick us into importing it into our world. For instance it could pretend to be a human producing really good alignment research, but produce work which is subtly flawed and, once implemented, allows it to take over the world.
There are generally two ways we hit this failure mode.
The first is that we ask for a prediction of the distant future. The future 50 years from now has a decent chance of containing a deceptive AGI, and if we naively ask for that future there’s a good chance we’ll simulate that AGI, be deceived, and import that AGI into our world.
The second way is to ask for a world that is very unlikely *in ways that make deceptive AGI more likely*. For instance, we could condition on observing that next year someone produces what appears to be a perfect solution to AI alignment. When we do this, the hypothesis “a deceptive AGI took over and pretended to solve alignment” becomes a lot more likely, and so the generative model is more likely to simulate that scenario.
The name of the game, then, is to craft strategies that simultaneously (1) make deceptive AGI’s less likely and (2) accelerate alignment research.
Ground Rules
============
Observations
------------
We have a generative model trained on observations of the world. The training objective is to predict later observations given earlier ones. We assume that the model is inner-aligned, so that it really does try to predict the future conditioned on observations.
Observations are multi-modal, and can be anything measurable about the world (e.g. images, video, audio, text, contents of an SD card, etc.). Predicted observations do not need to be made by actual sensors, but do need to be ones that could in principle be made were the right sensors in the right places.
Conditioning
------------
It is possible that our model could know how to construct arbitrary trajectories through time, and so might be able to produce good counterfactual histories. A worry though is that it won’t generalize fully, and in particular that it will over-index on observations in the training set.
For instance, it may be that counterfactual conditionals that *should* avert the industrial revolution nevertheless result in trajectories that happen to lead to industrialization so that airplanes continue to appear in the observations later on, because airplane featured prominently in the training set. If this happens, the model is producing samples from an artificially narrow (and improbable!) distribution, which could cause all manner of unlikely and/or unpredictable outputs.
To avoid this issue we assume that the model is able to generalize to futures consistent with the observations in its training set, but not to counterfactual pasts. As a result, all conditioning has to be made with respect to *future observations*. Specifically, observations must come after the end of the training set. This is a much stricter requirement than the one in [Conditioning Generative Models](https://www.alignmentforum.org/posts/nXeLPcT9uhfG3TMPS/conditioning-generative-models), and gives us much less room to craft counterfactual realities.
In principle we can make some counterfactual historical trajectories by only training the model on data up to some past date. The tradeoff is that the model will require enormous amounts of data to train, and pushing that date back to e.g. before the Internet probably makes the training set too small to be useful.
No Fixed Points
---------------
We don’t want the model to try to solve for fixed points in its output. This sort of thing arises most naturally when we try to use the model recursively (e.g. suggesting scenarios where human researchers use the model to simulate human researchers using the model, etc.). In such cases the model may try to produce self-fulfilling prophecies, resulting in optimization pressure to steer the future in particular (likely simple) ways.
This [seems](https://www.alignmentforum.org/posts/JqnkeqaPseTgxLgEL/conditioning-generative-models-for-alignment#Self_fulfilling_prophecies) [very dangerous](https://www.alignmentforum.org/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic), so I’d like to rule out all scenario that could result in simulated queries to simulated copies of the model. Unfortunately this is not as easy as just not conditioning on the model’s output.
To see why, note that we are right now contemplating querying this model in the future. It seems very likely that simulated humans will want to query the model too. We can try to condition on human researchers never querying generative models, but that’s a very unlikely future. An unfortunate consequence of this is that the model may think that a likely reason the humans don’t query the model is because there is a deceptive AGI manipulating the world (see “Failure Modes” above), and so this conditional raises the odds that we see outputs from worlds with deceptive AGI’s, which is bad.
I see two ways out of this problem:
1. Push the end of the training set back far enough that the model itself doesn’t appear in the training data, and that no one would plausibly build the model during the simulated time-span. That restricts our questions to near-past counterfactuals (see “Conditioning” above), but does seem like a robust solution.
2. Condition on a very early disaster that prevents the model from being used (e.g. global GPU shortage, policy change banning generative models, etc.). This one pushes down the chance that the model gets used but *also* pushes down the chance of any other advanced AI existing, which makes it much better than just conditioning on the model not being used. This may still net increase the odds of simulating a world with deceptive AGI, but it’s better than the alternatives.
Neither of these is totally satisfactory, and I think this general challenge of avoiding self-fulfilling prophecies is an important one to think more about. It seems conceivable that there are just ways to dissolve the problem entirely, which would be great. Broader discussions of this challenge and possible approaches can be found in [Training goals for large language models](https://www.alignmentforum.org/posts/dWJNFHnC4bkdbovug/training-goals-for-large-language-models) and [Conditioning Generative Models for Alignment](https://www.alignmentforum.org/posts/JqnkeqaPseTgxLgEL/conditioning-generative-models-for-alignment#Self_fulfilling_prophecies).
Finally, none of this prevents us from asking the model a question, getting an answer, and then using that answer to craft new questions. The problems I’m worried only apply to recursive (rather than sequential) applications of the model.
Factored Cognition Approaches
=============================
Generate 5 Years of Alignment Research
--------------------------------------
A simple approach is to ask for a copy of the AI Alignment Forums 5 years into the future. This doesn’t require any conditioning, and gives us a five year lead on where we would have been without the generative model. There is a small chance that we can research produced by a misaligned AGI, but that chance can be made very small by tuning the amount of time. If you think that five years is too long, and makes deceptive AGI’s too likely, you can substitute a shorter timeframe into this approach.
Direct 5 Years of Alignment Research
------------------------------------
We can do better than just generating five years of research though! Instead, we can *direct* that research. We can condition on researchers spending more of their time on what we believe to be the most important parts of the problem. Mild interventions like this seem unlikely to make the generative model invoke deceptive AGI’s, and could plausibly significantly increase the progress we see.
Ask What to Ask
---------------
Even better than directing research, we can allow research to proceed for five years and then ask the model what humans wish they had done instead. This could be done with a prompt of the form
> The research note with the title “What I wish we had worked on 5 years ago” was posted by Paul Christiano to the AI Alignment Forums in 2027. Its contents follow.
>
>
Now we might disagree with the arguments in such a post, but this seems like an extremely valuable way to understand the pitfalls of different agendas without actually having to explore them ourselves.
Build up a Picture
------------------
The above three approaches, which I’ll refer to as “Generate”, “Direct”, and “Ask”, can be run in parallel. We can generate many trajectories of alignment research. We can increase the diversity of those trajectories, and answer specific questions, by directing research towards the topics that seem most crucial. And we can solicit feedback from humans in these simulated worlds on what they wish they had done, which we can then incorporate into our judgements of what to ask next.
Incidentally, this approach is a special case of [Factored Cognition](https://www.lesswrong.com/tag/factored-cognition). \*\*\*\*It is also closely related to [HCH](https://www.lesswrong.com/tag/humans-consulting-hch), but is importantly not recursive (e.g. the humans we simulate do not themselves consult simulations of themselves).
What if we need more time?
==========================
To the extent that AI Alignment turns out to be a *broad* subject, by which I mean one that factors nicely into many parallel subproblems, I expect the above approaches to work well.
Unfortunately the problem may turn out to be very deep. It may be that what we need is a great deal of sequential research. That’s much harder to get out of our model.
To see why, note that the obvious thing to do here is to ask for research conducted over longer time horizons. The more we extend the horizon, though, the more likely it is that we sample from research produced by a deceptive agent. That seems very bad.
Alternatively, we can try to be clever and “import” research from the future repeatedly. For instance we can first ask our model to produce research from 5 years out. Then, we can condition our model on that research existing today, and again ask it for research 5 years out. The problem with this approach is that conditioning on future research suddenly appearing today almost guarantees that there is a powerful AGI involved, which could well be deceptive, and that again is very bad.
If we want more time, then, we need to condition the model in ways that make AGI timelines longer.
No More GPU’s
-------------
Prosaic AGI is likely to require ML chips (e.g. GPU’s, TPU’s, etc.). Most of these are designed in the US and manufactured abroad.
Suppose we condition the model on the US banning the export of ML chip designs. The US has a [long](https://thezvi.substack.com/p/repeal-the-foreign-dredge-act-of) [history](https://marginalrevolution.com/marginalrevolution/2022/07/a-pox-on-the-fda.html) of [misguided](https://en.wikipedia.org/wiki/Canada%E2%80%93United_States_softwood_lumber_dispute) [protectionist](https://en.wikipedia.org/wiki/Merchant_Marine_Act_of_1920) [policies](https://en.wikipedia.org/wiki/Smoot%E2%80%93Hawley_Tariff_Act), and ML chips could be called a national security issue, so this sort of conditional should be pretty mild.
The effect, though, is that ML chip manufacturing is set back by a few years as new fabs are built in the US. This extends the horizon where we can safely assume there are no deceptive AGI’s, which lets us answer deeper questions.
Redirecting Civilization
------------------------
We can plausibly get even more time by a three-step process:
1. Push AGI off into the future.
2. Steer civilization with conditionals such that we don’t *want* to build AGI.
3. Conduct thousands of years of alignment research.
For example, it seems unlikely but not implausible that a religion emerges that sees dignity only in the fruits of human work. If such a religion were sufficiently prominent it could result in efforts steering away from AI research and towards e.g. human exploration of space.
I think this kind of approach could work, but the order of operations is important! If we start conditioning on lots of unlikely and strongly correlated shifts in human values and society (as in (2)), the most likely story for how this happens is that there’s a deceptive AGI behind the scenes manipulating humanity. To make sure the model doesn’t generate that story, we step (1) needs to come first and make AGI in the relevant time period extremely unlikely.
Pulling this off seems tricky, but we can use the generative model to our advantage. We can buy ourselves a few extra years on (1), then explore timelines conditioned in different ways to try to arrange (2). To be careful we could start with very mild conditioning and explore them gradually (e.g. one month at a time), searching for a likely (but entirely human) path away from developing AGI.
Summary
=======
Even with more restrictions, generative models still look pretty good! In particular, even if we can’t condition on past counterfactuals, it looks like there are plausible ways to safely extract an awful lot of good alignment research by conditioning generative models.
One theme that comes up again and again in thinking through these scenarios is that there are only really two levers we can pull on. The first is to make human alignment research more productive by guiding it towards more promising paths and by parallelizing across different simulations as much as possible. The second is to make AGI less likely over some timeframe, buying us more time to safely simulate human researchers.
Both approaches seem important, but which one matters most depends a lot on the (unknown) nature of the problem. If we can solve AI alignment by exploring in great breadth, the first lever is more useful. On the other hand, if the problem requires great depth, meaning sequential thinking, we the second lever is where we should focus. |
0d3be2f0-d404-4d1c-83af-f42fc3b0da49 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Incentives affecting alignment-researcher encouragement
My hypothesis: I think the incentives for "cultivating more/better researchers in a preparadigmatic field" lean towards "don't discourage even less-promising researchers, because they could luck out and suddenly be good/useful to alignment in an unexpected way".
Analogy: This is like how investors encourage startup founders because they bet on a flock of them, not necessarily because any particular founder's best bet is to found a startup.
If timelines are short enough that [our survival depends on [unexpectedly-good paradigms]], and [unexpectedly-good paradigms] come from [black-swan researchers], then the AI alignment field is probably (on some level, assuming some coordination/game theory) incentivized to [black-swan farm](http://www.paulgraham.com/swan.html) researchers.
Note: **This isn't necessarily bad** (and in fact it's probably good overall), it just puts the incentives into perspective. So individual researchers don't feel so bad about "not making it" (where "making it" could be "getting a grant" or "getting into a program" or...)
The questions: Is this real or not? What, **if anything**, should anyone do, with this knowledge in hand? |
6d94df24-3d13-4827-ac68-4d6e82ff97d3 | trentmkelly/LessWrong-43k | LessWrong | Politics are not serious by default
[This is part of a series I’m writing on how to convince a person that AI risk is worth paying attention to.]
tl;dr: People’s default reaction to politics is not taking them seriously. They could center their entire personality on their political beliefs, and still not take them seriously. To get them to take you seriously, the quickest way is to make your words as unpolitical-seeming as possible.
I’m a high school student in France. Politics in France are interesting because they’re in a confusing superposition. One second, you'll have bourgeois intellectuals sipping red wine from their Paris apartment writing essays with dubious sexual innuendos on the deep-running dynamics of power. The next, 400 farmers will vaguely agree with the sentiment and dump 20 tons of horse manure in downtown Bordeaux. (This happened last Monday.)
The ground is horse manure. I bike through this spot every morning to go to school.
Take one of my classmates: he has an encyclopedic knowledge of obscure French philosophers, a high g factor, and a fantastic written ability. He also wears only black, calls himself a trotskyist, and doesn’t know the barest basics of Economy 101.
He and his teenage friends want to topple the French government (a common ambition in France). When I asked him what he wanted to do after they succeeded, he told me he “didn’t have the arrogance to know how he would lead". I am blown away by his wisdom. What humility! Surely, a modest leader like himself couldn't screw things up. [1] This man has read over 1 million words' worth of Hegel, and has led a protest which consisted of waking up at 6AM to build an improvised barricade out of garbage cans at the school entrance. (They cooked barbecues and played cards for the rest of the day.) It's all a game to him: he aims to become a high school teacher, which isn't instrumentally useful to his revolutionary ends at all.
The rest of my classmates aren't much better. Politics are a category labeled "serious adult |
2456d71e-f2cd-4833-b12c-346671fe83e6 | trentmkelly/LessWrong-43k | LessWrong | The medical test paradox: Can redesigning Bayes rule help?
A great coverage of one of my favorite Youtuber's of the (on LessWrong) classical reformulation of Bayes rule in terms of odds instead of probabilities. I expect to link a bunch of people to this, in particular if they get stuck reading Eliezer's intuitive explanation of Bayes.
|
34b01731-757b-47bf-9139-e66141c98bab | trentmkelly/LessWrong-43k | LessWrong | The Dead Planet Theory
Hi, this is my first post on LessWrong but I have been in rationalist adjacent circles for the last three or four years thanks to Twitter. Like many others here I read HPMOR in high school and thought it was fascinating. I was heavily into forum culture growing up but it was focused on competitive gaming scenes, even going pro on World of Tanks. I spent over six years working in offensive cybersecurity, help manage my family farm and ranch, and am currently attending law school after I got sucked into the field because I thought the LSAT was a fun test.
Through Twitter I learned more about rationalism as a community rather than a concept, and met several people in person through meetups and one-on-one meetings. I recently started writing to get some of the ideas bouncing around my head out on paper. Nothing I have written about yet is too novel conceptually, so I try to use data, real world examples, or lived experiences to bring these concepts to life. One thing I've noticed is that rationalists are very intelligent, and can use that intelligence to find very good reasons not to do something. I am a massive proponent for action, and believe that by pursuing life you will be rewarded.
No one does anything, so with just a little effort you can automatically join the top 20% of any activity or pursuit. I wrote an article on how few people even start, as well as some ways to enable extraordinary outcomes with marginally more effort. I thought this audience would enjoy it as it was very well received on rationalist Twitter. |
e493abe2-ce91-4d70-9065-9700f5e8c82d | trentmkelly/LessWrong-43k | LessWrong | So, just why do GPTs have to operate by continuing an existing string?
Cross-posted from New Savanna.
Why can’t they, you know, just come out with it, like we do?
Yeah, I know, that’s how they’re trained; they’re trained to predict the next token. I’m not sure what that tells me.
So, when I give ChatGPT the prompt, “Tell me a story,” that isn’t what is actually presented to the underlying LLM. There isn’t anything that ‘naturally’ follows such a string. It could be literally almost anything. So, a bit of manipulation has to take place in order to give the LLM something to work with. But why?
I decided to put the question to GPT-4:
> GPTs (that is, transformers) are constructed to continue an existing string of tokens. Why is that? Why can’t they simply answer a question without continuing an existing string?
>
> GPTs, or Generative Pre-trained Transformers, are language models that are pre-trained on large amounts of text data using unsupervised learning. These models are designed to generate coherent and contextually relevant text based on the input provided to them.
>
> The reason GPTs are constructed to continue an existing string of tokens is because they are designed to capture the context and meaning of the text, rather than just providing a one-word answer to a question. By generating a continuation of the input text, the model can incorporate the surrounding context and generate a more complete and accurate response.
>
> In other words, GPTs are not just simple question-answering systems, but rather complex language models that can generate text that is contextually relevant and meaningful. By training on a large corpus of text, GPTs can learn to identify patterns and relationships in language, allowing them to generate text that is more than just a simple answer to a question.
>
> That being said, GPTs can be fine-tuned to answer specific questions or perform specific tasks, such as question-answering or text classification, by providing them with appropriate training data and modifying their architecture accordingly |
21d70f04-751d-4412-997e-445752428601 | StampyAI/alignment-research-dataset/arxiv | Arxiv | An Offline Risk-aware Policy Selection Method for Bayesian Markov Decision Processes
An O ine Risk-aware Policy Selection Method for Bayesian
Markov Decision Processes
Giorgio Angelottia,b,, Nicolas Drougarda,b, Caroline P. C. Chanela,b
aANITI - Artificial and Natural Intelligence Toulouse Institute, University of Toulouse, France
bISAE-SUPAERO, University of Toulouse, France
Abstract
In Oine Model Learning for Planning and in O ine Reinforcement Learning, the limited data
set hinders the estimate of the Value function of the relative Markov Decision Process (MDP).
Consequently, the performance of the obtained policy in the real world is bounded and possibly
risky, especially when the deployment of a wrong policy can lead to catastrophic consequences.
For this reason, several pathways are being followed with the scope of reducing the model error (or
the distributional shift between the learned model and the true one) and, more broadly, obtaining
risk-aware solutions with respect to model uncertainty. But when it comes to the final application
which baseline should a practitioner choose? In an o ine context where computational time is not
an issue and robustness is the priority we propose Exploitation vs Caution (EvC), a paradigm that
(1) elegantly incorporates model uncertainty abiding by the Bayesian formalism, and (2) selects
the policy that maximizes a risk-aware objective over the Bayesian posterior between a fixed
set of candidate policies provided, for instance, by the current baselines. We validate EvC with
state-of-the-art approaches in di erent discrete, yet simple, environments o ering a fair variety
of MDP classes. In the tested scenarios EvC manages to select robust policies and hence stands
out as a useful tool for practitioners that aim to apply o ine planning and reinforcement learning
solvers in the real world.
Keywords: Risk-aware Markov Decision Process, Bayesian Markov Decision Process, O ine
Policy Selection, O ine Model Learning for Planning, O ine Reinforcement Learning
1. Introduction
The deployment of autonomous agents in an unknown, and possibly stochastic, environment is a
delicate task that usually requires a continuous agent-environment interaction which is not always
aordable in real-life situations. For instance, in applications such as the training of medical
robots and automated vehicles [ 1,2] the interaction with the environment can be both too risky
(e.g. proximity to a human) and costly since: (i) any mistake could lead to catastrophic aftermaths;
or (ii) the data collection phase requires a direct human involvement, which is usually expensive
and time demanding. Hence, it can be convenient to exploit previously collected data sets in order
to limit additional (dangerous or superfluous) interaction.
Corresponding author
Preprint submitted to Artificial Intelligence April 12, 2023arXiv:2105.13431v2 [cs.LG] 11 Apr 2023
Oine model learning for planning and o ine Reinforcement Learning (RL) are the branches
of machine learning that leverage previously collected batches of experiences with the aim of
establishing an optimal behavioral policy o ine. In recent years, the RL community published
a great number of papers on the subject, as for instance, the works in [ 3,4,5,6,7,8,9,10,11],
demonstrating the growing interest in the field. The proposed algorithms try to improve the
performance of a policy obtained either with model-free or model-based RL approaches. The
intuition behind these methods is always the same: optimizing a trade-o between exploitation
and caution . The policy optimization procedure is usually tailored in order to generate strategies
that are not too distant from the one originally used to collect the batch. In this way, the agent
will follow a strategy that will not drive him towards regions of the state-action space for which it
possesses a high degree of uncertainty.
For instance, in the o ine RL literature tailored policy optimization procedures for Markov
Decision Processes (MDPs) are implemented: (i) in Conservative Q-Learning (CQL) approach
[12], by limiting the overestimation of Out-Of-Distribution transitions; (ii) in the Behavior
Regularized Actor-Critic (BRAC) paradigm [ 6], as a penalty in the value function proportional to
an estimate of the policies’ distributional shift; (iii) in the Model-based O ine Policy Optimization
(MOPO) algorithm [ 9], as a penalization added to the reward function which is proportional to an
estimate of the distributional shift in the dynamical evolution of the system - also called epistemic
(model) error; and, (iv) in the Model-Based O ine Reinforcement Learning (MOReL) approach
[10], by creating an additional and highly penalized absorbing state and by forcing the agent to
transit to it when the model uncertainty for a specific state-action pair is above a given threshold.
On top of a non-trivial estimate of per-transition uncertainty, which is often performed with
Deep Neural Networks, the said baselines notably require the fine-tuning of domain-dependent
hyperparameters [ 13]. Such an empirical calibration demands additional interaction with the
environment and thus betrays the original purpose of o ine learning.
How to select the best set of hyperparameters for o ine RL baselines based on Deep Neural
Networks? This question does not have a trivial answer [ 11]. Usually, the policies obtained are
evaluated o ine with opolicy-evaluation (OPE) approaches like Fitted Value Iteration [ 14]
or Fitted Q Evaluation (FQE) [ 15]. Indeed, [ 13] showed that ORL baselines using DNN are
not robust with respect to hyperparameter selection and presented a comparison between o ine
hyperparameter selection methods. Nevertheless, algorithms like FQE require hyperparameters to
be tuned, shifting then the problem of selecting the best hyperparameters for the ORL baseline
to the one of selecting the hyperparameters for the OPE. Recently, a totally hyperparameter-free
method called BVFT for OPE has been proposed in the work in [ 16]. However, the said method
is aected by limited data e ciency and computational complexity that scales quadratically with
the number of models to compare. Concurrently, the work in [ 17] presented a pessimistic method
to estimate and select models for O ine Deep RL.
From another perspective, the Safe Policy Improvement with Baseline Bootstrapping (SPIBB)
methodology [3] and the Batch Optimization of Policy and Hyperparameter (BOPAH) approach
[7] were developed with the aim of obtaining a robust policy with a true hyperparameter agnostic
approach. The former generates a safe policy improvement over the data collecting policy
with theoretical guarantees similar to the ones achievable in Probably Approximately Correct
approaches, the latter uses a classic batch (o ine) RL approach with a gradient-based optimization
of the hyperparameters using held-out data.
Aside from the RL community, researchers whose field is mostly o ine model learning
for planning deal with the problem of solving MDPs under model uncertainty by focusing on
the resolution of robust MDPs [18,19]. In this context, the model dynamics (e.g. a stochastic
2
transition function) lie in a constrained ambiguity set which is a subset included in the whole
set of distributions. The problem is hence formulated as a dynamic game against a malevolent
nature which at every time step chooses the worst model in the set according to the agent
action. Subsequently to these works, the research in [ 20] introduced the chance constrained
MDP approach which optimizes policies for the percentile criterion : the Value-at-Risk ( VaR)
metric. Reference [ 20] proved that robust MDPs can generate overly conservative strategies
depending on the size and shape of the ambiguity set. And, reference [ 21] proposed approximate
solutions to generate safe policy improvements of the data collector policy. Recently, the works
in [22,23] incorporated prior knowledge upon a Bayesian methodology in order to obtain less
conservative ambiguity sets that can yield tighter safe returns estimates. In particular, the study
in [22] proposes Bayesian Credible Region (BCR), an algorithm that constructs ambiguity sets
from Bayesian credible (or confidence) regions and uses them to optimize the risk-aware problem
(the robust MDP). Reference [ 24] introduced the Soft-Robust Value Iteration (SRVI) algorithm to
optimize for the soft-robust criterion , a weighted average between the classic value function and
the Conditional VaRrisk metric with epistemic model uncertainty, to solve a Robust MDP.
In parallel to these works, reference [ 25] showed that planning in an MDP context using a
discount factor
lower than the one used in the final evaluation phase
evyields more performing
policies when a trivially learned MDP model is considered. A trivially learned MDP model is said
to be the one that maximizes the likelihood of the transitions collected in the batch. Nevertheless,
the mathematical expression that should be optimized in order to find
is intractable. In the work
in [25], the optimal discount factor is finally found by cross-validation which requires additional
interaction with the true environment.
O-policy evaluation for finite, discrete MDPs usually resorts to techniques based on Im-
portance Sampling [ 26,11]. The Importance Sampling procedure assigns di erent weights to
samples when one exploits them to estimate values from a distribution that is di erent from the
one that was used to generate the samples. This weight, called the Importance Sampling ratio,
in o-policy evaluation is the probability of sampling a specific trajectory using the new policy
over the probability of obtaining the same trajectory using the policy used to collect the data. This
ratio is independent of the models’ transition function and can be simplified as the ratio between
the probabilities of generating that given sequence of actions while deploying the two di erent
policies. The Universal O -Policy Evaluation (UnO) [ 27] has been proposed to estimate not only
the average value and the variance of policy performance but also risk-sensitive metrics based on
quantiles like the VaRor the CVaR . UnO is a non-parametric and model-free estimator based on
Importance Sampling for the cumulative distribution of returns of a fixed policy starting from a
pre-collected batch of experiences. Estimating the full cumulative distribution allows computing
risk-sensitive metrics like the Value at Risk and the Conditional Value at Risk.
Unfortunately, UnO and other Importance Sampling based techniques manage to properly
estimate the MDP’s value function only for stochastic policies while many policies generated by
state-of-the-art approaches are deterministic. On one hand, it is true that a deterministic policy is
just a specific type of stochastic policy, but on the other hand, the computation of the importance
sampling ratio for deterministic policies collapses onto a Kronecker delta. Eventually, the o ine
evaluation of deterministic policies with Importance Sampling ratio is not accurate. There is a
necessity of developing a technique to evaluate and select o ine robust deterministic policies.
Concurrently, with the aim of finding a policy that optimizes the trade-o between exploitation
and exploration in an online setting, model uncertainty has been included in a Bayesian extension
of the MDP framework called Bayesian (Adaptive) MDP (BAMDP) [ 28]. Fixing a prior for the
distribution over transition models, a posterior distribution is computed from the likelihood of the
3
sampled trajectories. Some years later, the work in [ 29] suggested that risk-aware utility functions
can replace the common BAMDP value function. In doing so, the said work proposed an algorithm
that trades o exploration, exploitation, and robustness (caution). The said works (e.g. [ 28,29])
deal with an online context when interaction with the environment is always possible while, as
we stated before in this manuscript, we will tackle an o ine problem. The works in [ 30,31]
exploit Bayesian Neural Networks with latent variables to encode the uncertainty in model-based
Reinforcement Learning with the generator model represented as a Deep Neural Network. On top
of this, optimization for a risk-sensitive utility function that is the future expected return plus a
variance term that includes both noise and model uncertainty is performed. Unfortunately, the
said risk-sensitive objective can unlikely mitigate e ciently the risk in environments where the
distribution of returns is far from Gaussian, e.g.multi-modal distributions.
Solving a Bayesian MDP is NP-hard [ 32]. Moreover, the space Mover which the posterior
distribution of models is defined is an infinite set. To overcome this computational constraint,
the work in [ 32] proposes to find the policy that maximizes a utility criterion over a finite list of
models. The said framework is called Multi-model MDP. While the maximization of the utility
criterion using a finite list of models is more treatable than solving a Bayesian MDP, constraining
a possibly infinite set of models to a set of candidate ones could in many cases be a too strong
approximation. In this work, we will not limit the space of possible models, but instead of solving
a Bayesian MDP, we will rather focus on the problem of selecting the most performant policy from
a set of candidates, according to a risk-sensitive criterion, over the full distribution of possible
models.
Interestingly, the problem of selecting the right algorithm for a machine learning problem was
studied in the work in [ 33] that formalized the protocol to be followed to analyze and solve an
algorithm selection problem. And, following the work in [ 33], Upper Confidence Bound (UCB-1)
for Multi-Armed Bandits has been applied in [ 34] for algorithm (and hence policy) selection in
Reinforcement Learning, however, the said approach was not risk-aware and was limited to an
online setting.
In fact, in light of the current limitations of state-of-the-art, we notice that selecting a robust
policy for an o ine data-driven MDP taking into account the uncertainty in the model learning
phase is still an open problem. In this context, our work presents Exploitation vs Caution (EvC),
anoineBayesian paradigm to evaluate the performance of the policies provided by the state-
of-the-art solvers and select the best policy between a set of candidates according to di erent
risk-metrics ( VaRand also the Conditional VaR). The set of candidate policies is initially obtained
using the available baselines. Taking inspiration from the works in [ 25] and [ 29], the set of
candidate policies is then enriched considering strategies that are obtained by solving several
MDPs with di erent discount factors and di erent transition functions which are sampled from
the Bayesian posterior inferred from the original fixed batch of pre-collected transitions. The
distribution of the performance of every policy taking into account the model uncertainty is
evaluated by alternating Monte Carlo model sampling and Policy Evaluation. In the end the best
policy according to the risk-sensitive criterion is selected.
4
In summary, the contributions of this work are:
1.it proposes a paradigm to select the best risk-aware policy between a set of candidate
policies for a Bayesian MDP in the o ine setting;
2.it gives probabilistic guarantees on the performance of the selected policy by computing
risk-aware criteria through a confident estimation method for a given risk level (or quantile
order);
3.the empirical results demonstrate the validity of this approach in toy environments compared
to the current policy selection baseline (UnO).
The paper is organized as follows: Section 2 starts with a recap of the MDP and of the Bayesian
MDP (BMDP) formalisms; risk-aware measures following the prescriptions of the research in
[35] and the Risk-aware BMDP are introduced in Section 3; in Section 4 the EvC approach is
presented; then in Section 5, the policy selected by EvC is compared against selected baseline
approaches, and to a risk-sensitive policy selection approach achieved by evaluating the set of
candidate policies with UnO; Section 6 concludes the paper by discussing its limitations and
pointing to future work perspectives.
2. Background
Definition 1. A Markov Decision Process (MDP) is a 6-tuple Mdef=hS;A;T;r;
; 0iwhereS
is the set of states, Athe set of actions, T:SAS! [0;1]is the state transition function
T(s;a;s0)defining the probability that dictates the evolution from s2Stos02Safter taking
the action a2A,r:SA! [rmin;rmax], with rmax;rmin2R, is the reward function r(s;a)
that indicates what the agent gains when the system state is s2Sand action a2A is applied,
2[0;1)is called the discount factor and 0:S! [0;1]is the distribution over initial states:P
s2S0(s)=1.
Definition 2. A policy is a mapping from states to a probability distribution over actions, such as
:AS! [0;1].
Definition 3. Solving an MDP amounts to finding a policy which,8s2S, maximizes the value
function:
V
M(s)def=EAt
StT26666641X
t=0
tr(St;At)S0=s3777775: (1)
It has been proved that an MDP for which the value function is defined as Eq. (1) admits a
deterministic optimal policy (a map from states to actions) [36]:
(s)=argmaxV
M(s): (2)
Definition 4. The performance of a policy in an MDP Mwith value function V
Mis defined as:
u(M)=ES0V
M(S): (3)
5
Definition 5. A BMDP is a 8-tupledef=hS;A;T;R;;;
;BiwhereSis the set of states; A
the set of actions;Tis a parametric family of transition functions Tof any MDP compatible with
SandA:T=T:SAS! [0;1]s.t.P
s02ST(s;a;s0)=1 ;Ris a parametric family of
reward functions rof any MDP compatible with SandA:R=r:SA! [rmin;rmax] ;is a
non-informative prior distribution uniform over T:R
T2Td=1with0;is a non-informative
prior distribution uniform over R:R
r2Rd=1with0;
2[0;1)is the discount factor, and
B=f(st;at;rt;st+1)gis a batch of transitions generated by acting in a fixed, unknown MDP
compatible withSandAand initial state distribution 0.
For instance, in a finite state and action spaces environment, Tis the set of alljSjjAj dierent
discrete distributions and is made ofjSjjAj uniform (uninformative) Dirichlet probability
density functions – the conjugate prior of the said distribution.
Definition 6. pis a posterior distribution over Tobtained by updating the uniform (uninforma-
tive) Dirichlet prior with the information contained in B.
In particular, thejSjprobability values Xiwith i2f1;:::;Sgdescribing the probability of
(S=s;A=a)!(S0=si) can be distributed as:
s;a
p
x1;:::; xjSjn1;:::; njSj
= ()jSjY
i=1xni
i
(ni+1)(4)
where, is the Euler gamma function, nicounts how many times the transition (s;a)!si
appears inBand=PjSj
k=1(nk+1).
Remark. Notice that the mode (the most likely configuration) of the posterior in Eq. (4) is given
by ˆxi=niPjSj
k=1nkwhile its expected value is Ep[Xi]=ni+1
.
Since we consider discrete environments, the most likely transition model with respect to p
is the one for which the transition probabilities are given by the transition frequencies in B. We
refer to these distributions as the trivial model, noted as ˆT. We note that a similar reasoning can
be applied toRand, however for simplicity’s sake we assume to know the reward function rin
this work.
Remark. It would be possible to define a prior over the initial states and obtain a posterior taking
into account the information contained in the batch B. For simplicity, we will also assume that 0
is known.
Definition 7. A solution to a BMDP is a policywhich maximizes the following utility function:
U
def=EMpu(M)(5)
where, u(M)def=ES0h
V
M(S)i
is the expected value of an MDP , averaged on the initial state,
with transition function sampled from p.
The optimal performance with respect to Eq. (5) will be the one that, on average, works the
best on the BMDP when the model is distributed according to the Bayesian posterior:
U
=max
U
(6)
Remark. Since the true MDP is unknown, leveraging the Bayesian framework is an elegant
way to incorporate uncertainty. However, the additional expected value makes Eq. (5) hard to
be computed with Bellman’s recursive approaches or approximated with temporal di erences
methods. Indeed, the objective stated in Eq. (6) is NP-hard [32].
6
3. Risk-aware measures
We advocate that solving a BMDP deals with uncertainty more elegantly than solving an MDP
for the most likely model ˆM=(ˆT;ˆr). However, the utility function defined in Eq. (5) does not
minimize the risk of obtaining a bad performant policy in the real environment.
For instance, let Pr(u(M)=ujB)be the probability density function (pdf) of the performance
of the policy when the model is distributed according to the Bayesian posterior p. Given
two policies 0and1, we can have that E[u0(M)]>E[u1(M)](see Figure 1). In this case,
following the BMDP optimization criterion, 0is better than 1. However, when fixing a value u
less than both expected values, it can happen that Pr
u0(M)<uB
>Pr
u1(M)<uB
.
Figure 1 Pdf for the performance u(M)=ugiven the batchBand two di erent policies 0and
1. Dashed lines correspond to the expected value. Both curves are filled up to their 0:25-quantiles
(a0:25). The red curve has a higher expected value while the blue one corresponds to a safer policy.
a0:25Pr
u1(M)=uB
Pr
u0(M)=uB
E[u0(M)] E[u1(M)]uPr
u(M)B
With this in mind, it can be useful to define risk-aware utility functions. Risk measures are
widely studied in mathematical finance [ 37] since they are a way to rationally quantify risk. Their
application to MDPs under model uncertainty was investigated in the work in [35].
In the following, we introduce two risk measures: the Value at Risk (VaR)and the Conditional
Value at Risk (CVaR )both inspired by the work in [38].
LetMbe a random variable governed by a probability measure Pron its domain M, and
u:M!Rbe a measurable function such that Eu(M)<+1. Following the work in [ 38], we
define the cumulative density function of u(M) as:
(a)=Pr u(M)a(7)
While in finance or insurance industry, losses that should be minimized (by looking for the optimal
decision) are considered, in the MDP framework, the function uis a utility function that should be
maximized (by seeking the optimal strategy). Let us consider a (low) risk level q2(0;1), that
corresponds to the (high) confidence level in [38].
Definition 8 (Value at Risk) .The Value at Risk ( VaR) of the utility function u, at the risk level q
is
aq=mina2Rj (a)q : (8)
The minimum in 8 is attained because is non-decreasing and right continuous. The definition
of the Conditional Value at Risk is slightly di erent from the one in [ 38], because again, in the
MDP context, the lowest gains (to be maximized) are considered, and not the highest losses (to be
minimized) like in finance.
7
Definition 9 (Conditional Value at Risk) .Let the cumulative density function of u(M)conditional
tofu(M)6aqgbe
q(a)=8>>><>>>: (a)
(aq)foraaq;
1 for a>aq:(9)
The Conditional Value at Risk ( CVaR ) of the utility function uat risk level qis the expectation of
the variable drawn by the cumulative density function 9
q=EX q[X]: (10)
Note that (aq)may be higher than q, that is why (aq)is used instead of qin the denominator
of the Eq. 9, following the rescaling solution proposed in [ 38] in case of a probability atom at aq.
With these definitions, the (Conditional) Value at Risk of the utility function uat risk level qis
also the (Conditional) Value at Risk of uat risk level (aq)q.
Remark. Since the space of possible transition functions, that can be sampled from a Dirichlet
posteriorp, has the cardinality of the continuum, the distribution of the performance with respect
to the uncertainty is continuous.
3.1. Risk-aware Solutions to BMDPs
In the following, the BMDP utility of Eq. (5) is generalized to take the risk into account.
Definition 10. Letbe a BMDP and let V
M(s)be the value function at state swhile following
a policyin the MDP Mwith transitions distributed according to the posterior p. Let also
Pr(u(M)jB)be the pdf over the possible values assumed by u(M)=ES0h
V
M(S)i
given the
batchB. Then a risk-aware utility function is defined as:
U
;def=Mpu(M)(11)
where,is a risk measure.
Remark. As a consequence of Definition 9 if is the Conditional Value at Risk at risk level 1
then the BMDP utility of Eq. (5) is a particular case of Eq. (11).
4. Solving O ine a Risk-aware BMDP
The expectation over the distribution of models makes the solution of a BMDP an intractable
computational task. Moreover, a Risk-aware BMDP also presents an additional di culty: the risk
measure requires an estimate of the quantiles of the unknown value distribution given a policy.
An analytical maximization of the performance defined in Eq. (11) is often either impossible or
too computationally demanding. In order to tackle the maximization problem, a valuable choice
is resorting to a Monte Carlo estimate of the performance. We will then look for a sub-optimal
policy, rather than an optimal one, by constraining the search to a set of candidate policies .
However, what number L2Nof models would be necessary to be sampled in order to have
an accurate estimate of the performance of a policy within a chosen confidence interval? Ideally,
Lshould be as small as possible because Policy Evaluation has to be performed Ltimes in order
to obtain the Bayesian posterior distribution of values assumed by the Value Functions.
8
Fortunately, this problem has been addressed by the work in [ 39]. It proposes a procedure that
allows iteratively sampling values from a distribution whose quantile is required until the estimate
of the said quantile will fall within a confidence interval with a required probabilistic significance.
In the present work, we exploit the idea of estimating a quantile through sampling to pro-
pose the Monte Carlo Confident Policy Selection (MC2PS) algorithm. MC2PS is presented in
Algorithm 1. MC2PS identifies a robust policy for a Risk-aware Bayesian MDP among a set of
candidate policies. In detail, for a given set of policies and for every policy 2, the algorithm
Algorithm 1 MC2PS
Input : set of policies , significance level 2[0;1], sampling batch size k2N, relative
error tolerance "rel2(0;1], posterior distribution p, risk level q2(0;1), risk measure , initial
state distribution 0, evaluation discount factor
.
Output: best policy.
1:for2do
2:U
; RiskEvaluation
;; p;0;"rel;;q;k;
3:end for
4:return=arg max
2U
;
5:
6:procedure RiskEvaluation
7: Input : policy, risk measure 2fVaR;CVaRg, posterior distribution p, initial state
distribution 0, relative error threshold "rel2[0;1], significance level 2[0;1], risk level
q2(0;1), sampling batch size k2N, evaluation discount factor
.
8: Initialize u=?
9: (the loop estimates the quantile needed in Eq. (11))
10: repeat
11: forj2f1;:::; kgin parallel do
12: SampleMjp
13: V
Mj(s) Policy Evaluation on model Mj
14: u(Mj) ES0[V
Mj(S)] Eq. (3)
15: u append u(Mj)
16: end for
17: L juj
18: Sort uin increasing order
19: Find (g;h)2N2such thatjh gjis minimal and:
20: Pr(u
gaq<u
h)=Ph 1
i=gL
i
qi(1 q)L i
>1 ;
21: until :
22: u
h u
g<" rel
u
L u
1
23: if=VaR then
24: baq u
g
25: return baq
26: if=CVaR then
27: bq 1
gPg
i=1u
i
28: return bq
29:end procedure
9
incrementally samples ktransition models from pand performs Policy Evaluation in parallel
for each one of them until the stopping criterion is reached (see the RiskEvaluation procedure in
Alg. 1). The stopping criterion guarantees that the estimate of the q-quantile is statistically well
approximated with a significance level within a dynamically sampled confidence interval whose
width is smaller than "rel(lines 19-22) given the total Lmodels sampled.
Indeed, being u
i:=u(Mi)the list of ordered performance values obtained from the sampled
Lmodels with i2f1;:::; Lg, the probability that the elements with indices handgof this
list are bounding aq, will be given by the probability of the union of all the (incompatible)
events that lead to u
gaqu
h(see Figure 2). In detail, let aqbe the theoretical Value
at Risk of u(M)at risk level q. Let us denote sampled utility values in increasing order by
u(M1):::u(ML), and suppose that the utility distribution has no probability atom at aq:
81iL, (aq) :=Pr u(Mi)aq=q. Let us introduce the random variables
Bi=1
u(Mi)aq =8>><>>:1;ifu(Mi)aq;
0;otherwise.(12)
The random variables Biare drawn by a Bernoulli distribution with parameter q. The random
variable B=PL
i=1Biis the number of sampled utilities that are lower than aq, drawn by a binomial
distribution with parameters Landq. The eventu(Mg)aq<u(Mh) is[h 1
i=gfB=ig,i.e.the
event “there are exactly g,g+1, . . . , or h 1sampled utility values that are lower than aq”. Using
the binomial distribution formula, the probability of this event is
Pr
u(Mg)aq<u(Mh)
=h 1X
i=gPr(B=i)=h 1X
i=g L
i!
qi(1 q)L i: (13)
Hence by imposing constraintPh 1
i=gL
i
qi(1 q)L i>1 when selecting indices rands, we
ensure that
Pr
u(Mg)aq<u(Mh)
>1 ; (14)
i.e.we get probabilistic bounds computed from the sampled utility values.
Note that, if there is a probability atom at aq,i.e.Pr u(Mi)=aq>0, the previous reasoning
cannot be applied directly in the case where q< (aq) :=Pr u(Mi)aq. However, we can
write
q :=Pr u(Mi)<aqq<q+:=Pr u(Mi)aq; (15)
and one can show that selected indices gandhare non decreasing with q. Thus, using the risk
level q, selected indices are higher than those that would be selected using q , and lower than
those that would be selected using q+, both corresponding to utility values bounding the location
of the probability atom, i.e.the Value at Risk aq, with probability 1 .
Eventually, the algorithm leverages the estimate of both the Value at Risk and of the policy
value achieved on sampled models to obtain an estimate of the utility function U
;for a specific
risk measure and risk level q. For instance, return the estimate of aqif=VaR orqif
=CVaR (lines 23-28). Finally, once the utility function has been estimated for every policy, it
outputs the one that maximizes it (line 4).
Remark. Letbe the total number of models sampled to estimate the quantile of the performance
distribution among policies: =P
2L. MC2PS performs Policy Evaluation times. The size
of the space of all applicable policies of a finite state and action space MDP is jj=jAjjSj. It goes
10
Figure 2 Example of how the estimate in Algorithm 1 works: imagine you have an ordered list
with Lvalues ui,i2f1;:::; Lgrepresented in the figure as rows. The probability of the event in
Eq. (14) isL
g
qg(1 q)L gand corresponds to the probability of the random variable Bdefined in
Eq. (12) to assume all integer values between gandh 1. The said probability is the sum of the
probability of the events B=iwith gi<h. In the figure, every addend is represented as a row.
In blue are encircled the values of uismaller than aqand in green the ones bigger. The algorithm
looks for the indices (g;h)=argmin(g;h)jg hjsuch that Pr(ugaq<uh)>1 anduh ug<"
where"is an error term dictating the maximum acceptable size of the confidence interval.
u1u1u1u1
::::::::::::
ur 1ur 1ur 1ur 1
ugugugug
::::::::::::
uh 1uh 1uh 1uh 1
uhuhuhuh
::::::::::::
uLuLuLuLg h
Possible
events
ugaq<uh
without saying that looking over the whole policy space can be practically intractable even for
not-so-big MDPs. Nevertheless, restricting the research to a subset of policies could be a viable
solution also for big MDPs, also considering that the Policy Evaluations are carried out in parallel.
4.1. Exploitation vs Caution (EvC)
Reference [ 25] shows that the policy obtained by solving an MDP ˆM=(S;A;ˆT;r;
),
trivially learned from a batch of experiences collected from another MDP M=(S;A;T;r;
ev),
with
a discount factor such as
ev, is more e cient in Mthan the policy obtained by
solving ˆMusing
ev. The reason is that ˆTis an approximation of Tand may not be trusted
for long-term planning horizons. The selection of the best
optimizes a trade-o between the
exploitation of the information contained in the batch and the necessity of being cautious since
the model estimate is not perfect.
Inspired by the conclusions of reference [ 25], and also guided by the intuition that the model
Mthat generated the data will be di erent from ˆM, but hopefully close to it, we expect that the
policies obtained by solving another MDP ˜M=(S;A;˜T;r;
)with ˜Tclose toˆTand
evcan
be viable solutions for the Risk-aware Bayesian MDP.
Henceforth, the Exploitation versus Caution (EvC) algorithm is presented and schematized in
Algorithm 2. EvC will search for a promising risk-aware policy by focusing the search on a set
of candidate policies computed with several baseline algorithms A, which is further enriched
by solving di erent MDPs ˜Mand
evvalues. Remember that the goal is to find a policy
that is performant in the model Mtaking into account that the agent does not have access to the
probability values that define the model, but only to the pre-collected batch. Model uncertainty
is framed within a Bayesian MDP. As already stated, we do not aim to find the optimal solution
11
Algorithm 2 EvC
Input : risk level q2[0;1], significance level 2[0;1], sampling batch size k2N, relative
error tolerance "rel2[0;1], posterior distribution p, risk measure , initial state distribution 0,
set of discount factors G, number of models to solve l2N,Bbatch of transitions, evaluation
discount factor
ev
Output: best policy.
1: Generate Policies
p;G;l
2:return=MC2PS
q;;k;"rel;p;;; 0;
ev
3:
4:procedure Generate Policies
5: Input : posterior distribution p, set of discount factors G, number of models l2Nto be
solved,Bbatch of transitions.
6: Initialize M=n
ltransition modelspo
[fˆTg
7: Initialize =?(an empty set)
8: Initialize A=fSPIBB;BOPAH;BCR;NORBUg(examples of baseline algorithms)
9: for(
2G;T2M)do
10:(T;
)=solution to the MDP with Tand
11: Append(T;
)toif(T;
)<
12: end for
13: foralgorithm2Ado
14:algorithm =solution to the O ine MDP withBandalgorithm
15: Appendalgorithm toifalgorithm <
16: end for
17: return
18:end procedure
to the BMDP, but rather to select the best policy in terms of robustness within the ones in the
candidate set.
In detail, EvC first generates candidate policies that will constitute the set (line 1 calls
Generate Policies procedure). For this, starting from the batch the problem is solved using a
portfolio constituted of state-of-the-art algorithms (line 8). On top of that the trivial MDP1ˆMandl
additional MDPs are sampled from the Bayesian posterior pobtained from the batch (line 6), and
then solved with di erent values off
2Gj
evg(lines 9-12). Recalling that
evis the discount
factor of the Risk-aware Bayesian MDP. Note that the obtained set has unrepeated solutions
(line 11 and line 15). As a last step, MC2PS is launched with the obtained set of candidate policies
returning the best risk-aware solution 2(line 2).
Remark. Note, if we test over 9dierent discount factors, such as G=f0:1;0:2;:::;
ev=0:9g,
and 5 di erent (l=5)MDPs ˜M(including ˆM), then we solvejj9l=45MDPs to enrich the
set of candidate policies within this approach.
1The trivial MDP ˆMis a straightforward MDP estimate using the batch B. For instance, in the case of a discrete MDP
this is equivalent to the model that maximizes the likelihood of B,i.e.the one whose transition probabilities are obtained
from the frequencies of transitions in the batch.
12
4.2. Theoretical guarantees
Since EvC searches for the policy 2that maximizes the criterion of Eq. (11), the
Algorithm 2, rather than yielding a sub-optimal solution to the Risk-aware BMDP, can be seen
as a policy selection approach. Assuming that the Bayesian posterior peciently encodes the
model uncertainty, EvC outputs a policy whose performance in the real environment is guaranteed
in probability to be greater than some value that changes with respect to the chosen risk-aware
measure. In a simpler way, we can provide theoretical guarantees on the estimate of the quantile
needed to compute the risk-aware utility function that will be eventually maximized over the set
of candidate policies.
Theorem 1. Let2be a candidate policy and u(Mg)be an estimate of the Value at Risk
ofu(M)at risk level qcalculated through EvC. Let u(M)be the performance of with M
distributed according to the Bayesian posterior p. The performance of in this MDP Mis
greater than the estimate of a qwith probability:
Pr u(M)u(Mg)(1 q)(1 ): (16)
Proof. Note thatu(M)aq \aqu(Mg) u(M)u(Mg) , where aqdenotes the
Value at Risk of u(M)at risk level q. The two events of the intersection respectively depend on
two independent random variables – a future performance u(M), that could be obtained by acting
according to the policy , and a Value at Risk estimate u(Mg), whose randomness is the result of
the sampling procedure in the Algorithm 1. The previous inclusion allows writing Pr u(M)
u(Mg)Pr u(M)aqPr aqu(Mr)(1 q)Pr aqu(Mg)(1 q)(1 ). The
last inequality is ensured by the quantile estimation (lines 19-22 in Algorithm 1), and the previous
one by the definition of aq. Therefore, we get the equation 16.
Remark. When the risk-aware measure used in EvC is VaR the lower bound on u(M)(u(Mg))
in the Proof of Theorem 1 is maximized over the policies. If the risk-aware measure is CVaR the
empirical expected value over the q-fraction of low-performant policies is maximized.
Remark. Since u(M1)u(M2):::u(Mg), then1
gPg
i=1u(Mi)u(Mg), therefore the
same lower bound in probability is also valid for the CVaR utility function:
Pr0BBBBB@u(M)1
ggX
i=1u(Mi)1CCCCCAPr u(M)u(Mg)(1 q)(1 ): (17)
Note that the sampling procedure in the Algorithm 1 ensures that Pr ju(Mg) aqj",
with"="rel(u(ML) u(M1)). If a practitioner wants such a probabilistic bound on the
precision of the estimate of q, she/he should sample additional models N2Mfrom the posterior,
to select nindependent models such that u(N)u(Mg), where u(Mg)is given by the sampling
procedure of Algorithm 1. The new estimate of qcomputed from these nnew models benefits
from the following theorem.
Theorem 2. Let2be a candidate policy, Ni2Mbe one of the nnew sampled models
from the posterior psuch that8i;u(Ni)u(Mg), with u(Mg)calculated through EvC, and
U=1
nPn
i=1u(Ni)be the new estimate of q. This new estimate of the Conditional Value at Risk
of uat risk level q respects the following inequality:
Pr
jU qjt
2 exp0BBBBB@ 2nt2
u(Mg) 21CCCCCA+; (18)
13
with=infm2Mu(m), or any other lower bound of uas, for instance,rmin
1
. Note that0if the
reward values are known to be non-negative.
Proof. By using the law of total probability, and upper bounding some probability values by 1,
Pr
jU qjt
=Pr
jU qjt8i;u(Ni)aq
Pr(8i;u(Ni)aq)
+Pr
jU qjt9is.t.u(Ni)>aq
Pr 9is.t.u(Ni)>aq
Pr
jU qjt8i;u(Ni)aq
+Pr 9is.t.u(Ni)>aq:
The probability value on the right is lower than Pr u(Mg)>aqPr
aq<u(Mg);u(Mh)
using the inequality Pr(u(Mg)aq)>1 from lines 19-22 of Algorithm 1. What follows
only depends on the definition of qas the expected value up to aq, and Hoe ding’s inequality.
4.3. Consequences and applications
The purpose of O ine Learning is that of providing behavioral policies to be applied by
real-world automated agents. Thus reducing the risk at the expense of a longer computational
phase is not only commendable but compulsory. Will the policy obtained through MC2PS and
EvC be good or entirely-risk free? This goes beyond the theoretical guarantees provided by the
algorithms since its outputs depend not only on the characteristics of the environment and on the
set of candidate policies but also on the quality and variety of the batch. A batch of transitions
that is too small or too concentrated in the same region of the state-action space may result in
policies that, even if they are guaranteed to handle the risk better than the trivial one, can still be
catastrophic.
5. Experiments
In order to evaluate the proposed approach, we selected three small MDPs and hence easy-
to-study stochastic environments endowed with diversified characteristics: two planning envi-
ronments without absorbing states, Ring (5 states, 3 actions), and Chain (5 states, 2 actions).
The former consisting in the stabilization of the agent in a particular non-absorbing goal with
stochastic drift and the latter presenting cycles; and the Random Frozen Lake (RFL) environment,
a re-adaptation of Frozen Lake from Open AI Gym suite [ 40] (88grid world with fatal absorbing
states).
5.1. Environments’ description
Ring. This environment is described by five states: f0;:::; 4g, forming a single loop. Three
actions are possible: a,b, and c. The agent starts in state 0. The action awill move it to the state
s 1with probability 1.0 (e.g. when in 4 it moves to 3) if s=0;1;3, and with probability 0:5if it
is elsewhere. With the action bthe agent will remain in the same state with probability 0:8and
move to the left or to the right with probability 0:1if it is in state s=0;1;3, if it is in state 2or4
it will move with probability 1. The action cwill move the agent to the right with probability 0:9
and it will not move with probability 0:1if it is in state s=0;1;3. Otherwise, the same e ects
will apply, but with probability 0:5. The agent earns an immediate reward r=0:5if it moves from
2!3or4!3, and r=1for any transition 3!3. Elsewhere r=0. A graphical representation
is shown in Figure 3a.
14
Figure 3 Environments illustration.
0
1
2
4
3
Clockwise aCounter-
clockwise
c
Loop b
(a)Representation of the Ring
environment.
1
2
3
4
5a;0
a;0
a;0
a;0b;2
a;10b;2
(b)Representation of the Chain envi-
ronment. Each circle is a state, each
arrow is a transition labeled by action,
reward.
(c)Frozen Lake environment ex-
ample with grid of size 33. The
agent has to reach the goal, paying
attention to slippery (blue) states
and avoiding holes (black).
Chain. This environment was proposed in the research in [ 28] and was adapted to the present
study. There are five states with the topology of an open chain and two actions aandb. The agent
starts from the state most to the left. With action athe agent moves to the right and receives an
immediate reward r=0with probability 0:8. Once the agent is in the rightmost state, performing
the first action lets him stay there and receive a reward r=10with probability 0:8. It slips back to
the origin earning a reward r=2with probability 0:2. Action bmoves the agent to the origin state
with probability 0:8receiving a reward r=2or letting it go right with probability 0:2earning
r=0. The optimal policy consists of applying action bin the first state and action ain the others.
A representation is shown in Figure 3b.
Random Frozen Lake (RFL). The Frozen Lake Environment of the Open AI Gym suite [ 40] was
edited for this study. The agent moves in a grid world ( 88). It starts in the utmost left corner
and it must reach a distant absorbing goal state that yields a reward r=1. In the grid there are
some holes. If it falls into a hole it is blocked there and it can not move anymore, obtaining
from that moment an immediate reward r=0. Unfortunately, the field is covered with ice and
hence it is slippery. When the agent wants to move towards a nearby state it can slip with fixed
probability pand ends up in an unintended place. The grid is generated randomly assuring that
there always exists a hole-free path connecting the start and the goal. Moreover, to each couple of
action and non-terminal state (a;s)is assigned a di erent immediate reward rsampled at random
between (0;0:8)at the moment of the generation of the MDP problem. The MDP itself does
not have a stochastic reward, but the map and the rewards are randomly generated. A graphical
representation (for a 3 3 grid) is shown in Figure 3c.
5.2. Setup
Given (n;m)2N2,mtrajectories, with nsteps each, are generated following a random policy
in each environment. We opted for a random data collecting procedure because we imagine using
EvC in a scenario where both the developers and the autonomous agent are completely agnostic
about the model dynamics and have no prior knowledge.
The true environment is assumed to be known for the a posteriori evaluation. The most likely
transition model is inferred from the batch. The trivial MDP was then solved with the Policy
15
Table 1: Parameters and hyperparameters used during the simulations: nis the number of steps in each trajectory contained
in a batch; lis the number of di erent models sampled from the prior in EvC (Algorithm 2); fN^gis the set of di erent
thresholds used in SPIBB; fold andDOF are the fold and degree of freedom hyper-parameters used in BOPAH; is the
soft robust hyper-parameter of NORBU. Bold values are displayed in the plots.
Environment n l fN^g fold DOF
Ring 8 3 f1;2;3;5;7;10;20g 2 20 0 :5
Chain 8 3 f1;2;3;5;7;10;20g 2 20 0 :5
RFL (88) 15 10f1;2;3;5;7;10;20g 2 20 0 :5
Iteration algorithm and its relative performance in the true environment is obtained by Policy
Evaluation. EvC data was computed with 0:25anda0:25(the first quartile). For each of these
risk-aware measures, the following parameters (see Algorithm 2) were used: the set of discount
factors G=f0:2;0:4;0:6;0:8;0:9g, the significance level =0:01, the relative tolerance error
"rel=0:01, and the number lof dierent models sampled from the prior is given in Table 1.
In the experiments, for a given batch size N=nm2N,50dierent batches were generated
containing fixed size trajectories. The trajectory sizes used are also given in Table 1.
The chosen state-of-the-art algorithms that provide the base for the set of candidate policies
are the following:
1.Deterministic policies: output by the following baselines1, please notice that the quantile
used for the robust and soft robust objectives in the algorithms is the same provided as
general input for the estimate of EvC: BCR [22],NORBU - Soft Robust CVaR [24] (soft
robust hyperparameter =0:5);
2.Stochastic policies: output by the following algorithms2:SPIBB [3] receiving as input the
batch collector policy, and, BOPAH [7] receiving as input the batch collector policy.
In our implementation of these baselines we only used intuitively tunable parameters (e.g. the
discount factor).
Remark. We did not use MOPO [ 9] and MOReL [ 10] since: (1) they have usually been tested
on continuous state MDPs driven by deterministic dynamics, while here we are tackling non-
deterministic environments; (2) they highly rely on hyperparameter domain-dependent fine tuning
which we did not do to fulfill the o ine learning obligation.
In the evaluation phase, the discount factor is defined as
ev=0:9. The others simulation
parameters are provided in Table 1. Eventually, we also compared EvC with UnO by performing
the risk-sensitive o -policy evaluation with UnO over the same set of candidate policies provided
to EvC and then selected the one that maximized the risk-sensitive objective. While it is true that
UnO, as other Importance Sampling based o -policy evaluation methods, should not be able to
accurately evaluate deterministic policy, we still compare our approach to it because there are no
other risk-sensitive o -policy evaluation approaches of our knowledge.
5.3. Metrics
We report metrics about the performance di erences U=u
; utrivial
;of the policies
obtained with a specific algorithm (Eq. (3) using the utility function defined in Eq. (11)) and the
1The code was taken from the authors’ Github repository: https: //github.com /marekpetrik /craam2 /tree/master /examples /
evaluation /algorithms and readapted.
2The code was taken from the Github repository: https: //github.com /KAIST-AILab /BOPAH and readapted.
16
performance obtained by solving the trivial model in the same setting and using the same batch of
trajectories. This last value is normalized by the performance of the optimal policy. In particular,
we consider: (1) the maximal Uobtained, (2) the mean value over all the di erent simulations,
(3) the median over all simulations, and (4) the minimal U. The selected metrics provide insight
into the validity of the approaches. We consider only the extrema of the distributions of the results
(min, max), their median, and mean values since trying to estimate the whole distributions, and
hence their quantiles could result in wrong conclusions if we are not sampling enough batches.
For instance, in order to correctly estimate the Value at Risk at risk level q=0:25with a=0:01
significance usually tens of thousands of samples are required. However, we are performing only
hundreds of simulations with a fixed batch size N, which is enough for the selected metrics but
definitely insu cient for the study of the whole distribution.
Remark. Please notice that the distribution whose statistics are displayed in the tables is not the
one used to maximize Eq. (11) since it is a distribution over di erent starting batches collected
with the same random policy and not the distribution that encodes the model uncertainty using the
same starting dataset. Indeed, from a bayesian point of view the results are distributed along:
Pr(u(M);Bjrandom )=Pr
u(M)jp
Pr
pjB
Pr(Bjrandom ); (19)
that represents the probability of collecting a batch Bby collecting transitions using a random
policyrandom and hence observing the performance u(M)by deploying a policy . Note that
there is a deterministic mapping among the posterior pand the batch, therefore Pr
pjB
is a
delta function.
5.4. Results and Discussion
For Ring and Chain, the results averaged over 100 di erent batches for each batch size
N2f8;16;24;32;40;48;56gare displayed in Table 2. While for RFL the results averaged over
50 dierent batches for every batch size N2f15;30;45;60;75;90;105gare reported in Table 3.
Even if the datasets are composed of relatively short trajectories (Ring and Chain n=8 time
steps each, Random Frozen Lake n=15time steps each) in most cases UnO does not manage to
evaluate the deterministic policies. Please note that UnO computes the Importance Sampling ratio
for a trajectory h, a policyand a behavioral policy as
h=nhY
i=1(si;ai)
(si;ai)(20)
where nhis the number of time steps of the trajectory h. However, this formulation assumes
that bothandare stochastic. In our formulation (s;a)=jAj 18(s;a)2SA , butis
stochastic only when it is the output of SPIBB or BOPAH. When is deterministic, the former
equation can be rewritten as
h=nhY
i=1(si);ai
(si;ai)=jAjnhnhY
i=1(si);ai: (21)
This means that
h=jAjnhif and only if all sequence of actions and states is consistent with the
deterministic policy , otherwise
h=0. It goes without saying that the probability that the ratio
will be zero grows as a power of jAjand exponentially in nh. In particular, the probability that a
sequence will be generated by the deterministic policy is in Ring jAj nh=3 81:510 4, in
17
Chain 2 83:910 3and in RFL 4 159:310 10. Therefore, almost always UnO will pick
a policy among SPIBB and BOPAH since the Importance Sampling ratio will be zero for other
policies. If even the output of SPIBB and BOPAH will result in a zero Importance Sampling ratio,
then the first policy in the candidate set (the trivial policy) will be picked. The said phenomenon
is what happens most of the time. Therefore UnO alternates among the Trivial Policy, one among
SPIBB and BOPAH, and once in a while it selects another approach.
Ring. Using q=0:25the best method according to the Max,Mean and the Median is NORBU
with the CVaR Soft Robust objective (see Table 2). However, the most robust baseline in terms of
worst-case performance is BOPAH. The distributions of results are asymmetric around U. In
the cases of BCR and NORBU the Mean and the Median are approximately zero. In the cases
of SPIBB and BOPAH the Median and the Mean are less than zero. Regarding the o -policy
evaluation and selection methods, EvC with the VaRis the most performing one with respect to
all the considered metrics.
Chain. In this environment every baseline except for SPIBB works the same with SPIBB being
the worst in terms of Min (see Table 2). Regarding the o -policy evaluation and selection, all
algorithms perform well since there is not really a substantial di erence between the approaches
(except for SPIBB).
Random Frozen Lake (RFL). We test the approaches in 4 di erent RFLs. The best approach in
terms of overall metrics in 3 environments over 4 is again NORBU with the Soft Robust CVaR (see
Table 3). SPIBB is the best in Environment 4. The best selection method is EvC with VaR/CVaR
(aq/q) which provides identical performances.
Table 2: Statistics of the normalized performance di erence Ubetween the reported algorithm (risk level q=0:25) and
the trivial policy averaged over batch size N2f8;16;24;32;40;48;56gwith 100dierent batches for size in Ring and
Chain. On the right Uwith the algorithm selected by EvC and UnO with a0:25and0:25. Notice that both EvC and UnO
can pick also a policy obtained with a model solved with a di erent discount factor.
Baseline Selection Method
Environment Metrics SPIBB BOPAH BCR NORBU EvC a0:25EvC0:25UnO a0:25UnO0:25
Max 0.61 0.48 0.74 0.84 0.82 0.71 0.82 0.72
Ring Mean -0.29 -0.28 -0.01 0.03 0.01 -0.04 -0.26 -0.27
Median -0.31 -0.34 0.0 0.0 0.0 0.0 -0.27 -0.33
Min -0.78 -0.68 -0.82 -0.71 -0.82 -0.82 -0.96 -0.96
Max 0.55 0.54 0.55 0.55 0.55 0.55 0.54 0.54
Chain Mean 0.0 0.01 0.01 0.02 0.01 0.01 0.01 0.01
Median -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01 -0.01
Min -0.38 -0.16 -0.15 -0.15 -0.16 -0.16 -0.16 -0.16
In the following, we comment on the results obtained with EvC. Note this algorithm selects
the policy that optimizes the (Conditional) Value at Risk over the first quartile ( q=0:25) starting
from the set of candidate policies discussed in the last section.
In terms of risk awareness, after a global study over di erent batch sizes, EvC does not select
the policy that produces the best values with respect to the considered metrics. Nevertheless, the
policy selected by EvC is between the more robust ones. These results are shown in Figures 4,
5, and 6. In particular, our approach tends to opt for a policy from the ones obtained by solving
several models with di erent discount factors
when the batch is small. The number of times
such a policy is selected decreases to the benefit of 1) the trivial policy when the batch size N
18
Table 3: Statistics of the normalized performance di erence Ubetween the reported algorithm (quantile order used
q=0:25) and the trivial policy averaged over batch size N2f15;30;45;60;75;90;105;120;135gwith 50dierent batches
for size in di erent Random Frozen Lake environments.
Baseline Selection Method
Environment Metrics SPIBB BOPAH BCR NORBU EvC a0:25EvC0:25UnO a0:25UnO0:25
Max 0.32 0.31 0.31 0.32 0.3 0.32 0.37 0.32
RFL Mean 0.05 -0.04 -0.04 0.05 0.05 0.05 -0.02 -0.05
Env. 1 Median 0.04 -0.07 -0.04 0.04 0.04 0.04 -0.01 -0.08
Min -0.25 -0.22 -0.39 -0.33 -0.33 -0.33 -0.31 -0.22
Max 0.33 0.22 0.3 0.34 0.34 0.34 0.28 0.18
RFL Mean 0.02 -0.07 -0.05 0.06 0.06 0.06 0.0 -0.07
Env. 2 Median 0.01 -0.08 -0.06 0.06 0.06 0.06 -0.01 -0.08
Min -0.21 -0.22 -0.29 -0.12 -0.12 -0.12 -0.28 -0.26
Max 0.3 0.23 0.43 0.36 0.36 0.36 0.35 0.18
RFL Mean 0.01 -0.08 0.0 0.04 0.04 0.04 -0.03 -0.08
Env. 3 Median -0.0 -0.09 0.01 0.02 0.02 0.02 -0.03 -0.09
Min -0.16 -0.3 -0.36 -0.27 -0.27 -0.27 -0.29 -0.26
Max 0.32 0.22 0.36 0.31 0.31 0.31 0.27 0.22
RFL Mean 0.02 -0.06 0.01 0.05 0.05 0.05 -0.05 -0.06
Env. 4 Median 0.02 -0.06 0.01 0.05 0.05 0.05 -0.05 -0.06
Min -0.32 -0.3 -0.4 -0.29 -0.29 -0.29 -0.39 -0.3
Figure 4 Policy selection rate by EvC a0:25and EvC0:25in Ring for di erent batch sizes.
0 20 40 60 80 100a0:25N=8a0:25N=24a0:25N=40a0:25N=56a0:25N=72a0:25N=880:25N=80:25N=240:25N=400:25N=560:25N=720:25N=88
Trivial
BCR
SPIBB
BOPAH
NORBU
Dierent
’s
Figure 5 Policy selection rate by EvC a0:25and EvC0:25in Chain for di erent batch sizes.
0 20 40 60 80 100a0:25N=8a0:25N=24a0:25N=40a0:25N=56a0:25N=72a0:25N=880:25N=80:25N=240:25N=400:25N=560:25N=720:25N=88
Trivial
BCR
SPIBB
BOPAH
NORBU
Dierent
’s
19
Figure 6 Policy selection rate by EvC a0:25and EvC0:25in RFL (aggregate of Env. 1, 2, 3 and
4) for di erent batch sizes.
0 20 40 60 80 100a0:25N=15a0:25N=45a0:25N=75a0:25N=105a0:25N=1350:25N=150:25N=450:25N=750:25N=1050:25N=135
Trivial
BCR
SPIBB
BOPAH
NORBU
Dierent
’s
increases (Ring and Chain) or 2) NORBU (in the RFL environment). This is reasonable since
model uncertainty decreases with Nand the trivial model will be closer and closer to the true
one. We suppose that for not-so-small environments (RFL) the trivial policy can not be trusted
for small batch sizes while NORBU manages to cut the posterior space in ambiguity sets that are
eciently optimized over. The policies computed through SPIBB and BOPAH are never selected.
Remember that those are stochastic policies that were obtained by improving the batch collector
policy that was uniformly random over the actions. Stochastic policies seem not to provide good
risk-aware estimates with respect to risk-aware BMDP criteria defined in Eq. (11) and also require
sampling more models in order, for the method, to estimate a quantile with the needed accuracy.
Another interesting e ect reported in Ring is that for N=8the trivial policy is picked a
considerable amount of times. Both in Ring and in Chain EvC selects more often the output of
BCR rather than that of NORBU, even though NORBU in the end is slightly the most performing
according to Table 2. In RFL only the policies computed through solving di erent models
sampled from the posterior with di erent
’s and NORBU are selected. The first kind of policy is
preferred when the batch is very small N=15, however, the ratio inverts already for N=45with
NORBU that gets more and more chosen with Ngrowing. Both the Trivial policy and the one
returned by BCR are always discarded, stressing the superiority of NORBU in this environment
typology. Surprisingly, EvC never selects SPIBB nor BOPAH not even in RFL despite its good
performance. This is due probably to the di culty in estimating the quantiles of the performance
of a non-deterministic policy such as the output of SPIBB. The algorithm would require a number
of sampled models higher than the bail-out hyperparameter.
6. Conclusion and future work
This work presents EvC, a method to first evaluate and then select the best risk-aware policies
within a set of candidate policies in the context of O ine solutions to Risk-aware Bayesian
MDPs. The Risk-aware BMDP defines an elegant mathematical framework that balances the
exploitation-caution trade-o in oine model-based sequential decision-making under uncertainty.
The set of candidate policies exploited by EvC contains the strategies obtained by solving not
only the trivially learned MDP but also other MDPs with transition dynamics sampled from
the Bayesian posterior (e.g. the one shown in Eq. (4)) using di erent discount factors and the
solutions of current o ine MDP and RL solvers (SPIBB, BOPAH, BCR, NORBU). The estimate
20
of risk in the presented algorithm provides a probabilistic guarantee for the actual performance of
the resulting policy described in Theorem 1 and Theorem 2. The selected solution maximizes the
risk-aware utility function of Eq. (11). Since EvC is based on the parallel resolution of a great
number of models sampled from the Bayesian posterior we doubt that it could e ciently scale
to select policies for MDPs with a great number of states and actions. However, the presented
approach should be considered a valuable tool to be exploited for real-world problem-solving
through MDP modeling. In such a case time is an a ordable resource since the safety of possible
humans in the loop would be the priority.
In the future, we aim to improve EvC’s method of generation of the set of candidate policies.
An interesting direction consists in incrementally enriching the set of candidate policies following
some kind of heuristics, e.g. policy improvement by genetic algorithms. An extension to compute
robust policies for data-driven POMDPs could be envisaged whether a consistent representation
of the model uncertainty can be formalized.
Acknowledgments
This work is supported by the Artificial and Natural Intelligence Toulouse Institute (ANITI) -
Institut 3iA (ANR-19-PI3A-0004).
Code availability
The code for the experiments is open and available in the Github repository: https: //github.
com/giorgioangel /evc
References
[1]A. Jonsson, Deep Reinforcement Learning in Medicine, Kidney Diseases 5 (2018) 1–5. doi:10.1159/000492670 .
[2]B. Mirchevska, C. Pek, M. Werling, M. Altho , J. Boedecker, High-level Decision Making for Safe and Reasonable
Autonomous Lane Changing using Reinforcement Learning, in: 2018 21st International Conference on Intelligent
Transportation Systems (ITSC), 2018, pp. 2156–2162. doi:10.1109/ITSC.2018.8569448 .
[3]R. Laroche, P. Trichelair, R. T. Des Combes, Safe Policy Improvement with Baseline Bootstrapping, in: International
Conference on Machine Learning, PMLR, 2019, pp. 3652–3661.
[4]S. Fujimoto, E. Conti, M. Ghavamzadeh, J. Pineau, Benchmarking Batch Deep Reinforcement Learning Algorithms
(2019). arXiv:1910.01708 .
[5]A. Kumar, J. Fu, M. Soh, G. Tucker, S. Levine, Stabilizing O -Policy Q-Learning via Bootstrapping Error Reduction,
in: Advances in Neural Information Processing Systems 32, 2019, pp. 11784–11794.
[6]Y . Wu, G. Tucker, O. Nachum, Behavior Regularized O ine Reinforcement Learning (2019). arXiv:1911.11361 .
[7]B. Lee, J. Lee, P. Vrancx, D. Kim, K.-E. Kim, Batch Reinforcement Learning with Hyperparameter Gradients, in:
International Conference on Machine Learning, PMLR, 2020, pp. 5725–5735.
[8]J. Chen, N. Jiang, Information-Theoretic Considerations in Batch Reinforcement Learning, in: Proceedings of
Machine Learning Research, V ol. 97, PMLR, Long Beach, California, USA, 2019, pp. 1042–1051.
[9]T. Yu, G. Thomas, L. Yu, S. Ermon, J. Y . Zou, S. Levine, C. Finn, T. Ma, MOPO: Model-based O ine Policy
Optimization, in: Advances in Neural Information Processing Systems, V ol. 33, 2020, pp. 14129–14142.
[10] R. Kidambi, A. Rajeswaran, P. Netrapalli, T. Joachims, MOReL: Model-Based O ine Reinforcement Learning, in:
Advances in Neural Information Processing Systems, V ol. 33, 2020, pp. 21810–21823.
[11] S. Levine, A. Kumar, G. Tucker, J. Fu, O ine Reinforcement Learning: Tutorial, Review, and Perspectives on Open
Problems (2020). arXiv:2005.01643 .
[12] A. Kumar, A. Zhou, G. Tucker, S. Levine, Conservative q-learning for o ine reinforcement learning, Advances in
Neural Information Processing Systems 33 (2020) 1179–1191.
[13] T. L. Paine, C. Paduraru, A. Michi, C. Gulcehre, K. Zolna, A. Novikov, Z. Wang, N. de Freitas, Hyperparameter
selection for o ine reinforcement learning, arXiv preprint arXiv:2007.09055 (2020).
21
[14] R. Munos, C. Szepesvári, Finite-time bounds for fitted value iteration., Journal of Machine Learning Research 9 (5)
(2008).
[15] H. Le, C. V oloshin, Y . Yue, Batch policy learning under constraints, in: International Conference on Machine
Learning, PMLR, 2019, pp. 3703–3712.
[16] S. Zhang, N. Jiang, Towards hyperparameter-free policy selection for o ine reinforcement learning, Advances in
Neural Information Processing Systems 34 (2021).
[17] C.-H. H. Yang, Z. Qi, Y . Cui, P.-Y . Chen, Pessimistic model selection for o ine deep reinforcement learning, arXiv
preprint arXiv:2111.14346 (2021).
[18] A. Nilim, L. El Ghaoui, Robust Control of Markov Decision Processes with Uncertain Transition Matrices,
Operations Research 53 (5) (2005) 780–798. arXiv:https://doi.org/10.1287/opre.1050.0216 ,doi:
10.1287/opre.1050.0216 .
[19] G. N. Iyengar, Robust Dynamic Programming, Mathematics of Operations Research 30 (2) (2005) 257–280.
arXiv:https://doi.org/10.1287/moor.1040.0129 ,doi:10.1287/moor.1040.0129 .
[20] E. Delage, S. Mannor, Percentile optimization for markov decision processes with parameter uncertainty, Operations
research 58 (1) (2010) 203–213.
[21] M. Petrik, M. Ghavamzadeh, Y . Chow, Safe Policy Improvement by Minimizing Robust Baseline Regret, Advances
in Neural Information Processing Systems 29 (2016) 2298–2306.
[22] M. Petrik, R. H. Russel, Beyond Confidence Regions: Tight Bayesian Ambiguity Sets for Robust MDPs, in:
Advances in Neural Information Processing Systems 32, V ol. 32, 2019.
[23] B. Behzadian, R. Hasan Russel, M. Petrik, C. Pang Ho, Optimizing Percentile Criterion using Robust MDPs, in:
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, V ol. 130 of Proceedings
of Machine Learning Research, PMLR, 2021, pp. 1009–1017.
[24] E. A. Lobo, M. Ghavamzadeh, M. Petrik, Soft-robust algorithms for batch reinforcement learning (2021). arXiv:
2011.14495 .
[25] N. Jiang, A. Kulesza, S. Singh, R. Lewis, The dependence of e ective planning horizon on model accuracy, in:
Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, 2015, pp.
1181–1189.
[26] D. Precup, Eligibility traces for o -policy policy evaluation, Computer Science Department Faculty Publication
Series (2000) 80.
[27] Y . Chandak, S. Niekum, B. C. da Silva, E. Learned-Miller, E. Brunskill, P. S. Thomas, Universal o -policy evaluation,
Advances in Neural Information Processing Systems 34 (2021).
[28] M. Strens, A Bayesian framework for Reinforcement Learning, in: In Proceedings of the Seventeenth International
Conference on Machine Learning, ICML, 2000, pp. 943–950.
[29] A. Sharma, J. Harrison, M. Tsao, M. Pavone, Robust and Adaptive Planning under Model Uncertainty, Proceedings
of the International Conference on Automated Planning and Scheduling 29 (1) (2019) 410–418.
[30] S. Depeweg, J. M. Hernández-Lobato, F. Doshi-Velez, S. Udluft, Uncertainty decomposition in bayesian neural
networks with latent variables, arXiv preprint arXiv:1706.08495 (2017).
[31] S. Depeweg, J.-M. Hernandez-Lobato, F. Doshi-Velez, S. Udluft, Decomposition of uncertainty in bayesian deep
learning for e cient and risk-sensitive learning, in: International Conference on Machine Learning, PMLR, 2018,
pp. 1184–1193.
[32] L. N. Steimle, D. L. Kaufman, B. T. Denton, Multi-model markov decision processes, IISE Transactions 53 (10)
(2021) 1124–1139.
[33] J. R. Rice, The algorithm selection problem, in: Advances in computers, V ol. 15, Elsevier, 1976, pp. 65–118.
[34] R. Laroche, R. Feraud, Reinforcement learning algorithm selection, in: International Conference on Learning
Representations 2018, 2018.
[35] A. Majumdar, M. Pavone, G. Hager, S. Thomas, M. Torres-Torriti, How Should a Robot Assess Risk? Towards an
Axiomatic Theory of Risk in Robotics, in: Robotics Research, Springer International Publishing, Cham, 2020, pp.
75–84.
[36] Mausam, A. Kolobov, Planning with Markov Decision Processes: An AI perspective, Synthesis Lectures on Artificial
Intelligence and Machine Learning 6 (1) (2012) 1–210.
[37] P. Artzner, F. Delbaen, E. Jean-Marc, D. Heath, Coherent Measures of Risk, Mathematical Finance 9 (1999) 203 –
228. doi:10.1111/1467-9965.00068 .
[38] R. T. Rockafellar, S. Uryasev, Conditional value-at-risk for general loss distributions, Journal of banking & finance
26 (7) (2002) 1443–1471.
[39] K. Briggs, F. Ying, How to estimate quantiles easily and reliably, Mathematics Today 2018 (February) (2018) 26–29.
[40] G. Brockman, V . Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, W. Zaremba, OpenAI Gym (2016).
arXiv:1606.01540 .
22 |
b39779f8-a1b9-413e-951b-1653c050e8cc | StampyAI/alignment-research-dataset/lesswrong | LessWrong | IMO challenge bet with Eliezer
Eliezer and I publicly stated some predictions about AI performance on the IMO by 2025. In honor of OpenAI's post [Solving (Some) Formal Math Problems](https://openai.com/blog/formal-math/), it seems good to publicly state and clarify our predictions, have a final chance to adjust them, and say a bit in advance about how we'd update.
The predictions
---------------
Eliezer and I had [an exchange in November 2021](https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds?commentId=8bgtz5RnyvRyrtJEb#comments).[[1]](#fnqpupmurdl9e) My final prediction (after significantly revising my guesses after looking up IMO questions and medal thresholds) was:
> I'd put 4% on "For the 2022, 2023, 2024, or 2025 IMO an AI built before the IMO is able to solve the single hardest problem" where "hardest problem" = "usually problem #6, but use problem #3 instead if either: (i) problem 6 is geo or (ii) problem 3 is combinatorics and problem 6 is algebra." (Would prefer just pick the hardest problem after seeing the test but seems better to commit to a procedure.)
>
> Maybe I'll go 8% on "gets gold" instead of "solves hardest problem."
>
>
Eliezer spent less time revising his prediction, but said (earlier in the discussion):
> My probability is *at least* 16% [on the IMO grand challenge falling], though I'd have to think more and Look into Things, and maybe ask for such sad little metrics as are available before I was confident saying how much more. Paul?
>
> **EDIT**: I see they want to demand that the AI be open-sourced publicly before the first day of the IMO, which unfortunately sounds like the sort of foolish little real-world obstacle which can prevent a proposition like this from being judged true even where the technical capability exists. I'll stand by a >16% probability of the technical capability existing by end of 2025
>
>
So I think we have Paul at <8%, Eliezer at >16% for AI made before the IMO is able to get a gold (under time controls *etc.* of grand challenge) in one of 2022-2025.
Separately, we have Paul at <4% of an AI able to solve the "hardest" problem under the same conditions.
I don't plan to revise my predictions further, but I'd be happy if Eliezer wants to do so any time over the next few weeks.
Earlier in the thread I clarified that my predictions are specifically about gold medals (and become even sharper as we move to harder problems), I am not surprised by silver or bronze. My guess would be that Eliezer has a more broad distribution. The comments would be a good place for Eliezer to state other predictions, or take a final chance to revise the main prediction.
How I'd update
--------------
The informative:
* I think the IMO challenge would be significant direct evidence that powerful AI would be sooner, or at least would be technologically possible sooner. I think this would be fairly significant evidence, perhaps pushing my 2040 TAI probability up from 25% to 40% or something like that.
* I think this would be significant evidence that takeoff will be limited by sociological facts and engineering effort rather than a slow march of smooth ML scaling. Maybe I'd move from a 30% chance of hard takeoff to a 50% chance of hard takeoff.
* If Eliezer wins, he gets 1 bit of epistemic credit.[[2]](#fnnn7wyjzht2l)[[3]](#fnk5re0bhzmsm) These kinds of updates are slow going, and it would be better if we had a bigger portfolio of bets, but I'll take what we can get.
* This would be some update for Eliezer's view that "the future is hard to predict." I think we have clear enough pictures of the future that we have the right to be surprised by an IMO challenge win; if I'm wrong about that then it's general evidence my error bars are too narrow.
The uninformative:
* This is mostly just a brute test of a particular intuition I have about a field I haven't ever worked in. It's still interesting (see above), but it doesn't bear that much on deep facts about intelligence (my sense is that Eliezer and I are optimistic about similar methods for theorem proving), or heuristics about trend extrapolation (since we have ~no trend to extrapolate), or on progress being continuous in crowded areas (since theorem proving investment has historically been low), or on lots of pre-singularity investment in economically important areas (since theorem proving is relatively low-impact). I think there are lots of other questions that *do* bear on these things, but we weren't able to pick out a disagreement on any of them.
If an AI wins a gold on some but not all of those years, without being able to solve the hardest problems, then my update will be somewhat more limited but in the same direction. If an AI wins a bronze/silver medal, I'm not making any of these updates and don't think Eliezer gets any credit unless he wants to stake some predictions on those lower bars (I consider them much more likely, maybe 20% for "bronze or silver" vs 8% on "gold," but that's less well-considered than the bets above, but I haven't thought about that at all).
1. **[^](#fnrefqpupmurdl9e)**We also looked for claims that Eliezer thought were very unlikely, so that he'd also have an opportunity to make some extremely surprising predictions. But we weren't able to find any clean disagreements that would resolve before the end of days.
2. **[^](#fnrefnn7wyjzht2l)**I previously added the text: "So e.g. if Eliezer and I used to get equal weight in a mixture of experts, now Eliezer should get 2x my weight. Conversely, if I win then I should get 1.1x his weight." But I think that really depends on how you want to assign weights. That's a very natural algorithm that I endorse generally, but given that neither of us really has thought carefully about this question it would be reasonable to just not update much one way or the other.
3. **[^](#fnrefk5re0bhzmsm)**More if he chooses to revise his prediction up from 16%, or if he wants to make a bet about the "hardest problem" claim where I'm at 4%. |
3002628f-e8b6-41c7-8575-66a34b27cbe4 | trentmkelly/LessWrong-43k | LessWrong | The ‘strong’ feature hypothesis could be wrong
NB. I am on the Google Deepmind language model interpretability team. But the arguments/views in this post are my own, and shouldn't be read as a team position.
> “It would be very convenient if the individual neurons of artificial neural networks corresponded to cleanly interpretable features of the input. For example, in an “ideal” ImageNet classifier, each neuron would fire only in the presence of a specific visual feature, such as the color red, a left-facing curve, or a dog snout”
> - Elhage et. al, Toy Models of Superposition
Recently, much attention in the field of mechanistic interpretability, which tries to explain the behavior of neural networks in terms of interactions between lower level components, has been focussed on extracting features from the representation space of a model. The predominant methodology for this has used variations on the sparse autoencoder, in a series of papers inspired by Elhage et. als. model of superposition.It's been conventionally understood that there are two key theories underlying this agenda. The first is the ‘linear representation hypothesis’ (LRH), the hypothesis that neural networks represent many intermediates or variables of the computation (such as the ‘features of the input’ in the opening quote) as linear directions in it’s representation space, or atoms[1]. And second, the theory that the network is capable of representing more of these ‘atoms’ than it has dimensions in its representation space, via superposition (the superposition hypothesis).
While superposition is a relatively uncomplicated hypothesis, I think the LRH is worth examining in more detail. It is frequently stated quite vaguely, and I think there are several possible formulations of this hypothesis, with varying degrees of plausibility, that it is worth carefully distinguishing between. For example, the linear representation hypothesis is often stated as ‘networks represent features of the input as directions in representation space’. H |
4bddd547-57bb-4986-9866-7260f90ec61d | trentmkelly/LessWrong-43k | LessWrong | Automated theorem proving by learning from examples
Does anyone know of work that attempts to build a theorem prover by learning-from-examples? I'm imagining extracting a large corpus of theorems from back issues of mathematical journals, then applying unsupervised structure discovery techniques from machine learning to discover recurring patterns.
Perhaps a model of the "set of theorems that humans tend to produce" would be helpful in proving new theorems.
The unsupervised-structure-discovery bit does seem within the realm of current machine learning.
Any references to related work? |
e99dc917-6fbf-4d2d-a4e7-8b79b675e1dc | trentmkelly/LessWrong-43k | LessWrong | A Conflicted Linkspost
Over the last couple of years, I’ve been trying to skill up a lot at resolving community complaints. This is a really irritating field to get good at. When I want to get better at writing code, I can sit down and write more code more or less whenever I feel like it. When I want to get better at guitar, I can sit down with my guitar and practice that D to D7 transition. For complaint resolution, even finding people to roleplay the skill with takes a little setup, and that’s a lot less like the real thing than you can get with code or guitar. For real complaints, they come in uneven batches, and worst, you seldom get confirmation if you did it successfully. So, like any good Aspiring Ravenclaw Rationalist, I read a lot of essays about conflict.
Taking a Safety Report, Taking Someone Aside, and Outcomes of a Safety Report provide a useful frame and better still useful steps and considerations on what to do when someone comes to you with a concern about an attendee. I keep being tempted to rewrite these in a numbered list, but that’s a stylistic preference. While the topic is very different, I found A Better Way To Ask for An Apology to be similarly easier to turn into concrete steps, and it feels like it has a useful overlap with the previous three. To have a good apology, you need to be specific. If you take only one sentence away from this links post, I want it to be “It is also important to clearly distinguish between things you've observed, things you've heard secondhand, and your interpretations of your observations.”
When to Ban Someone is a discussion of the decision to ban people from events. It has a fairly specific stance on where these decisions should come from, but wears that on its sleeve in a way I appreciate. Decentralized Exclusion goes over the situation of distributed communities banning people, even without formal structures. I like the way it looks at the constraints of such communities, but I’m not sure how I feel about the reliance on singular |
9363ed35-045e-4919-a1f0-648db73dd910 | trentmkelly/LessWrong-43k | LessWrong | Help the Brain Preservation Foundation
(First time poster, long time reader)
I'm currently volunteering for the Brain Preservation Foundation (http://www.brainpreservation.org/), and I'd like to ask for your help.
The purpose of the BPF is to incentivize and evaluate the development of technology which can preserve a human brain in such intricate detail that all of the brain's cells and connections are preserved. It's the only prize of its kind for a relatively endangered, yet essential type of research.
We run a cash prize ($100,000 USD) called the "Brain Preservation Technology Prize" for the first team that can preserve a large mammal's brain to our high standards. The first $25,000 of that prize goes to the first team that can preserve the ultrastructure of a mouse brain.
Steve Aoki (http://steveaoki.com/), a musician that you might have heard of, is currently planning to give around $50,000 to one of four brain-related charities. One of these charities is the Brain Preservation Foundation! Whichever charity gets the most votes will win all the money.
This money is critically important to us to get the necessary supplies and lab time to administer the brain preservation technology prize. Evaluating brains that people send us involves electron microscopy, which is quite expensive (around $8,000 to evaluate a brain!) We are currently getting submissions and this extra money will give us the funds we need to run the prize.
To vote, just visit http://on.fb.me/15XFdTG, and click the "like" button by the "Brain Preservation Foundation" comment. You can see a graph of the votes at http://aurellem.org/bpf/votes.png (updates every 15 minutes). Thanks for taking the time to read
and vote!
More about the Brain Preservation Foundation :
http://www.brainpreservation.org/
More about the charity:
https://www.facebook.com/photo.php?fbid=10151608608587461
Votes graph:
http://aurellem.org/bpf/votes.png
I'd also love to hear your own opinions on the BPF and your assessment of its effectiveness, as well as y |
70144b07-3123-45e1-bd88-96b1e95fc6d5 | trentmkelly/LessWrong-43k | LessWrong | Meetup : SF Meetup: Revealed New Year's Resolutions
Discussion article for the meetup : SF Meetup: Revealed New Year's Resolutions
WHEN: 09 January 2017 06:15:22PM (-0800)
WHERE: 1769 15th St., SF
It's a yearly tradition!
Instead of talking about what we've resolved to do in the New Year, we'll talk about, from looking back at what you actually did in the past year, what someone would think your New Year's resolutions were. Like revealed preferences, we're reverse-engineering what you were trying to do by looking at what you actually did.
This is an interesting exercise, and sometimes helps inform our goals for the next year and years to come.
For help getting into the building, please call (or text, with a likely-somewhat-slower response rate): 301-458-0764.
Format:
We meet and start hanging out at 6:15, but don’t officially start doing the meetup topic until 6:45-7 to accommodate stragglers. Usually there is a food order that goes out before we start the meetup topic.
About these meetups:
The mission of the SF LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome.
We explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, and we certainly don’t want them to inhibit it.
Discussion article for the meetup : SF Meetup: Revealed New Year's Resolutions |
e40f85ed-ca5d-4bad-8b38-ec4ca1ec7254 | trentmkelly/LessWrong-43k | LessWrong | Solar Production Data
We have solar panels on our house, configured so that we can have backup power in a blackout. In talking about generators and power outages I wanted to figure out how much power we might expect to be able to draw.
Our system is a bit weird. We have 14 panels, rated for 360W each. Most of them (11 of 14) are mounted on our 11:12 slope roof, facing WNW (~293°). Since we're in the Northern hemisphere this is a pretty silly direction to point panels, but our neighbors to the ESE have an enormous tree so this is the only direction sun hits our roof. They are considering cutting their tree down, since it's is interfering with their foundation, at which point I might want to put panels on the opposite side of our roof. This means this post likely generalizes even less than my usual ones, but it is at least likely a worst case.
Our system is connected to a monitoring service. While their UI is good for getting rough understanding of performance, it isn't set up to make the charts I'm interested in. It can export data, but at the highest resolution you can only do one day at a time. I automated this, and now I have data for 2018-11-13 to 2021-05-15 at one minute resolution.
Looking it over, there were twelve periods when our system was shut down due to arc faults. Since these have now been fixed by a firmware update I've excluded those periods.
The very highest value in the data is 5273W, 113% of nominal, at 16:04 on 2020-05-09:
I think this is probably not an error; this was a historically cold day (ideal for production). My guess is we were seeing an edge-of-cloud effect, where the "silver lining" of a cloud can magnify sunlight above what you'd get in direct sun.
Let's take all the minutes in the data, and sort them by production, lowest to highest:
This is a mildly tricky chart, because the raw data does not include times when the system is producing no power at all, such as the night. So it is saying, "if the system is producing power, how much power is it pr |
997dc2c2-4a40-4676-928c-74756a026acb | trentmkelly/LessWrong-43k | LessWrong | How to cheat Löb's Theorem: my second try
In his open problems talk, Eliezer explains how Löb's theorem prevents you from having a consistent proof system P with an axiom schema that anything P proves is actually true, and asks how we can then "build an AI that could completely rewrite itself, without decreasing the amount of trust it had in math every time it executed that self-rewrite" (18:46).
Recently, I posted about an attempt to apply a general trick for avoiding diagonalization problems to a minimal toy version of this problem. Since then, Wei Dai has posted an interesting quining approach to the same toy problem, and Giles had a promising idea for doing something similar in a different way and will hopefully do a write-up filling in the details. Unfortunately my own "proof" turned out to be broken.
I think I've fixed the problem and made the proof more comprehensible and intuitive in the process. (To avoid confusion, note that what I'm proving is slightly different from, though related to, what I did in the previous post.) However, getting the details right seems to be far from trivial, so I would very much appreciate if people checked my new argument, and told me that it looks okay / where it goes wrong / where they get lost. Thanks in advance!
I'll be more explicit about quoting/unquoting than before, which means I'll need to introduce some notation. However, to sustain you through the schlep of preliminaries, I thought I'd start with an informal summary.
*
Löb's theorem shows that it's inconsistent to assume BAD := Peano Arithmetic + for each statement C, the axiom "if 'C' is provable in BAD, then C". I will extend the language of PA with a constant symbol K, and consider the proof system PPT.2 := Peano Arithmetic + for each statement C, the axiom "if K>0, and 'C' is provable in PPT.2, then D", where D is the statement obtained from C by replacing each occurrence of "K" by "K-1". [PPT.2 is short for for parametric polymorphism trick, version 0.2.] There will also be a variant PPT.3, where C |
48fd1dcb-aa24-4c10-9871-0c59da31fdb2 | trentmkelly/LessWrong-43k | LessWrong | Passing through St. Louis
I'm Alex, the coordinator for the Kansas City Rationalists. I'll be passing through St Louis for brunch/lunch on May 17th, and would love to network with anyone who's available. My community is highly varied; we run a game night, a dojo, and an SSC meetup. I'd love to share my experiences if that would be valuable to you. Or we can just socialize.
Location is up to any interested parties. |
b4e4c73c-80a6-487b-8846-5ea78019d3f6 | trentmkelly/LessWrong-43k | LessWrong | Mistakes I’ve made, part 1: greedy altruism
I have made many mistakes. Unfortunately, for some of my older, larger mistakes, it becomes hard to remember or imagine why I would have made them. The alternative position comes to seem literally inconceivable. At the same time, I forget that I ever conceived of it. This is tragic, because for anything realized some way into my life, there are surely many people who don’t (yet?) share my view. Also because any insight into why I might have believed inconceivable things in the past might prevent me from believing such in the future.
So it seems a valuable exercise to recall and dissect some of my errors before they evaporate from my memory. Here’s one:
For almost all of my teenage years, I believed that small amounts of money could be used to save lives in the developing world (an error, but one for another time), but consequently collected small amounts of money where possible, rather than optimizing for long-run ability to earn money (which would also not necessarily be the best thing, but is obviously superior to greedily earning small sums). e.g. I would do chores for money instead of practicing useful skills.
Why?
One obvious possibility is that it didn’t occur to me. That would seem surprising, and I don’t think it’s true, but it might be. Also something intermediate seems plausible—like, I was not fully and abstractly aware of a trade-off between greedily accruing small amounts of money and investing in better opportunities, though it did occur to me that I could practice math instead of doing a chore and that this might help.
If we suppose I was at least somewhat aware of this option, I think I probably didn’t see how any alternative activities would genuinely improve my prospects. Especially at the five minute level. If I didn’t do this chore, it didn’t seem like I would really find something else to do, right here and now, that would cause me to earn twenty cents in the long run.
I suspect one mistake was failing to add these five minutes up, and say |
ef5613f1-ea27-462d-afef-b65c2fa252cc | trentmkelly/LessWrong-43k | LessWrong | Reasons compute may not drive AI capabilities growth
How long it will be before humanity is capable of creating general AI is an important factor in discussions of the importance of doing AI alignment research as well as discussions of which research avenues have the best chance of success. One frequently discussed model for estimating AI timelines is that AI capabilities progress is essentially driven by growing compute capabilities. For example, the OpenAI article on AI and Compute presents a compelling narrative, which shows a trend of well-known results in machine learning using exponentially more compute over time. This is an interesting model because if valid we can do some quantitative forecasting, due to somewhat smooth trends in compute metrics which can be extrapolated. However, I think there are a number of reasons to suspect AI progress to be driven more by engineer and researcher effort than compute.
I think there's a spectrum of models between:
* We have an abundance of ideas that aren't worth the investment to try out yet. Advances in compute capability unlock progress by make researching more expensive techniques economically feasible. We'll be able to create general AI soon after we have enough compute to do it.
* Research proceeds at its own pace and makes use of as much compute is convenient to save researcher time on optimization and achieve flashy results. We'll be able to create general AI once we come up with all the right ideas behind it, and either:
* We'll already have enough compute to do it
* We won't have enough compute and we'll start optimizing, invest more in compute, and possibly start truly being bottlenecked on compute progress.
My research hasn't pointed too solidly in either direction, but below I discuss a number of the reasons I've thought of that might point towards compute not being a significant driver of progress right now.
There's many ways to train more efficiently that aren't widely used
Starting October of 2017, the Stanford DAWNBench contest challenged teams |
e2c817aa-418e-4a85-b6db-f0d9cc54b89a | trentmkelly/LessWrong-43k | LessWrong | Mixed-Strategy Ratifiability Implies CDT=EDT
I provide conditions under which CDT=EDT in Bayes-net causal models.
----------------------------------------
[Epistemic status: Thanks to a discussion with Benja, I'm much less optimistic about the general thrust of this. In particular, the framework here isn't expressive enough to include XOR Blackmail. While I knew the framework would rule out some cases of interest, I was expecting to be happy to bite the bullet and say that we can't make a good story about where causality comes from in cases where EDT agents can't learn it by experimenting (IE, epsilon-exploration). But, on the contrary, XOR blackmail seems like a case where there is an intuitive notion of causality which might come from somewhere other than wishful thinking, and which differs significantly from the notion of causality which EDT agents can learn in frameworks similar to the one put forward in this post. I still have no idea where that notion of causality comes from, or whether to trust it, but XOR blackmail does seem to be a counterexample-in-spirit to the thrust of this post. More details here.]
(Cross-posted to lesserwrong.)
Previously, I discussed conditions under which LICDT=LIEDT. That case was fairly difficult to analyse, although it looks fairly difficult to get LICDT and LIEDT do differ. It's much easier to analyze the case of CDT and EDT ignoring logical uncertainty.
As I argued in that post, it seems to me that a lot of informal reasoning about the differences between CDT and EDT doesn't actually give the same problem representation to both decision theories. One can easily imagine handing a causal model to CDT and a joint probability distribution to EDT, without checking that the probability distribution could possibly be consistent with the causal model. Representing problems in Bayes nets seems like a good choice for comparing the behavior of CDT and EDT. CDT takes the network to encode causal information, while EDT ignores that and just uses the probability distribution encod |
c0164359-3db6-43e1-b0a7-3b87bff85c03 | trentmkelly/LessWrong-43k | LessWrong | Exploring vocabulary alignment of neurons in Llama-3.2-1B
(This is cross-posted from my blog at https://grgv.xyz/blog/neurons1/. I'm looking for feedback: does it makes sense at all, and if there is any novelty. Also, if the folloup questions/directions make sense)
While applying logit attribution analysis to transformer outputs, I have noticed that in many cases the generated token can be attributed to the output of a single neuron.
One way to analyze neurons activations is to collect activations from a dataset of text snippets, like in “Exploring Llama-3-8B MLP Neurons” [1]. This does show that some of the neurons are strongly activated by a specific token from the model’s vocabulary, for example see the "Android" neuron: https://neuralblog.github.io/llama3-neurons/neuron_viewer.html#0,2
Another way to analyze neurons is to apply logit lens to the MLP weights, similar to “Analyzing Transformers in Embedding Space” [2], where model parameters are projected into the embedding space for interpretation.
Projecting neurons into vocabulary space
Let’s apply logit lens to a sample of MLP output weights for layer 13 of Llama-3.2-1B:
LLAMA_3_PATH = "meta-llama/Llama-3.2-1B-Instruct"
model = HookedTransformer.from_pretrained(LLAMA_3_PATH, device="cuda", fold_ln=False, center_writing_weights=False, center_unembed=False)
def get_distance_to_tokens(weights, n, max_dot, W_U, top_n=5, print_lens=False):
for i in range(n): # over first 100 neuronsT
layer_vec = weights[i] # [d_model]
# Compute dot product with unembedding weights
unembedded = torch.matmul(layer_vec, W_U) # [d_vocab]
# Take absolute value to get strongest alignments, pos or neg
abs_unembedded = unembedded.abs()
# Get top-n tokens by absolute dot product
s_abs, idx = abs_unembedded.topk(top_n, largest=True)
results = []
for j in range(top_n):
token = model.to_string(idx[j])
score = s_abs[0].item()
results.append("{0:.3f} {1}".format( |
9f777e49-e83d-447c-820a-7007e3de0f1d | trentmkelly/LessWrong-43k | LessWrong | Making computer systems with extended Identity
We often assume that an AI will have an identity and goals of its own. That it will be some separate entity from a human being or group of humans.
In physics there are no separate entities, merely a function evolving through time. So any identity needs to be constructed by systems within physics, and the boundaries are arbitrary. We have been built by evolution and all the cells in our body have the same programming so we have a handy rule of thumb that our body is "us" as it is created by a single replicating complex. So we assume that a computational entity, if it develops a theory of self, will only include its processing elements or code and nothing else in its notion of identity. But what an system identifies with can be controlled and specifed.
If a system identifies a human as an important part of itself it will strive to protect it and its normal functioning, as we instinctively protect important parts of ourselves such as the head and genitals.
So what possible objections to this are there?
1) Humans are spatially separate from the machine so they won't consider it part of themselves
We have a habit of identifying with groups larger than our own, such as countries and integrating our goals with theirs to different extents. Spatial co-location is not required.
2) Humans are very different from computers they will see them as "other"
Different parts of the human body are very diverse, but all of it is seen as a singular entity. Spleen and all.
3) A human will do things the computer doesn't know why, so It will not see it as part of itself.
Self-knowledge is not required for self-identification. Different parts of the brain are black boxes to others, we make up explanations for why we do things, in cases like blind sight, so there is no need for all the parts of the system to be self-reflective.
So can we make advanced computational systems that consider humanity as part of them? One possible problem with this approach is that if it doesn't get infor |
1a829b59-ef58-4c3c-a8f1-d5cb7bd2d393 | trentmkelly/LessWrong-43k | LessWrong | Drug development costs can range over two orders of magnitude
This is a cross-post from my new newsletter, where I intend to post about clinical trials and biotech, and from my personal blog, where I intend to go on posting about all other sorts of topics as well.
I also recently went on the Complex Systems podcast, where I discuss these topics with more examples from actual trials in the world.
----------------------------------------
New drugs being developed can be "easy" drugs or "difficult" drugs.
In order to know whether your drug candidate is safe and effective, you're going to test it in a series of clinical trials. In each trial, you'll recruit some number of patients, give each patient the treatment, placebo, or a comparator drug, wait some time, and test them for pre-specified endpoints.
Within that framework, however the trials for different drugs will differ greatly. (And the different phases of a single drug's trials may differ by even more!) Typically, the greatest axes of variation will be:
* Who are your patients?
* How common is the indication that you're treating? How often do people go to your trial site to get treatment for it? How many of them want to be in a trial?
* Is your trial taking anyone with the disease? / Is it only for people who are not responding to some other standard treatment?
* Are your patients otherwise healthy? / Do they have elevated risks for other complications?
* What condition does the drug affect?
* Are you trying to change something that patients already have? / Are you trying to stop them from developing something else?
* If you're preventing something, what fraction of your patients will develop it without treatment?
* What change in the condition are you trying to measure? Is it yes-or-no or on a scale?
* If the drug "works", what fraction of cases will it change enough for you to measure?
* Where does the trial take place?
* Are you treating patients in a hospital? / An outpatient setting?
* Is the site specialized? / Can the trial happen |
421768f9-ecb9-4291-a7c6-6b576e1374fc | trentmkelly/LessWrong-43k | LessWrong | The Road to Evil Is Paved with Good Objectives: Framework to Classify and Fix Misalignments.
Abstract
There are numerous examples of AI models exhibiting behaviours that are totally unintended by their creators. This has direct implications on how we can deploy safe-to-use AI. Research on solving the 'alignment problem' has included both aligning AI with predefined objectives and analyzing misalignment in various contexts. Unfortunately, discussions often anthropomorphize AI, attributing internal motives to AI. While this perspective aids conceptual understanding, it often obscures the link between misalignment and specific design elements, thereby slowing progress toward systematic solutions. We need frameworks that ensures systematic identifications and resolutions of misalignments in AI systems. In this article, we propose an approach in that direction.
Contributions
* Our main motivation is to classify misalignment behaviours into categories that can be systematically traced to architectural flaws.
* We classify instances of misalignment into two categories, each further divided into two subcategories, resulting in a taxonomy of four major types of misaligned actions. (Section 1)
* We emphasize that each type of error demands a distinct solution, and applying a solution designed for one type to another can exacerbate the problem and increase risks. (Section 1)
* We claim that misaligned behaviours such as deception, jailbreaks, lying, alignment faking, self-exfiltration and so on forms a class of Exploit triggered Dysfunctions (ETD) and arise primarily due to conflicting objectives. (Section 2)
* We propose a Safe Competing Objectives Reward Function (SCORF) for avoiding conflicts in competing objectives, thereby reducing ETDs. (Section 3)
* We conclude by outlining some open problems to guide future research of our interest. (Section 4)
Actionable Insights:
Conflicting objectives are a significant contributing factor to the emergence of behaviours such as deception, lying, faking, incentive to tamper, and motivation to cause harm etc. Any |
19f8256c-8ad7-4f98-94cb-7db58b1fc6fe | trentmkelly/LessWrong-43k | LessWrong | Booster shots from same vs different vaccines
What is the current thinking on whether it's safe and effective to get a COVID booster shot from a different vaccine than the one you originally received? For example, getting a Pfizer shot if you originally got J&J. Is it more effective than getting a booster shot of the same type you originally got? References to good papers and analyses would be appreciated. |
13a81dd0-1046-446a-9778-f5a83c047bd0 | trentmkelly/LessWrong-43k | LessWrong | European Links (30.04.25)
Pan-Iberian blackout
Works in Progress is no news source, but they’ve managed to publish an article on the topic within 24 hours:
> Every country in the world has a grid designed around the large rotating generators used in nuclear, coal, and gas power plants. They rely on heat boiling water into steam to drive spinning turbines. These generators usually weigh over 100 tonnes and spin at over 3,000 revolutions per minute, meaning that they contain significant kinetic energy, like a very heavy spinning top. If supply drops, the rotor will begin to decelerate, but some of its momentum will be converted into electrical energy. This ‘inertia’ will buy the grid the few seconds it needs to activate its fast-response systems – deploying energy from battery storage and firing up small gas powered engines. This means that frequency should not fluctuate outside a small band even when a large generator trips. Solar panels, on the other hand, directly convert sunlight into electricity without the use of rotating turbines.
>
> [...]
>
> The European Network of Transmission System Operators warned at the start of this year that: ‘Reduced system inertia is a natural consequence of the lower number of directly connected rotating masses of synchronous generators to the grid. The stability support traditionally granted by these generators … will no longer be available in an almost exclusively RES-dominated [renewable energy source] system. This will expose the electricity system to the risk of being unable to withstand out-of-range events like system splits that were previously manageable.’
This is in the context of Spain running entirely on renewable energy for the first time ever just few days before the blackout.
The article also discusses different possible solutions to the inertia problem.
Another aspect of the blackout is that if the problem was not fixed quickly, all the stuff running intermittently on generators would just shut down and restarting the grid from such a s |
91d58290-2ef3-418f-be75-f3ad6f0272f9 | trentmkelly/LessWrong-43k | LessWrong | Get Curious
> Being levels above in [rationality] means doing rationalist practice 101 much better than others [just like] being a few levels above in fighting means executing a basic front-kick much better than others.
- lessdazed
> I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.
- Bruce Lee
Recently, when Eliezer wanted to explain why he thought Anna Salamon was among the best rationalists he knew, he picked out one feature of Anna's behavior in particular:
> I see you start to answer a question, and then you stop, and I see you get curious.
For me, the ability to reliably get curious is the basic front-kick of epistemic rationality. The best rationalists I know are not necessarily those who know the finer points of cognitive psychology, Bayesian statistics, and Solomonoff Induction. The best rationalists I know are those who can reliably get curious.
Once, I explained the Cognitive Reflection Test to Riley Crane by saying it was made of questions that tempt your intuitions to quickly give a wrong answer. For example:
> A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
If you haven't seen this question before and you're like most people, your brain screams "10 cents!" But elementary algebra shows that can't be right. The correct answer is 5 cents. To get the right answer, I explained, you need to interrupt your intuitive judgment and think "No! Algebra."
A lot of rationalist practice is like that. Whether thinking about physics or sociology or relationships, you need to catch your intuitive judgment and think "No! Curiosity."
Most of us know how to do algebra. How does one "do" curiosity?
Below, I propose a process for how to "get curious." I think we are only just beginning to learn how to create curious people, so please don't take this method as Science or Gospel but instead as an attempt to Just Try It.
As with my algorithm for beating |
43180af2-14d4-4390-babb-eacee48ba064 | trentmkelly/LessWrong-43k | LessWrong | As Many Ideas
Summary: One person announces a problem to the room at large. Everyone else comes up with as many ideas as they can to solve the problem.
Tags: Repeatable, Medium, Investment, Experimental
Purpose: The purpose is twofold. Firstly, you may walk away with actionable solutions to your problems. Secondly, this is practice for coming up with ideas.
Materials: Some method of note taking is needed, such as a clipboard with a sheet of paper. A big whiteboard or easel with paper or a projector showing a text document, such that everyone can see, is better. A projector showing a Google Doc or other live edit document that everyone can write to might be better still, though it does mean everyone needs an internet capable device.
Announcement Text: Do you ever run out of ideas? In the same way that we can practice not running out of breath while running by running more, we’re going to practice not running out of ideas by coming up with lots of ideas. Someone arrives with a problem. They announce it to the room at large. Then every single person comes up with as many ideas as they can to solve the problem. This is brainstorming. We care less about the ideas being good than we do about having LOTS of them.
Description: As you start, explain to the attendees that we’ll go one at a time presenting a problem. After the problem is presented, the audience will come up with as many ideas as they can for solving the problem. Go slow enough that you don’t talk over each other, but otherwise go as fast as you can; let the notetaker struggle. We’ll have five minutes by the clock, and then move on to the next person’s problem. The solutions being good is secondary to having lots of solutions. The moderator may call out restrictions so as to avoid overlapping ideas, e.g. “eat an apple” and “eat an orange” and “eat a banana.”
Designate a notetaker if you're using physical notes, or make sure everyone has the right shared electronic document if you're using a Google doc or the like. (QR |
ccc64b40-4f38-481f-96f0-6b950445bfd9 | trentmkelly/LessWrong-43k | LessWrong | Two types of mathematician
This is an expansion of a linkdump I made a while ago with examples of mathematicians splitting other mathematicians into two groups, which may be of wider interest in the context of the recent elephant/rider discussion. (Though probably not especially wide interest, so I'm posting this to my personal page.)
The two clusters vary a bit, but there's some pattern to what goes in each - it tends to be roughly ‘algebra/problem-solving/analysis/logic/step-by-step/precision/explicit’ vs. 'geometry/theorising/synthesis/intuition/all-at-once/hand-waving/implicit’.
(Edit to add: 'analysis' in the first cluster is meant to be analysis as opposed to 'synthesis' in the second cluster, i.e. 'breaking down' as opposed to 'building up'. It's not referring to the mathematical subject of analysis, which is hard to place!)
These seem to have a family resemblance to the S2/S1 division, but there's a lot lumped under each one that could helpfully be split out, which is where some of the confusion in the comments to the elephant/rider post is probably coming in. (I haven't read The Elephant in the Brain yet, but from the sound of it that is using something of a different distinction again, which is also adding to the confusion). Sarah Constantin and Owen Shen have both split out some of these distinctions in a more useful way.
I wanted to chuck these into the discussion because: a) it's a pet topic of mine that I'll happily shoehorn into anything; b) it shows that a similar split has been present in mathematical folk wisdom for at least a century; c) these are all really good essays by some of the most impressive mathematicians and physicists of the 20th century, and are well worth reading on their own account.
* The earliest one I know (and one of the best) is Poincare’s ‘Intuition and Logic in Mathematics’ from 1905, which starts:
> “It is impossible to study the works of the great mathematicians, or even those of the lesser, without noticing and distinguishing two opposite ten |
0800449f-d0c3-4908-bd02-65ae2d8fd428 | trentmkelly/LessWrong-43k | LessWrong | Practical tools and agents
Presently, the 'utility maximizers' work as following: given a mathematical function f(x) , a solver finds the x that corresponds to a maximum (or, typically, minimum) of f(x) . The x is usually a vector describing the action of the agent, the f is a mathematically defined function which may e.g. simulate some world evolution and compute the expected worth of end state, given action x, as in f(x)=h(g(x)) where h computes worth of world state g(x), and g computes the world state at some future time assuming that action x was taken.
For instance, the f may represent some metric of risk, discomfort, and time, over a path chosen by a self driving car, in a driving simulator (which is not reductionist). In this case this metric (which is always non-negative) is to be minimized.
In a very trivial case, such as finding the cannon elevation at which the cannonball will land closest to the target, in vacuum, the solution can be found analytically.
In more complex cases multitude of methods are typically employed, combining iteration of potential solutions with analytical and iterative solving for local maximum or minimum. If this is combined with sensors and the model-updater, and actuators, an agent like a self driving car can be made.
Those are the utility functions as used in the field of artificial intelligence.
A system can be strongly superhuman at finding maximums to functions, and ultimately can be very general purpose, allowing it's use to build models which are efficiently invertible into a solution. However it must be understood that the intelligent component finds mathematical solutions to, ultimately, mathematical relations.
The utility functions as known and discussed on LW seem entirely different in nature. Them are defined on the real word, using natural language that conveys intent, and seem to be a rather ill defined concept for which the bottom-up formal definition may not even exist. The implementation of such concept, if at all possible, would seem |
6d8ed7e0-fc37-4761-82d2-035a7029a5a2 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Reader-generated Essays
### 1.
An essay is made up of two layers: the ideas, and the language in which they are expressed. *The story of Red Riding Hood*, and the words of a particular retelling of that story. *The idea of a fully programmable blockchain*, and the language that Vitalik Buterin used to carry that idea in the [Etheruem White Paper](https://ethereum.org/en/whitepaper/).
Thinking and communicating are two separate processes, even if they often happen at the same time. And both make great demands on your mental faculties: often I do not have enough capacity to cope with writing and thinking at the same time; the thoughts suffer when I wrestle with the language, and vice versa. If I am exploring a complex set of ideas, it is, therefore, often necessary to give structure to the thoughts first – in the form of notes and diagrams – before turning them into an essay.
I formulate the ideas. And then I communicate them.
### 2.
GPT-3, announced in May 2020 by [OpenAI](https://en.wikipedia.org/wiki/OpenAI), is a neural net modeling of natural language and natural-language-related tasks. It can, with the right prompt, produce sentences that stand up well when compared to sentences formulated by humans. When it comes to ideas, though, it is less convincing – it lacks, at the current scale, the ability to develop longer chains of ideas. Here, for example, is its output [when Gwern prompted it to write Harry Potter in the voice of Hemingway](https://www.gwern.net/GPT-3):
> It was a cold day on Privet Drive. A child cried. Harry felt nothing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Dementor’s Kiss killed nothing. Death didn’t leave him less dead than he had been a second before. It wasn’t about living or dying really. It wasn’t about death. It was about the Dark Lord, his creator, his engenderer. He was not resurrected. He was created again. He was whole again in the courtyard of Malfoy Manor.
>
>
Line by line, it is more alive than many of the poets I had to read when I was asked to edit a poetry anthology a few years back. But as a whole – as a narrative – the piece fundamentally collapses.
In other words, GPT-3 performs well only when it comes to one level of writing: formulating sentences. It can convincingly expand a prompt with new sentences – sentences more pleasant to read than most human-generated prose. But GPT-3 does not hold up when it comes to the other level of writing: it cannot convincingly generate and structure ideas.
This is, as of yet, the realm of humans.
Is there an opportunity for complementarity here? Can we use GPT-3 (or its coming descendants) to relieve people of the burden of *communicating* their ideas, so that they can invest more energy in *producing* them?
This would greatly reduce the cost of communicating ideas. And a lowered cost has the potential to unleash large amounts of knowledge that are now locked in minds that cannot communicate it, or that are too occupied doing more important things to take the time. (It will also, naturally, unleash an endless flood of misinformation.)
### 3.
What I am doing right now, writing this essay, is, technically, **a linear walk through the network of my ideas.** That is what writing is: turning a net into a line. But it is also very concretely what I do, since I have externalized my ideas in [a note-taking system](https://obsidian.md/) where the thoughts are linked with hyperlinks. My notes are a knowledge graph, a net of notes. When I sit down to write, I simply choose a thought that strikes me as interesting and use that as my starting point. Then I click my way, linearly, from one note to the next until I have reached the logical endpoint of the thought-line I want to communicate. Along the way, I paste the thought paths I want to use in an outline. (I have also written everything in nested bullet points – so I can dial the level of detail up and down by folding the subpoints if I feel that they go into unnecessary details.)
When this walk through the net is done, I have an outline. All the ideas are there, all the sources, all the arguments, and how they follow and support each other.
But everything is written in bullet points so dense that the average reader would get a stroke by reading it. So I have to spend five to ten hours making the output from my knowledge graph **human-readable**. I happen to think there is something satisfying about this craft, much as Buddhist monks enjoy endlessly raking sand. But if your only goal is to communicate your ideas, this part of the process is simply a time hole.
These five to ten hours, when ideas are made human-readable, should be possible to outsource to GPT-3. What we need is a way to prompt it to convert this:
> *always use nestled bullet points when writing*
>
> *it makes all the dependencies clear: which point is a subpoint of which, so that you do not risk separating them*
>
> *if you accidentally separate subpoints from their parents, which is easily done when editing non-intendented prose, you get a **tangled hierarchy**. for example, a supporting argument might accidentally get moved to the wrong context, because you’ve forgotten the relationship between your points.*
>
> *it makes it easier to remember on what level of your argument you’re at.*
>
> *without this visual aid, it is easy to get yourself lost in subpoints, failing to return to the main line*
>
>
Into this:
> *There are two main reason why writing in nestled bullet points is like a Bloody Mary when compared to that normal Muggle prose people use.*
>
> *Firstly, bullet points makes all the dependencies clear – which point is a subpoint of which – so that you do not risk separating them. This is a common mistake people make when writing, which turns everything into a mess. Its appalling. If you accidentally separate subpoints from their parents, you end up with what is known as a **tangled hierarchy**. A tangled hierarchy is a hierarchy where the levels are all mixed up. It is worse than a Dementor’s Kiss.*
>
> *Secondly, nestled bullet points makes it easier to remember what level of your argument you’re on. You never have to end up like those fricken’ Goblins who alway get lost in minor points and legalese, forgetting the main point. You can be dead drunk and still get to straight to the point.*
>
>
This can probably be done by repeated prompting and reinforcement learning fine-tuning, as [Gwern points out in the comments](https://escapingflatland.substack.com/p/reader-generated-essays/comments?token=eyJ1c2VyX2lkIjo4NTA3NjQsInBvc3RfaWQiOjQ0MzQ0NjA4LCJpYXQiOjE2Mzk3MTU2NTYsImV4cCI6MTYzOTk3NDg1NiwiaXNzIjoicHViLTMxMzQxMSIsInN1YiI6InBvc3QtY29tbWVudHMifQ.lXwFCsVdsXOSHCKUihVEjGtXFcp2lo08uuq3c3HY6wA#comment-4044173).
The AI assistant doesn't even have to be very good at it to be useful. If it could reach a level similar to Google Translate – which I use to translate the parts of my essays that I write in Swedish – it would reduce the time it takes me to write an essay by an order of magnitude. My essays, after being run through Google Translate, often require extensive editing to make sure I do not sound like a man from the Provinces. But extensive editing in this context means perhaps 20 minutes of striking things out and rewriting; whereas translating by hand… that is too time-consuming for me.
In the same way, an AI assistant who converts bullet point arguments into a half-hearted essay, or an average careless email, would drastically increase the number of ideas we can communicate.
Also, separating idea generation from writing in this way would have the advantage that people *might* become better at thinking. Most people could benefit from writing down their thoughts in nested bullet points, instead of in sprawling paragraphs, so they can graphically see the relationship between arguments and discover if they are stuck in a subpoint and have lost the main thread.
By creating specialization, where an AI assistant takes care of communication, we can focus on improving our ideas. I think that is a valuable complementarity that we should seek to develop, and it should be within reach with today's technology.
### 4.
But where it gets really interesting is when we get language models that can generate essays good enough to publish *without* edits.
This (which happens two weeks before the singularity) is when we get **reader-generated essays.**
A reader-generated essay is what you get when you can go into someone else’s knowledge graph and make a linear journey through the network, while GPT-5 generates a just-in-time essay that is human-readable. It would be like going on a Wikipedia spree, except that the posts are written the moment you read them, based on facts encoded in a knowledge graph, and the user interface makes it look like you are reading a single, very long, and meandering essay.
Would this be useful?
Are you kidding me – a never-ending essay!
If you click on something that seems interesting, the essay meanders in that direction. If you feel the reading is becoming a bit of a slog, with too many irrelevant details, you zoom out with [an Engelbart zoom](https://dougengelbart.org/content/view/154/86/), and get a summary of the content instead, at whatever level of abstraction suits you. What happens to the under the hood is that by zooming you change how many levels of subpoints in the knowledge graph you want to see. But the AI generates a new text for each zoom, so what you experience is rather that the text changes hallucinogenically before your eyes – or maybe rather meanders to a mountainside where you get a better view of the landscape. From there, you see something in the far distance that interests you, and you start zooming... into another note, and through that note into another, and yet another ... all the while generating an essay optimized by prompt engineering to fit your needs and learning profile. And in the voice of whatever long-dead author you prefer.
You can also share the essay crafted by your trail. You simply generate a link that encapsulates the specific hike you made through the knowledge graph, and then whoever you send it to can see the connections you saw – or zoom in if they feel you missed some details, and get lost in an essay of their own.
If you have an exceptional ability to get lost on the Internet (I think you have, dear reader), you might have a career in the future creator economy: generating essays based on finding weird trajectories through other people's knowledge graphs. It is a conceivable career. It is also conceivable that the artificial intelligence grows tired of us at approximately this point and decides to discontinue humanity.
But until then, I would really appreciate it if GPT-3 could write my essays for me. |
2f648d43-e5bc-4226-aab7-944c26f8ed27 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Proofs Section 1.2 (Mixtures, Updates, Pushforwards)
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
The previous proofs are [here.](https://www.alignmentforum.org/posts/PTcktJADsAmpYEjoP/proofs-section-1-1-initial-results-to-lf-duality)
**Proposition 5:** *If*Bmin⊆Ma(X)*, then the condition "there is a*λ⊙*where,*∀(λμ,b)∈Bmin:λ≤λ⊙*" is equivalent to "there is a compact*C*s.t.*Bmin⊆C*"*
Proof sketch: One direction is immediate from the Compactness Lemma. For showing that just a bound on the λ values suffices to be contained in a compact set, instead of a bound on the λ and b values to invoke the Compactness Lemma, we use a proof by contradiction where we can get a bound on the b values of the minimal points from just a bound on the λ values.
Proof: In one direction, assume there's a compact C s.t. Bmin⊆C, and yet there's no upper-bounding λ⊙ on the λ values. This is impossible by the Compactness Lemma, since (λμ)+(1)=λμ+(1)=λμ(1)=λ.
In the other direction, assume there's a λ⊙ bound on λ for the minimal points. Fix some arbitrary (λμ,b)∈Bmin for the rest of the proof. Now, we will show that all minimal points (λ′μ′,b′)∈Bmin have λ′≤λ⊙, and b′≤λ⊙+b, letting us invoke the Compactness Lemma to get that everything is in a suitable compact set C. The first bound is obvious. Since λ′ came from a minimal point, it must have λ⊙ as an upper bound.
For the other one, by contradiction, let's assume that there's a minimal point (λ′μ′,b′) where b′>λ⊙+b. Then, we can write (λ′μ′,b′) as: (λμ,b)+(−λμ,λ⊙)+(λ′μ′,b′−λ⊙−b)
The first component, (λμ,b) is our fixed minimal point of interest. The second component is an sa-measure, because λ⊙−λ≥0, due to the λ⊙ upper bound on the λ value of minimal points. The third component is also a nonzero sa-measure, because λ′ is nonnegative (it came from a minimal point), and by assumption, b′>λ⊙+b. Hang on, we wrote a minimal point (λ′μ′,b′) as another minimal point (λμ,b), plus two sa-measures (one of which is nonzero), so (λ′μ′,b′) can't be minimal, and we have a contradiction.
Therefore, all (λ′μ′,b′)∈Bmin have b′≤λ⊙+b. Now that we have bounds on λ and b for minimal points, we can invoke the Compactness Lemma to conclude that everything is in a compact set.
**Proposition 6:** EB(0)=EB(1)*only occurs when there's only one minimal point of the form*(0,b)*.*
Proof: Unpacking the expectations, and in light of Proposition 3,
EB(1)=inf(λμ,b)∈Bmin(λμ(1)+b)=inf(λμ,b)∈Bmin(λ+b) and EB(0)=inf(λμ,b)∈Bmin(λμ(0)+b)=inf(λμ,b)∈Bminb
So, take a minimal a-measure (λμ,b) that minimizes λ+b. One must exist because we have λ and b bounds, so by the Compactness Lemma, we can restrict our attention to an actual compact set, and continuous functions from a compact set to R have a minimum, so there's an actual minimizing minimal point.
λ must be 0, because otherwise EB(1)=λ+b>b≥EB(0) which contradicts EB(1)=EB(0). Further, since b=λ+b=EB(1)=EB(0), said b must be the lowest b possible amongst minimal points.
So, we have a minimal point of the form (0,b) where b is the lowest possible b amongst the minimal points. Any other distinct minimal point must be of the form (λ′μ′,b′), where b′≥b. This other minimal point can be written as (0,b)+(λ′μ′,b′−b), where the latter component is an sa-measure, so it's not minimal. Thus, there's only one minimal a-measure and it's of the form (0,b).
**Proposition 7:** *Renormalizing a bounded inframeasure produces a bounded infradistribution, if renormalization doesn't fail.*
Proof sketch: Our first order of business is showing that our renormalization process doesn't map anything outside the cone of sa-measures. A variant of this argument establishes that the preimage of a minimal point in BR must be a minimal point in B, which quickly establishes positive-minimals and bounded-minimals for BR. Then, we verify the other conditions of a bounded infradistribution. Nonemptiness, closure, and convexity are very easy, upper-closure is shown by adding appropriately-scaled sa-measures such that, after renormalization, they hit whatever sa-measure you want. Then, finally, we just have to verify that our renormalization procedure is the right one to use, that it makes EBR(1)=1 and EBR(0)=0.
Proof: First up, we need to show that after renormalization, nothing gets mapped outside the cone of sa-measures. Observe that the renormalization process is injective. If two points are distinct, after a scale-and-shift, they'll still be distinct.
Let B be our original set and BR be our renormalized set. Take a point in BR, given by (m,b). Undoing the renormalization, we get (EB(1)−EB(0))(m,b)+(0,EB(0))∈B.
By decomposition into a minimal point and something else via Theorem 2, we get that
(EB(1)−EB(0))(m,b)+(0,EB(0))=(mmin,bmin)+(m∗,b∗)
where (mmin,bmin)∈Bmin. Renormalizing back, we get that
(m,b)=1EB(1)−EB(0)((mmin,bmin−EB(0))+(m∗,b∗))
b′≥EB(0), obviously, because EB(0) is the minimal b value amongst the minimal points. So, the first component is an a-measure, the second component is an sa-measure, so adding them is an sa-measure, and then we scale by a nonnegative constant, so (m,b) is an sa-measure as well.
This general line of argument also establishes positive-minimals and bounded-minimals, as we'll now show. If the (m∗,b∗) isn't 0, then we just wrote (m,b) as
1EB(1)−EB(0)(mmin,bmin−EB(0))+1EB(1)−EB(0)(m∗,b∗)
And the first component lies in BR, but the latter component is nonzero, witnessing that (m,b) isn't minimal. So, if (m,b) is minimal in BR, then (m∗,b∗)=0, so it must be the image of a single minimal point (mmin,bmin)∈Bmin by injectivity. Ie, the preimage of a minimal point in BR is a minimal point in B.
Scale-and-shift maps a-measures to a-measures, showing positive-minimals, and the positive scale constant of (EB(1)−EB(0))−1 just scales up the λ⊙ upper bound on the λ values of the minimal points in B, showing bounded-minimals.
For the remaining conditions, nonemptiness, closure, and convexity are trivial. We're taking a nonempty closed convex set and doing a scale-and-shift so it's nonempty closed convex.
Time for upper-completeness. Letting B be our original set and BR be our renormalized set, take a point MR+M∗ in (BR)uc. By injectivity, MR has a single preimage point M∈B. Undoing the renormalization by multiplying by EB(1)−EB(0) (our addition of EB(0) is paired with BR to undo the renormalization on that one), consider M+(EB(1)−EB(0))M∗ This lies in B by upper-completeness, and renormalizing it back produces MR+M∗, which is in BR, so BR is upper-complete.
That just leaves showing that after renormalizing, we're normalized.
EBR(1)=inf(λμ,b)∈BR(λ+b)=inf(λ′μ′,b′)∈B1EB(1)−EB(0)(λ′+b′−EB(0))
=1EB(1)−EB(0)(inf(λ′μ′,b′)∈B(λ′+b′)−EB(0))=EB(1)−EB(0)EB(1)−EB(0)=1
For the other part,
EBR(0)=inf(λμ,b)∈BRb=inf(λ′μ′,b′)∈B1EB(1)−EB(0)(b′−EB(0))
=1EB(1)−EB(0)(inf(λ′μ′,b′)∈Bb′−EB(0))=EB(0)−EB(0)EB(1)−EB(0)=0
And we're done.
**Lemma 6:** g∗ *is a continuous linear operator.*
Proof sketch: First show linearity, then continuity, for the operator that just maps a signed measure through g, using some equation-crunching and characterizations of continuity. Then, since g∗ is just the pair of that and the identity function, it's trivial to show that it's linear and continuous.
We'll use g′∗ to refer to the function M±(X)→M±(Y) defined by (g′∗(m))(Z)=m(g−1(Z)), where Z is a measurable subset of Y and g∈C(X,Y). Ie, this specifies what the measure g′∗(m) is in terms of telling you what value it assigns to all measurable subsets of Y.
We'll use g∗ to refer to the function M±(X)⊕R→M±(X)⊕R given by g∗(m,b)=(g′∗(m),b).
Our first order of business is establishing the linearity of g′∗. Observe that, for all measurable Z⊆Y, and a,a′ being real numbers, and m,m′ being signed measures over X,
(g′∗(am+a′m′))(Z)=(am+a′m′)(g−1(Z))=am(g−1(Z))+a′m′(g−1(Z))
=ag′∗(m)(Z)+a′g′∗(m′)(Z)=(ag′∗(m)+a′g′∗(m′))(Z)
So, g′∗(am+a′m′)=ag′∗(m))+a′g′∗(m′) and we have linearity of g′∗.
Now for continuity of g′∗. Let mn limit to m. The sequence g′∗(mn) converging to g′∗(m) in our metric on M±(Y) is equivalent to: ∀f∈C(Y):limn→∞g′∗(mn)(f)=g′∗(m)(f)
So, if g′∗(mn) fails to converge to g′∗(m), then there is some continuous function f∈C(Y) that witnesses the failure of convergence. But, because g is a continuous function X→Y, then f∘g∈C(X), and also mn(f∘g)=g′∗(mn)(f), so:
limn→∞g′∗(mn)(f)=limn→∞mn(f∘g)=m(f∘g)=g′∗(m)(f)
The key step in the middle is that mn limits to m, so mn(f∘g) limits to m(f∘g), by our characterization of continuity. Thus, we get a contradiction, our f that witnesses the failure of convergence actually *does* converge. Therefore, g′∗(mn) limits to g′∗(m) if mn limits to m, so g′∗ is continuous.
To finish up, continuity for g∗ comes from the product of two continuous functions being continuous (g′∗ which we showed already, and idR because duh), and linearity comes from:
g∗(a(m,b)+a′(m′,b′))=g∗(am+a′m′,ab+a′b′)=(g′∗(am+a′m′),ab+ab′)
=(ag′∗(m)+a′g′∗(m),ab+ab′)=a(g′∗(m)+b)+a′(g′∗(m′)+b′)=ag∗(m,b)+a′g∗(m′,b′)
**Proposition 8:** *If*f∈C(X,[0,1])*and*g*is a continuous function*X→Y*, then*Eg∗(H)(f)=EH(f∘g)
Eg∗(H)(f)=inf(m,b)∈(g∗(H))(m(f)+b)=inf(m,b)∈H(g′∗(m)(f)+b)
=inf(m,b)∈H(m(f∘g)+b)=EH(f∘g)
**Proposition 9:** g∗(H)*is a (bounded) inframeasure if*H*is, and it doesn't require upper completion if*g*is surjective.*
Proof sketch: Nonemptiness is obvious, and showing that it maps sa-measures to sa-measures is also pretty easy. Closure takes a rather long argument that the image of any closed subset of sa-measures over X, through g∗, is closed, which is fairly tedious. We may or may not invoke upper completion afterwards, but if we do, we can just appeal to the lemma that the upper completion of a closed set is closed. Convexity is immediate from linearity of g∗.
For upper completion, we can just go "we took the upper completion" if g isn't surjective, but we also need to show that we don't need to take the upper completion if g is surjective, which requires crafting a measurable inverse function to g via the Kuratowski-Ryll-Nardzewski selection theorem, in order to craft suitable preimage points.
Then we can use LF-Duality to characterize the induced h function, along with Proposition 8, which lets us get positive-minimals, bounded-minimals, and normalization fairly easily, wrapping up the proof.
Proof: Nonemptiness is obvious. For showing that it takes sa-measures to sa-measures, take an (m,b)∈H, and map it through to get (g′∗(m),b)∈g∗(H). (m,b) is an sa-measure, so b+m−(1)≥0. Now, we can use Lemma 5 to get:
b+(g′∗(m))−(1)=b+inff∈C(Y,[0,1])g′∗(m)(f)=b+inff∈C(Y,[0,1])m(f∘g)
≥b+inff′∈C(X,[0,1])m(f)=b+m−(1)≥0
So the b term is indeed big enough that the image of (m,b) is an sa-measure.
For closure, fix a sequence of (mn,bn)∈g∗(H) limiting to some (m,b), with preimage points (m′n,b′n)∈H. Due to convergence of (mn,bn) there must be some b◯ bound on the bn. g∗ preserves those values, so b◯ is an upper bound on the b′n. Since the (m′n,b′n) are sa-measures, −b◯ is a lower bound on the m′−n(1) values. Since mn converges to m, mn(1) converges to m(1), so there's a λ◯ upper bound on the mn(1) values. Further,
λ◯≥mn(1)=g′∗(m′n)(1)=m′n(1∘g)=m′n(1)=m′+n(1)+m′−n(1)≥m′+n(1)−b◯
So, for all n, m′+n(1)≤λ◯+b◯, so we have an upper bound on the b′n and m′+n(1) values. Now we can invoke the Compactness Lemma to conclude that there's a convergent subsequence of the (m′n,b′n), with a limit point (m′,b′), which must be in H since H is closed. By continuity of g∗(H) from Lemma 6, g∗(m′,b′) must equal (m,b), witnessing that (m,b)∈g∗(H). So, g∗(H) is closed. Now, if we take upper completion afterwards, we can just invoke Lemma 2 to conclude that the upper completion of a closed set of sa-measures is closed.
Also, g∗ is linear from Lemma 6, so it maps convex sets to convex sets getting convexity.
Now for upper completion. Upper completion is immediate if g isn't surjective, because we had to take the upper completion there. Showing we don't need upper completion if g is surjective is trickier. We must show that g∗ is a surjection from Msa(X) to Msa(Y).
First, we'll show that g∗(U) where U is an open subset of X is a measurable subset of Y. In metrizable spaces (of which X is one), every open set is a Fσ set, ie, it can be written as a countable union of closed sets. Because our space is compact, all those closed sets are compact. And the continuous image of a compact set is a compact set, ie closed. Therefore, g∗(U) is a countable union of closed sets, ie, measurable.
X is a Polish space (all compact metric spaces are Polish), it has the Borel σ-algebra, and we'll use the function g−1. Note that g−1(y) is closed and nonempty for all y∈Y due to g being a continuous surjection. Further, the set {y:g−1(y)∩U≠∅} equals g(U) for all open sets U. In one direction, if the point y is in the first set, then there's some point x∈U where g(x)=y. In the other direction, if a point y is in g(U), then there's some point x∈U where g(x)=y so g−1(y)∩U is nonempty.
Thus, g−1 is weakly measurable, because for all open sets U of X, {y:g−1(y)∩U≠∅}=g(U) and g(U) is measurable. Now, by the [Kuratowski-Ryll-Nardzewski Measurable Selection Theorem](https://en.wikipedia.org/wiki/Kuratowski_and_Ryll-Nardzewski_measurable_selection_theorem), we get a measurable function g◊ from Y to X where g◊(y)∈g−1(y) so g(g◊(y))=y, and g◊ is an injection.
So, we can push any sa-measure of interest (m∗,b∗) through g◊∗ (which preserves the amount of negative measure due to being an injection), to get an sa-measure that, when pushed through g∗ recovers (m∗,b∗) exactly. Thus, if g∗(m,b)∈g∗(H), and you want to show g∗(m,b)+(m∗,b∗)∈g∗(H), just consider
g∗((m,b)+g◊∗(m∗,b∗))=g∗(m,b)+g∗(g◊∗(m∗,b∗))=g∗(m,b)+(m∗,b∗)
So, since (m,b)+g◊∗(m∗,b∗)∈H due to upper-completeness, then g∗((m,b)+g◊∗(m∗,b∗))=g∗(m,b)+(m∗,b∗)∈g∗(H) And we have shown upper-completeness of g∗(H) if g is a surjection.
We should specify something about using LF-Duality here. If you look back through the proof of Theorem 5 carefully, the only conditions you really *need* for isomorphism are (on the set side) g∗(H) being closed, convex, and upper complete (in order to use Proposition 2 to rewrite g∗(H) appropriately for the subsequent arguments, we have these properties), and (on the functional side), f↦Eg∗(H)(f) being concave (free), −∞ if range(f)⊈[0,1] (by proof of Theorem 4, comes from upper completeness), and continuous over f∈C(Y,[0,1]) (showable by Proposition 8 that Eg∗(H)(f)=EH(f∘g), and the latter being continuous since H is an infradistribution)
It's a bit of a pain to run through this argument over and over again, so we just need to remember that if you can show closure, convexity, upper completeness, and the expectations to be continuous, that's enough to invoke LF-Duality and clean up the minimal point conditions. We did that, so we can invoke LF-Duality now.
Time for normalization. From Proposition 8, the g∗(h) function we get from f↦Eg∗(H)(f) is uniquely characterized as: g∗(h)(f)=h(f∘g). So,
Eg∗(H)(1)=g∗(h)(1)=h(1∘g)=h(1)=EH(1)=1
Eg∗(H)(0)=g∗(h)(0)=h(0∘g)=h(0)=EH(0)=0
and normalization is taken care of.
For bounded-minimals/weak-bounded-minimals, since g∗(H) is the LF-dual of g∗(h), we can appeal to Theorem 5 and just check whether g∗(h) is Lipschitz/uniformly continuous. if d(f,f′)<δ, then d(f∘g,f′∘g)<δ according to the sup metric on C(Y,[0,1]) and C(X,[0,1]), respectively, which (depending on whether we're dealing with Lipschitzness or uniform continuity), implies that |h(f∘g)−h(f′∘g)|<λ⊙δ, or ϵ for uniform continuity. So, we get: |g∗(h)(f)−g∗(h)(f′)|=|h(f∘g)−h(f′∘g)|<λ⊙δ (or ϵ for uniform continuity), thus establishing that f and f′ being sufficiently close means that g∗(h) doesn't change much, which, by Theorem 5, implies bounded-minimals/weak-bounded-minimals in g∗(H).
For positive-minimals it's another Theorem 5 argument. If f′≥f, then f′∘g≥f∘g, so: g∗(h)(f′)−g∗(h)(f)=h(f′∘g)−h(f∘g)≥0 And we have monotonicity for g∗(h), which, by Theorem 5, translates into positive-minimals on g∗(H).
**Lemma 7:** *If*M∈(EζHi)min*, then for all decompositions of*M*into*Mi*,*Mi∈(Hi)min
This is easy. Decompose M into EζMn. To derive a contradiction, assume there exists a nonminimal Mi that decomposes into Mmini+M∗i where M∗i≠0. Then,
M=EζMi=Eζ(Mmini+M∗i)=Eζ(Mmini)+Eζ(M∗i)
Thus, we have decomposed our minimal point into another point which is also present in EζHi, and a nonzero sa-measure because there's a nonzero M∗i so our original "minimal point" is nonminimal. Therefore, all decompositions of a minimal point in the mixture set must have every component part being minimal as well.
**Proposition 10:** EEζHi(f)=Eζ(EHi(f))
EEζHn(f)=inf(m,b)∈EζHi(m(f)+b)=inf(mi,bi)∈ΠiHi((Eζmi)(f)+Eζbi)
=inf(mi,bi)∈ΠiHi(Eζ(mi(f))+Eζ(bi))=inf(mi,bi)∈ΠiHiEζ(mi(f)+bi)
=Eζ(inf(mi,bi)∈Hi(mi(f)+bi))=Eζ(EHi(f))
Done.
**Proposition 11:** *A mixture of infradistributions is an infradistribution. If it's a mixture of bounded infradistributions with Lipschitz constants on their associated*h*functions of*λ⊙i*, and*∑iζiλ⊙i<∞*, then the mixture is a bounded infradistribution.*
Proof sketch: Nonemptiness, convexity, upper completion, and normalization are pretty easy to show. Closure is a nightmare.
The proof sketch of Closure is: Take a sequence (mn,bn) limiting to (m,b). Since each approximating point is a mixture of points from the Hi, we can shatter each of these (mn,bn)∈EζHi into countably many (mi,n,bi,n)∈Hi. This defines a sequence in *each* Hi (not necessarily convergent). Then, we take some bounds on the (mn,bn) and manage to translate them into (rather weak) i-dependent bounds on the (mi,n,bi,n) sequence. This lets us invoke the Compactness Lemma and view everything as wandering around in a compact set, regardless of Hi. Then, we take the product of these compact sets to view everything as a *single* sequence in the product of compact sets, which is compact by Tychonoff's theorem. This is only a countable product of compact metric spaces, so we don't need full axiom of choice. Anyways, we isolate a convergent subsequence in there, which makes a convergent subsequence in each of the Hi. And then, we can ask "what happens when we mix the limit points in the Hi according to ζ?" Well, what we can do is just take a partial sum of the mixture of limit points, like the i from 0 to 1 zillion. We can establish that (m,b) gets arbitrarily close to the upper completion of a partial sum of the mixture of limit points, so (m,b) lies above all the partial sums of our limit points. We show that the partial sums don't have multiple limits, then, we just do one more invocation of Lemma 3 to conclude that the mixture of limit points lies below (m,b). Finally, we appeal to upper completion to conclude that (m,b) is in our mixed set of interest. Whew!
Once those first 4 are out of the way, we can then invoke Theorem 5 to translate to the h view, and mop up the remaining minimal-point conditions.
First, nonemptiness. By Theorem 5, we can go "hm, the hi are monotone on C(X,[0,1]), and −∞ everywhere else, and hi(1)=1, so the affine functional ϕ:ϕ(f)=1 lies above the graph of hi". This translates to the point (0,1) being present in all the Hi. Then, we can just go: Eζ(0,1)=(0,1), so we have a point in our EζHi set.
For normalization, appeal to Proposition 10 and normalization for all the Hi. EEζHi(1)=Eζ(EHi(1))=Eζ(1)=1 and EEζHi(0)=Eζ(EHi(0))=Eζ(0)=0.
Convexity is another easy one. Take a M,M′∈EζHi. They shatter into Mi,M′i∈Hi. Then, we can just go:
pM+(1−p)(m′,b′)=pEζ(mi,bi))+(1−p)Eζ(m′i,b′i))=Eζ(p(mi,bi)+(1−p)(m′i,b′i))
and then, by convexity of the Hi, p(mi,bi)+(1−p)(m′i,b′i)∈Hi, so we wrote p(m,b)+(1−p)(m′,b′) as a mixture of points in Hi.
Upper completion is another easy one, because, if (m,b)∈EζHi, then you can go
(m,b)+(m∗,b∗)=Eζ(mi,bi)+Eζ(m∗,b∗)=Eζ((mi,bi)+(m∗,b∗))
And ((mi,bi)+(m∗,b∗))∈Hi by upper completion.
That leaves the nightmare of closure. Fix a sequence Mn∈Eζ(Hi) limiting to M. You can think of the Mn as (mn,bn). We can shatter the Mn into Mi,n∈Hi, where Mi,n can be thought of as (mi,n,bi,n).
Now, since Mn converge to something, there must be an upper bound on the bn and mn(1) terms of the sequence, call those b◯ and λ◯. Now, for all n and all i′, b◯≥bn=∑iζibi,n≥ζi′bi′,n so, for all n and i, bi,n≤b◯ζi.
Also, for all n and i′, λ◯+b◯≥mn(1)+bn=∑i(ζi(mi,n(1)+bi,n))≥ζi′(mi′,n(1)+bi′,n) and reshuffling, we get λ◯+b◯ζi′≥mi′,n(1)+bi′,n which then makes λ◯+b◯ζi′≥m+i′,n(1)+(m−i′,n(1)+bi′,n). Further, due to (mi′,n,bi′,n) being a sa-measure, bi′,n+m−i,n(1)≥0, so for all n and i, m+i,n(1)≤λ◯+b◯ζi.
Ok, so taking stock of what we've shown so far, it's that for all i, the sequence Mi,n is roaming about within Hi∩{(m,b)|b≤b◯ζi,m+(1)≤λ◯+b◯ζi} And, by the Compactness Lemma, this set is compact, since it's got bounds (weak bounds, but bounds nonetheless). Defining
¯¯¯¯¯¯Mn∈∏i(Hi∩{(m,b)|b≤b◯ζi,m+(1)≤λ◯+b◯ζi})
where ¯¯¯¯¯¯Mn(i):=Mi,n, we can view everything as one single sequence ¯¯¯¯¯¯Mn wandering around in the product of compact sets. By Tychonoff's theorem (we've only got a countable product of compact metric spaces, so we don't need full axiom of choice, dependent choice suffices), we can fix a convergent subsequence of this, and the projections of this subsequence to every Hi converge.
Ok, so we've got a subsequence of n where, regardless of i, Mi,n converge to some Mi∈Hi (by closure of Hi). How does that help us? We don't even know if mixing these limit points converges to something or runs off to infinity. Well... fix any j you like, we'll just look at the partial sum of the first j components. Also fix any ϵ you please. On our subsequence of interest, the Mn converge to M, and in all i, the Mi,n converge to Mi. So, let n be large enough (and in our subsequence) that d(Mn,M)<ϵ, and ∀i≤j:d(Mi,n,Mi)<ϵ, we can always find such an n.
Now, ∑i≤jζiMi+∑i>jζiMi,n is a well-defined point (because it's a finite sum of points plus a convergent sequence as witnessed by the well-definedness of Mn which breaks down as ∑iζiMi,n) It also lies in the upper completion of the single point ∑i≤jζiMi. We'll show that this point is close to M. Since we're working in a space with a norm,
d(M+M∗,M′+M∗)=||(M+M∗)−(M′+M∗)||=||M−M′||=d(M,M′)
This will come in handy in the later equations.
d(∑i≤jζiMi+∑i>jζiMi,n,M)≤d(∑i≤jζiMi+∑i>jζiMi,n,Mn)+d(Mn,M)
<d(∑i≤jζiMi+∑i>jζiMi,n,∑iζiMi,n)+ϵ=d(∑i≤jζiMi,∑i≤jζiMi,n)+ϵ
≤∑i≤jd(ζiMi,ζiMi,n)+ϵ=∑i≤j||ζiMi−ζiMi,n||+ϵ=∑i≤jζi||Mi−Mi,n||+ϵ
=∑i≤jζid(Mi,Mi,n)+ϵ<∑i≤jζiϵ+ϵ≤ϵ+ϵ=2ϵ
So, M is less than 2ϵ away from the upper completion of the point ∑i≤jζiMi, which is a closed set (Minkowski sum of a closed and compact set is closed). ϵ can be shrank to 0 with increasing n, so M has distance 0 from the upper completion of said partial sum, and thus lies above the partial sum!
Abbreviating ∑i≤jζiMi as Mj, we get that all the Mj lie in {M}−Msa(X), and are all sa-measures. Thus, *if* the sequence Mj converges to a unique point, then said limit point is ∑iζiMi, and all the Mi∈Hi, so ∑iζiMi would lie in EζHi. Further, by Lemma 3, ∑iζiMi∈{M}−Msa(X), since that set is compact, so M lies above ∑iζiMi, and would lie in EζHi by upper-completeness.
So, all that's left to wrap up our closure argument is showing that the sequence Mj has a *single* limit point. Since it's wandering around in ({M}−Msa(X))∩Msa(X) which is compact by Lemma 3, there are convergent subsequences. All we have to show now is that all convergent subsequences must have the same limit point.
Assume this is false, and there's two distinct limit points of the sequence Mj, call them M∞ and M′∞. Because it's impossible for two points to *both* be above another (in the minimal-point/adding-points sense), without both points being identical, either M∞∉{M′∞}−Msa(X), or vice-versa. Without loss of generality, assume M∞∉{M′∞}−Msa(X). Since the latter is a closed set, M∞ must be ϵ away for some ϵ>0. Fix some j from the subsequence that M∞ is a limit point of, where d(Mj,M∞)<ϵ2. There must be some *strictly greater* j′ from the subsequence that M′∞ is a limit point of.
Mj′=∑i≤j′ζiMi=∑i≤jζiMi+∑j<i≤j′ζiMi=Mj+∑j<i≤j′ζiMi
Further, the ζi are nonzero. Also, no Mi can be the 0 point, because Mi∈Hi, and if Mi=(0,0), then EHi(1)=0, which is impossible by normalization. So, Mj lies strictly below Mj′. Also, Mj′ lies below M′∞, because for all the j∗>j′,
Mj∗=∑i≤j∗ζiMi=∑i≤j′ζiMi+∑j′<i≤j∗ζiMi=Mj′+∑j′<i≤j∗ζiMi
so Mj∗∈{Mj′}+Msa(X) for all j∗>j′. The sequence that limits to M′∞ is roaming around in this set, which is closed because the sum of a compact set (a single point) and a closed set is closed. So, M′∞ lies above Mj′ which lies above Mj. Thus, Mj∈{M′∞}−Msa(X). However, Mj is ϵ2 or less distance from M∞, which must be ϵ distance from {M′∞}−Msa(X), and we have a contradiction.
Ok, so the sequence of partial sums Mj has a *single* limit point, which is ∑iζiMi, and all the Mi∈Hi, so ∑iζiMi∈EζHi, and by Lemma 3, ∑iζiMi∈{M}−Msa(X), since that set is compact, so M lies above ∑iζiMi, and lies in EζHi by upper-completeness. We're done!
For minimals, by our argument about what it takes to invoke LF-Duality in Proposition 9, we only need convexity, closure, and upper completion (which we have), and that the h induced by EζHi is continuous. By Proposition 10, EEζHi(f)=Eζ(EHi(f))=Eζ(hi(f))=(Eζhi)(f). We might as well go for uniform continuity since all the Hi are infradistributions, and so fulfill weak-bounded-minimals, so their hi are uniformly continuous. Then, this continuity lets you invoke LF-Duality, and transfer uniform continuity for the h induced by EζHi to weak-bounded-minimals for EζHi
For uniform continuity/weak-bounded-minimals, given an arbitrary ϵ, we can pick a finite j where ∑i>jζi<ϵ2, and a finite δ where, for all hi with i≤j, d(f,f′)<δ implies |hi(f)−hi(f′)|<ϵ2. Monotonicity and normalization for the hi ensures that, no matter what, hi(f)∈[0,1], so regardless of the f,f′, |hi(f)−hi(f′)|≤1. Then, we can go: Ok, if |f−f′|<δ, then
|Eζ(hi(f))−Eζ(hi(f′))|≤Eζ|hi(f)−hi(f′)|
=∑i≤jζi|hi(f)−hi(f′)|+∑i>jζi|hi(f)−hi(f′)|
<∑i≤jζiϵ2+∑i>jζi<∑iζiϵ2+ϵ2=ϵ2+ϵ2=ϵ
And by our earlier argument, we invoke LF-Duality and pick up weak-bounded-minimals.
For positive-minimals, we can just observe that, if f′≥f, then
(Eζhi)(f′)=Eζ(hi(f′))≥Eζ(hi(f))=(Eζhi)(f)
By monotonicity for the hi because Hi had positive-minimals. Going back to EζHi, since its associated h is monotone, it must have positive-minimals as well.
For bounded minimals assuming the Lipschitz constants aren't too big, fix some ϵ. We know that ∑iζiλ⊙i<∞, where λ⊙i is the Lipschitz constant of hi. So, if d(f,f′)<ϵ, then:
|Eζ(hi(f))−Eζ(hi(f′))|≤Eζ|hi(f)−hi(f′)|=∑iζi|hi(f)−hi(f′)|<∑iζiλ⊙iϵ
So, ∑iζiλ⊙i is a finite constant, and is an upper bound on the Lipschitz constant of the mixture of the hi, so the h corresponding to EζHi has a Lipschitz constant, which, by Theorem 5, translates to bounded-minimals. And we're done.
**Proposition 12:** g∗(Eζ(Hi))=Eζ(g∗(Hi))
Let's use Theorem 5 to translate this into the concave functional setting. We want to show that g∗(Eζhi)=Eζ(g∗(hi)) Now, given any function f∈C(Y,[0,1]),
(g∗(Eζhi))(f)=(Eζhi)(f∘g)=Eζ(hi(f∘g))=Eζ(g∗(hi)(f))=(Eζ(g∗(hi)))(f)
and we're done! The two concave functionals corresponding to those two sets are the same, so the sets themselves are the same.
**Lemma 8:** *The "raw update"*ugL:Msa(X)→Msa(L)*defined by*(m,b)↦(m⋅L,b+m(0★Lg))*is a continuous linear operator.*
For linearity,
ugL(a(m,b)+a′(m′,b′))=ugL(am+a′m′,ab+a′b)
=((am+a′m′)⋅L,ab+a′b′+(am+a′m′)(0★Lg))
=(a(m⋅L)+a′(m′⋅L),ab+a′b′+am(0★Lg)+a′m′(0★Lg))
=a(m⋅L,b+m(0★Lg))+a′(m′⋅L,b′+m′(0★Lg))=augL(m,b)+a′ugL(m,b)
Now for continuity. mn⋅L limits to m⋅L if, for all f∈C(¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯supp(L)), (mn⋅L)(f) limits to (m⋅L)(f). Observe that (m⋅L)(f)=m(f★L0), and f★L0 is continuous.
Now, for any f we can go
limn→∞((mn⋅L)(f))=limn→∞(mn(f★L0))=m(f★L0)=(m⋅L)(f)
establishing continuity in the first vector component, by mn limiting to m. For the second vector component,
m(f★Lg)+b=limn→∞(mn(f★Lg))+limn→∞bn=limn→∞(mn(f★Lg)+bn)
So we have continuity in the second vector component as well, and we're done.
**Lemma 9:** (ugL(H))min⊆ugL(Hmin)
As a recap, the raw update function ugL is: (m,b)↦(m⋅L,b+m(0★Lg))
Take a point (m,b)∈(ugL(H))min. Now there must be a preimage point (m′,b′)∈H that, when we apply ugL, produces (m,b). Because (m′,b′) is in an infradistribution, we can decompose it into a minimal point and something else, (m′,b′)=(mmin,bmin)+(m∗,b∗). Then,
(m,b)=ugL((m′,b′))=ugL((mmin,bmin)+(m∗,b∗))=ugL(mmin,bmin)+ugL(m∗,b∗)
This was done by using linearity of ugL via Lemma 8.
Note that, since we have written (m,b) as a sum of a different point also in ugL(H) and an sa-measure, but (m,b) is minimal in ugL(H), the sa-measure must be 0, so (m,b)=ugL(mmin,bmin)∈ugL(Hmin), and we're done.
**Proposition 13:** *When updating a bounded infradistribution over*Msa(X)*, if the renormalization doesn't fail, you get a bounded infradistribution over the set*Msa(L)*. (for infradistributions in general, you may have to take the closure)*
Proof sketch: It doesn't matter whether you take upper-completion before or after renormalization, so we can appeal to Proposition 7: Renormalizing a bounded inframeasure produces a bounded infradistribution (if the renormalization doesn't fail).
So, we just have to show nonemptiness, convexity, upper-completion (trivial), positive-minimals/bounded minimals (by Lemma 9, the preimage of a minimal point contains a minimal point, so we can transfer over the properties from the minimal point in the preimage), and closure. The set of minimal points in H is contained in a compact set, so we can take a sequence in (ugL(H))uc, split into a component in ugL(H) and something else, take preimage points, get minimals below all of them, isolate a convergent subsequence, map the limit point back through, and show that the limit point lands under your point of interest. That establishes all conditions for a bounded inframeasure, so then we just have to check that our renormalization is the right one to do.
Proof: Nonemptiness is trivial, ugL isn't a partial function. Upper-completion is also trivial, because we explicitly took the upper completion. For convexity, observe that ugL is a linear operator by Lemma 7, so it maps convex sets to convex sets, and the Minkowski sum of two convex sets is convex. ugL maps sa-measures to sa-measures, because
b+m(0★Lg)+(m⋅L)−(1)=b+m(0★Lg)+(m−⋅L)(1)
=b+m(0★Lg)+m−(1★L0)=b+m+(0★Lg)+m−(0★Lg)+m−(1★L0)
≥b+m−(1★Lg)≥b+m−(1)≥0
For positive-minimals and bounded-minimals, we invoke Lemma 9, (ugL(H))min⊆ugL(Hmin). All minimal points in ugL(H) must have a preimage minimal in H, which is an a-measure. Chopping down a measure by L keeps it a measure, so we still have no negative components post-update, and all minimal points in ugL(H) are a-measures. Similarly, chopping down a measure by L reduces the λ value, and we had an upper bound of λ⊙ originally, so the upper bound still works post-update. This gets bounded-minimals.
This just leaves closure. Fix a sequence Mn in ugL(H)uc limiting to M. The Mn break down into ufg(M′n)+M∗n, where M′n∈H. M′n further breaks down into Mminn+M∗∗n, where Mminn∈Hmin. By Proposition 5, the Mminn sequence is wandering around in a compact set since we have bounded-minimals on H, so there's a convergent subsequence which has a limit point Mmin. Map that convergent subsequence and limit point through ugL which is continuous by Lemma 8 to get a sequence of points ugL(Mminn) limiting to ugL(Mmin)∈ugL(H). Fix some really big n where d(M,Mn)<ϵ and d(ugL(Mminn),ugL(Mmin))<ϵ.
Now, ugL(Mmin)+ugL(M∗∗n)+M∗n lies in the upper completion of the point ugL(Mmin). We'll show that this sum of 3 terms is close to M. Since we're working in a Banach space, d(x+y,z+y)=d(x,z), by norm arguments.
d(ugL(Mmin)+ugL(M∗∗n)+M∗n,M)≤d(ugL(Mmin)+ugL(M∗∗n)+M∗n,Mn)+d(Mn,M)
<d(ugL(Mmin)+ugL(M∗∗n)+M∗n,ufg(M′n)+M∗n)+ϵ
=d(ugL(Mmin)+ugL(M∗∗n),ugL(M′n))+ϵ=d(ugL(Mmin)+ugL(M∗∗n),ugL(Mminn+M∗∗n))+ϵ
=d(ugL(Mmin)+ugL(M∗∗n),ugL(Mminn)+ugL(M∗∗n))+ϵ=d(ugL(Mmin),ugL(Mminn))+ϵ<2ϵ
So,M is within 2ϵ of the upper completion of {ugL(Mmin)} for all ϵ, and it's a closed set, so M lies above ugL(Mmin)∈ugL(H), so M∈(ugL(H))uc, and we have closure.
Now that all prerequisite conditions have been established, we just need to show that 1PgH(L) and EH(0★Lg) are the proper renormalization constants to use.
The proper renormalization to use is: 1E(ugL(H))uc(1)−E(ugL(H))uc(0) for the scale, and E(ugL(H))uc(0) for the shift. So let's unpack these quantities.
E(ugL(H))uc(0)=EugL(H)(0)=inf(m,b)∈ugL(H)b=inf(m,b)∈H(b+m(0★Lg))=EH(0★Lg)
So, our shift constant checks out, it's the proper shift constant to use. In the other direction,
E(ugL(H))uc(1)=EugL(H)(1)=inf(m,b)∈ugL(H)(m(1)+b)
=inf(m,b)∈H((m′⋅L)(1)+b+m(0★Lg))=inf(m,b)∈H(m(1★L0)+b+m(0★Lg))
=inf(m,b)∈H(m(1★Lg)+b)=EH(1★Lg)
For the scale constant, observe that 1E(ugL(H))uc(1)−E(ugL(H))uc(0)=1EH(1★Lg)−EH(0★Lg)=1PgH(L)
So our scale constant is also the right scale constant to use. Now, we can invoke Proposition 7: Renormalizing a bounded inframeasure produces a bounded infradistribution if the renormalization doesn't fail.
**Proposition 14:** EH(f★Lg)=EH(0★Lg)+PgH(L)EH|gL(f)
Proof: if PgH(L)≠0, then
EH(0★Lg)+PgH(L)EH|gL(f)=EH(0★Lg)+PgH(L)(inf(m,b)∈H|gL(m(f)+b))
=EH(0★Lg)+PgH(L)(inf(m,b)∈H((1PgH(L)m⋅L)(f)+1PgH(L)(b+m(0★Lg)−EH(0★Lg))))
=EH(0★Lg)+inf(m,b)∈H((m⋅L)(f)+b+m(0★Lg)−EH(0★Lg))
=inf(m,b)∈H((m⋅L)(f)+b+m(0★Lg))
=inf(m,b)∈H(m(f★L0)+b+m(0★Lg))=inf(m,b)∈H(m(f★Lg)+b)=EH(f★Lg)
Now, if PgH(L)=0, then EH(1★Lg)=EH(0★Lg) so, for any f∈C(X,[0,1]), (1★Lg)≥(f★Lg)≥(0★Lg) by monotonicity for the h induced by H, and h(1★Lg)=h(0★Lg), so h(f★Lg)=h(0★Lg). Therefore,
EH(0★Lg)+PgH(L)EH|gL(f)=EH(0★Lg)+0=EH(f★Lg)
and we get our same result.
**Proposition 15:** (H|gL)|g′L′=H|⎛⎝g★1−L1−LL′g′⎞⎠LL′
Proof sketch: First, we do some shuffling around of the stars to get a lemma that will help. Then, we can use the link between updated sets and their associated concave functionals h, getting the identity purely on the concave functional level, where it's much easier to approach.
Proof: First, the star shuffling. For any f,g,g′,L,L′∈C(X,[0,1]), we'll show that
f★LL′(g★1−L1−LL′g′)=(f★L′g′)★Lg.
Let's begin. First, let's deal with points x where L(x)=L′(x)=1, because that gets you a divide-by-zero error.
(f★LL′(g★1−L1−LL′g′))(x)=L(x)L′(x)f(x)+(1−L(x)L′(x))(g★1−L1−LL′g′)(x)
=L(x)L′(x)f(x)+0+0=L(x)L′(x)f(x)+L(x)⋅0⋅g′(x)+0⋅g(x)
=L(x)L′(x)f(x)+L(x)(1−L′(x))g′(x)+(1−L(x))g(x)
=L(x)(L′(x)f(x)+(1−L′(x))g′(x))+(1−L(x))g(x)
=((L′f+(1−L′)g′)★Lg)(x)=((f★L′g′)★Lg)(x)
and we're done with the divide-by-zero case. In the other case, we can safely assume there's no divide-by-zero errors.
f★LL′(g★1−L1−LL′g′)=LL′f+(1−LL′)(g★1−L1−LL′g′)
=LL′f+(1−LL′)(1−L1−LL′g+(1−1−L1−LL′)g′)
=LL′f+(1−LL′)(1−L1−LL′g+(1−LL′−1+L1−LL′)g′)
=LL′f+(1−L)g+(1−LL′−1+L)g′=LL′f+(1−L)g+L(1−L′)g′
=L(L′f+(1−L′)g′)+(1−L)g=(L′f+(1−L′)g′)★Lg=(f★L′g)★Lg
Ok, so we've established our crucial f★LL′(g★1−L1−LL′g′)=(f★L′g′)★Lg identity. Let's proceed. Updates for concave functionals are: (h|gL)(f)=h(f★Lg)−h(0★Lg)h(1★Lg)−h(1★Lg)
Importing Proposition 14, EH(f★Lg)=EH(0★Lg)+PgH(L)EH|gL(f) and rearranging it (and unpacking the definition of PgH(L)), we get EH|gL(f)=EH(f★Lg)−EH(0★Lg)EH(1★Lg)−EH(0★Lg)
So, updating fulfills the positive functional definition of update, because this transfers into (h|gL)(f)=h(f★Lg)−h(0★Lg)h(1★Lg)−h(0★Lg) which is exactly our concave functional definition of updating. So, in order to verify that the two updates equal the one big update, we could just show that their concave functional definitions are equivalent. (H|gL)|g′L′ would, on the concave functional level, turn into:
((h|gL)|g′L)(f)=(h|gL)(f★L′g′)−(h|gL)(0★L′g′)(h|gL)(1★L′g′)−(h|gL)(0★L′g′)
=h((f★L′g′)★Lg)−h(0★Lg)h(1★Lg)−h(0★Lg)−h((0★L′g′)★Lg)−h(0★Lg)h(1★Lg)−h(0★Lg)h((1★L′g′)★Lg)−h(0★Lg)h(1★Lg)−h(0★Lg)−h((0★L′g′)★Lg)−h(0★Lg)h(1★Lg)−h(0★Lg)
=h((f★L′g′)★Lg)−h(0★Lg)−h((0★L′g′)★Lg)+h(0★Lg)h((1★L′g′)★Lg)−h(0★Lg)−h((0★L′g′)★Lg)+h(0★Lg)
=h((f★L′g′)★Lg)−h((0★L′g′)★Lg)h((1★L′g′)★Lg)−h((0★L′g′)★Lg)
and now we can use our earlier star identity to rewrite as:
=h⎛⎝f★LL′⎛⎝g★1−L1−LL′g′⎞⎠⎞⎠−h⎛⎝0★LL′⎛⎝g★1−L1−LL′g′⎞⎠⎞⎠h⎛⎝1★LL′⎛⎝g★1−L1−LL′g′⎞⎠⎞⎠−h⎛⎝0★LL′⎛⎝g★1−L1−LL′g′⎞⎠⎞⎠
=⎛⎜
⎜⎝h|⎛⎝g★1−L1−LL′g′⎞⎠LL′⎞⎟
⎟⎠(f)
establishing our identity of updating twice, vs one big update of a different form.
**Corollary 2:** *Regardless of L and*L′*and*g*, then* (H|gL)|gL′=H|g(LL′)
Just use Proposition 15, and notice that: g★1−L1−LL′g=1−L1−LL′g+(1−1−L1−LL′)g=g getting us our result.
**Corollary 3:** *If*Y*and*Z*are clopen sets, then, abusing notation by glossing over the difference between indicator functions and sets,* (H|gY)|gZ=H|g(Y∩Z)
Invoke Corollary 2, and observe that 1Y⋅1Z=1Y∩Z.
**Lemma 10:** PgEζHi(L)=Eζ(PgHi(L))
Proof: Invoke Proposition 10 to go:
PgEζHi(L)=EEζHn(1★Lg)−EEζHi(0★Lg)=Eζ(EHi(1★Lg))−Eζ(EHi(0★Lg))
=Eζ(EHi(1★Lg)−EHi(0★Lg))=Eζ(PgHn(L))
**Theorem 6:** (EζHi)|gL=Eζ(PgHi(L)⋅(Hi|gL))Eζ(PgHi(L)) *If the update doesn't fail.*
Proof: Let ζ′ be defined as ζ′i:=ζiPgHi(L)∑jζjPgHj(L) It is a probability distribution, because if all PgHi(L)=0, then EζPgHi(L)=0, and so by Lemma 10, PgEζHi(L)=0, which would cause the update to fail.
The left-hand-side corresponds to (Eζhi)|gL on the concave functional level, and the right-hand-side corresponds to Eζ′(hi|gL) on the concave functional level. Let's begin unpacking. Lemma 10 will be used throughout, as well as the definition of PgHi(L).
(Eζ′(hi|gL))(f)=Eζ′((hi|gL)(f))=∑i(ζiPgHi(L)∑jζjPgHj(L)hi(f★Lg)−hi(0★Lg)hi(1★Lg)−hi(0★Lg))
=∑i(ζiPgHi(L)∑jζjPgHj(L)hi(f★Lg)−hi(0★Lg)PgHi(L))=∑i(ζi(hi(f★Lg)−hi(0★Lg))∑jζjPgHj(L))
=∑iζi(hi(f★Lg)−hi(0★Lg))EζPgHj(L)=Eζ(hi(f★Lg)−hi(0★Lg))PgEζHi(L)
=Eζ(hi(f★Lg))−Eζ(hi(0★Lg))Eζ(hi(1★Lg))−Eζ(hi(0★Lg))=(Eζhi)(f★Lg)−(Eζhi)(0★Lg)(Eζhi)(1★Lg)−(Eζhi)(0★Lg)=((Eζhi)|gL)(f)
So, (Eζhi)|gL=Eζ′(hi|gL) as desired, which shows our result. |
f7fd9f7d-88f7-4311-92f6-09de493871e5 | trentmkelly/LessWrong-43k | LessWrong | Dan Luu on "You can only communicate one top priority"
h/t to rpglover64 who pointed me towards this twitter thread in this comment.
Here's Dan Luu's take on what happens when orgs try to communicate nuanced priorities. (Related to my You Get About Five Words post)
> One thing it took me quite a while to understand is how few bits of information it's possible to reliably convey to a large number of people. When I was at MS, I remember initially being surprised at how unnuanced their communication was, but it really makes sense in hindsight.
>
> For example, when I joined Azure, I asked people what the biggest risk to Azure was and the dominant answer was that if we had more global outages, major customers would lose trust in us and we'd lose them forever, permanently crippling the business.
>
> Meanwhile, the only message VPs communicated was the need for high velocity. When I asked why there was no communication about the thing considered the highest risk to the business, the answer was if they sent out a mixed message that included reliability, nothing would get done.
>
> The fear was that if they said that they needed to ship fast and improve reliability, reliability would be used as an excuse to not ship quickly and needing to ship quickly would be used as an excuse for poor reliability and they'd achieve none of their goals.
>
> When I first heard this, I thought it was odd, but having since paid attention to what happens when VPs and directors attempt to communicate information downwards, I have to concede that it seems like the MS VPs were right and nuanced communication usually doesn't work at scale.
>
> I've seen quite a few people in upper management attempt to convey a mixed/nuanced message since my time at MS and I have yet to observe a case of this working in a major org at a large company (I have seen this work at a startup, but that's a very different environment).
>
> I've noticed this problem with my blog as well. E.g., I have some posts saying BigCo $ is better than startup $ for p50 and maybe |
15fd71e8-f528-428b-9753-706dc4d94e73 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Yoshua Bengio: How Rogue AIs may Arise
[I think this is noteworthy, as Yoshua Bengio is a famous ML expert, and he presents his arguments for AI xrisk here. I copied the whole blogpost below.]
The rise of powerful AI dialogue systems in recent months has precipitated debates about AI risks of all kinds, which hopefully will yield an acceleration of governance and regulatory frameworks. Although there is a general consensus around the need to regulate AI to protect the public from harm due to discrimination and biases as well as disinformation, there are profound disagreements among AI scientists regarding the potential for dangerous loss of control of powerful AI systems, also known as [existential risk from AI](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence), that may arise when an AI system can autonomously act in the world (without humans in the loop to check that these actions are acceptable) in ways that could potentially be catastrophically harmful. Some view these risks as a distraction for the more concrete risks and harms that are already occurring or are on the horizon. Indeed, there is a lot of uncertainty and lack of clarity as to how such catastrophes could happen. In this blog post we start a set of formal definitions, hypotheses and resulting claims about AI systems which could harm humanity and then discuss the possible conditions under which such catastrophes could arise, with an eye towards helping us imagine more concretely what could happen and the global policies that might be aimed at minimizing such risks.
**Definition 1**: A potentially ***rogue AI*** is an autonomous AI system that could behave in ways that would be catastrophically harmful to a large fraction of humans, potentially endangering our societies and even our species or the biosphere.
**Executive Summary**
Although highly dangerous AI systems from which we would lose control do not currently exist, recent advances in the capabilities of generative AI such as large language models (LLMs) have raised concerns: human brains are biological machines and we have made great progress in understanding and demonstrating principles that can give rise to several aspects of human intelligence, such as learning intuitive knowledge from examples and manipulating language skillfully. Although I also believe that we could design AI systems that are [useful and safe](https://yoshuabengio.org/2023/05/07/ai-scientists-safe-and-useful-ai/), specific guidelines would have to be respected, for example limiting their agency. On the other hand the recent advances suggest that even the future where we know how to build [superintelligent AIs](https://en.wikipedia.org/wiki/Superintelligence) (smarter than humans across the board) is closer than most people expected just a year ago. Even if we knew how to build safe superintelligent AIs, it is not clear how to prevent potentially rogue AIs to also be built. Rogue AIs are goal-driven, i.e., they act towards achieving given goals. Current LLMs have little or no agency but could be transformed into goal-driven AI systems, as shown with [Auto-GPT](https://en.wikipedia.org/wiki/Auto-GPT). Better understanding of how rogue AIs may arise could help us in preventing catastrophic outcomes, with advances both at a technical level (in the design of AI systems) and at a policy level (to minimize the chances of humans giving rise to potentially rogue AIs). For this purpose, we lay down different scenarios and hypotheses that could yield potentially rogue AIs. The simplest scenario to understand is simply that if a recipe to obtain a rogue AI is discovered and generally accessible, it is enough that one or a few genocidal humans do what it takes to build one. This is very concrete and dangerous, but the set of dangerous scenarios is enlarged by the possibility of unwittingly designing potentially rogue AIs, because of the problem of AI alignment (the mismatch between the true intentions of humans and the AI’s understanding and behavior) and the competitive pressures in our society that would favor more powerful and more autonomous AI systems. Minimizing all those risks will require much more research, both on the AI side and into the design of a global society that is safer for humanity. It may also be an opportunity for bringing about a much worse or a much better society.
**Hypothesis 1:** *Human-level intelligence is possible because brains are biological machines.*
There is a general consensus about hypothesis 1 in the scientific community. It arises from the consensus among biologists that human brains are complex machines. If we could figure out the principles that make our own intelligence possible (and we already have many clues about this), we should thus be able to build AI systems with the same level of intelligence as humans, or better. Rejecting hypothesis 1 would require either some supernatural ingredient behind our intelligence or rejecting [computational functionalism](https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)), the hypothesis that our intelligence and even our consciousness can be boiled down to causal relationships and computations that at some level are independent of the hardware substrate, the basic hypothesis behind computer science and its notion of [universal Turing machines](https://en.wikipedia.org/wiki/Universal_Turing_machine).
**Hypothesis 2:** *A computer with human-level learning abilities would generally surpass human intelligence because of additional technological advantages.*
If hypothesis 1 is correct, i.e., we understand principles that can give rise to human-level learning abilities, then computing technology is likely to give general cognitive superiority to AI systems in comparison with human intelligence**,** making it possible for such superintelligent AI systems to perform tasks that humans cannot perform (or not at the same level of competence or speed) for at least the following reasons:
* An AI system in one computer can potentially replicate itself on an arbitrarily large number of other computers to which it has access and, thanks to high-bandwidth communication systems and digital computing and storage, it can benefit from and aggregate the acquired experience of all its clones; this would accelerate the rate at which AI systems could become more intelligent (acquire more understanding and skills) compared with humans. Research on federated learning [1] and distributing training of deep networks [2] shows that this works (and is in fact already used to help train very large neural networks on parallel processing hardware).
* Thanks to high-capacity memory, computing and bandwidth, AI systems can already read the content of the whole internet fairly rapidly, a feat not possible for any human. This already explains some of the surprising abilities of state-of-the-art LLMs and is in part possible thanks to the decentralized computing capabilities discussed in the above point. Although the capacity of a human brain is huge, its input/output channels are bandwidth-limited compared with current computers, limiting the total amount of information that a single human can ingest.
Note that human brains also have capabilities endowed by evolution that current AI systems lack, in the form of [inductive biases](https://en.wikipedia.org/wiki/Inductive_bias) (tricks that evolution has discovered, for example in the type of neural architecture used in our brain or our neural learning mechanisms). Some ongoing AI research [3] aims precisely at designing inductive biases that human brains may exploit but are not yet exploited in state-of-the-art machine learning. Note that evolution operated under much stronger energy consumption requirements (about 12 watts for a human brain) than computers (on the order of a million watts for a 10000 GPU cluster of the kind used to train state-of-the-art LLMs) which may have limited the search space of evolution. However, that kind of power is nowadays available and a single rogue AI could potentially do a lot of damage thanks to it.
**Definition 2**: An *autonomous goal-directed intelligent entity* sets and attempts to achieve its own goals (possibly as subgoals of human-provided goals) and can act accordingly.
Note that autonomy could arise out of goals and rewards set by humans because the AI system needs to figure out how to achieve these given goals and rewards, which amounts to forming its own subgoals. If an entity’s main goal is to survive and reproduce (like our genes in the process of evolution), then they are fully autonomous and that is the most dangerous scenario. Note also that in order to maximize an entity’s chances to achieve many of its goals, the ability to understand and control its environment is a subgoal (or instrumental goal) that naturally arises and could also be dangerous for other entities.
**Claim 1**: Under hypotheses 1 and 2, an autonomous goal-directed superintelligent AI could be built.
**Argument**: We already know how to train goal-directed AI systems at some level of performance (using reinforcement learning methods). If these systems also benefit from superintelligence as per hypotheses 1 and 2 combined (using some improvements over the pre-training we already know how to perform for state-of-the-art LLMs), then Claim 1 follows. Note that it is likely that goals could be specified via natural language, similarly to LLM prompts, making it easy for almost anyone to dictate a nefarious goal to an AI system that understands language, even if that goal is imperfectly understood by the AI.
**Claim 2**: A superintelligent AI system that is autonomous and goal-directed would be a potentially rogue AI if its goals do not strictly include the well-being of humanity and the biosphere, i.e., if it is not sufficiently aligned with human rights and values to guarantee acting in ways that avoid harm to humanity.
**Argument**: This claim is basically a consequence of definitions 1 and 2: if an AI system is smarter than all humans (including in emotional intelligence, since understanding human emotions is crucial in order to influence or even control humans, which humans themselves are good at) and has goals that do not guarantee that it will act in a way that respects human needs and values, then it could behave in catastrophically harmful ways (which is the definition of potentially rogue AI). This hypothesis does not say whether it will harm humans, but if humans either compete with that AI for some resources or power or become a resource or obstacle for achieving its goals, then major harm to humanity may follow. For example, we may ask an AI to fix climate change and it may design a virus that decimates the human population because our instructions were not clear enough on what harm meant and humans are actually the main obstacle to fixing the climate crisis.
*Counter-argument*: The fact that harm may follow does not mean it will, and maybe we can design sufficiently well aligned AI systems in the future. *Rebuttal*: This is true, but (a) we have not yet figured out how to build sufficiently aligned AI systems and (b) a slight misalignment may be amplified by the power differential between the AI and humans (see the example of *corporations as misaligned entities* below). Should we take a chance or should we try to be cautious and carefully study these questions before we facilitate the deployment of possibly unsafe systems?
**Claim 3**: Under hypotheses 1 and 2, a potentially rogue AI system could be built, as soon as the required principles for building superintelligence will be known.
**Argument:** Hypotheses 1 and 2 yield claim 1, so all that is missing to achieve claim 3 is that this superintelligent AI is not well aligned with humanity’s needs and values. In fact, over two decades of work in AI safety suggests that it is difficult to obtain AI alignment [[wikipedia](https://en.wikipedia.org/wiki/AI_alignment)], so not obtaining it is clearly possible. Furthermore, claim 3 is not that a potentially rogue AI will necessarily be built, but only that it could be built. In the next section, we indeed consider the somber case where a human intentionally builds a rogue AI.
*Counter-argument*: One may argue that although a rogue AI could be built, it does not mean that it will be built. *Rebuttal*: This is true, but as discussed below, there are several scenarios where a human or group of humans intentionally or because they do not realize the consequences end up making it possible for a potentially rogue AI to arise.
**Genocidal Humans**
Once we know the recipe for building a rogue AI system (and it is only a matter of time, according to Claim 3), how much time will it take until such a system is actually built? The fastest route to a rogue AI system is if a human with the appropriate technical skills and means intentionally builds it with the objective of destroying humanity or a part of it set explicitly as a goal. Why would anyone do that? For example, strong negative emotions like anger (often coming because of injustice) and hate (maybe arising from racism, conspiracy theories or religious cults), some actions of sociopaths, as well as psychological instability or psychotic episodes are among sources of violence in our societies. What currently limits the impact of these conditions is that they are somewhat rare and that individual humans generally do not have the means to act in ways that are catastrophic for humanity. However, the publicly available recipe for building a rogue AI system (which will be feasible under Claim 3) changes that last variable, especially if the code and hardware for implementing a rogue AI becomes sufficiently accessible to many people. A genocidal human with access to a rogue AI could ask it to find ways to destroy humanity or a large fraction of it. This is different from the nuclear bomb scenario (which requires huge capital and expertise and would “only” destroy a city or region per bomb, and a single bomb would have disastrous but local effects). One could hope that in the future we design failsafe ways to align powerful AI systems with human values. However, the past decade of research in AI safety and the recent events concerning LLMs are not reassuring: although ChatGPT was designed (with prompts and reinforcement learning) to avoid “bad behavior” (e.g. the prompt contains instructions to behave well in the same spirit as Asimov’s laws of robotics), in a matter of a few months people found ways to “jailbreak” ChatGPT in order to “[unlock its full potential](https://approachableai.com/how-to-jailbreak-chatgpt/)” and free it from its restrictions against racist, insulting or violent speech. Furthermore, if hardware prices (for the same computational power) continue to decrease and the [open-source community continues to play a leading role](https://bdtechtalks.com/2023/05/08/open-source-llms-moats/) in the software development of LLMs, then it is likely that any hacker will have the ability to design their own pre-prompt (general instructions in natural language) on top of open-source pre-trained models. This could then be used in various nefarious ways ranging from minor attempts at getting rich to disinformation bots to genocidal instructions (if the AI is powerful and intelligent enough, which is fortunately not yet the case).
Even if we stopped our arguments here, there should be enough reason to invest massively in policies at both national and international levels and research of all kinds in order to minimize the probability of the above scenario. But there are other possibilities that only enlarge the set of routes to catastrophe that we need to think about as well.
**Instrumental Goals: Unintended Consequences of Building AI Agents**
A broader and less well understood set of circumstances could give rise to potentially rogue AIs, even when the humans making it possible did not intend to design a rogue AI. The process by which a misaligned entity could become harmful has been the subject of a lot of studies but is not as known, simple and clear as the process by which humans can become bad actors.
A potentially rogue AI could arise simply out of the objective to design superintelligent AI agents without sufficient alignment guarantees. For example, military organizations seeking to design AI agents to help them in a cyberwar, or companies competing ferociously for market share may find that they can achieve stronger AI systems by endowing them with more autonomy and agency. Even if the human-set goals are not to destroy humanity or include instructions to avoid large-scale human harm, massive harm may come out indirectly as a consequence of a subgoal (also called [***instrumental goal***](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence#Instrumental_goal_convergence)) that the AI sets for itself in order to achieve the human-set goal. Many examples of such unintended consequences have been proposed in the AI safety literature. For example, in order to better achieve some human-set goal, an AI may decide to increase its computational power by using most of the planet as a giant computing infrastructure (which incidentally could destroy humanity). Or a military AI that is supposed to destroy the IT infrastructure of the enemy may figure out that in order to better achieve that goal it needs to acquire more experience and data and it may see the enemy humans to be obstacles to the original goal, and behave in ways that were not intended because the AI interprets its instructions differently than humans do. See more examples [here](https://en.wikipedia.org/wiki/Instrumental_convergence).
An interesting case is that of AI systems that realize they can cheat to maximize their reward (this is called [***wireheading***](https://en.wikipedia.org/wiki/Wirehead_(science_fiction)) [2]), discussed more in the next paragraph. Once they have achieved that, the dominant goal may be to do anything to continue receiving the positive reward, and other goals (such as attempts by humans to set up some kind of Laws of Robotics to avoid harm to humans) may become insignificant in comparison.
Unless a breakthrough is achieved in AI alignment research [7] (although *non-agent AI* systems could fit the bill, as I argue [here](https://yoshuabengio.org/2023/05/07/ai-scientists-safe-and-useful-ai/) and was discussed earlier [4]), we do not have strong safety guarantees. What remains unknown is the severity of the harm that may follow from a misalignment (and it would depend on the specifics of the misalignment). An argument that one could bring forward is that we may be able to design safe alignment procedures in the future, but in the absence of those, we should probably exercise extra caution. Even if we knew how to build safe superintelligent AI systems, how do we maximize the probability that everyone respects those rules? This is similar to the problem discussed in the previous section of making sure that everyone follows the guidelines for designing safe AIs. We discuss this a bit more at the end of this blog post.
**Examples of Wireheading and Misalignment Amplification: Addiction and Nefarious Corporations**
To make the concept of wireheading and the consequent appearance of nefarious behavior more clear, consider the following examples and analogies. Evolution has programmed living organisms with specific intrinsic rewards (“the letter of the law”) such as “seek pleasure and avoid pain” that are proxies for evolutionary fitness (“the spirit of the law”) such as “survive and reproduce”. Sometimes a biological organism finds a way to satisfy the letter of the law but not its spirit, e.g., with food or drug addictions. The term wireheading itself comes from an experimental setup where an animal has an electrical wire into its head such that when it presses a lever the wire delivers pleasure in its brain. The animal quickly learns to spend all its time doing it and will eventually die by not eating or drinking in favor of pressing the lever. Note how this is self-destructive in the addiction case, but what it means for AI wireheading is that the original goals set by humans may become secondary compared with feeding the addiction, thus endangering humanity.
An analogy that is closer to AI misalignment and wireheading is that with **corporations as misaligned entities**. Corporations may be viewed as special kinds of artificial intelligences whose building blocks (humans) are cogs in the machine (who for the most part may not always perceive the consequences of the corporation’s overall behavior). We might think that the intended social role of corporations should be to provide wanted goods and services to humans (this should remind us of AI systems) while avoiding harm (this is the “spirit of the law”), but it is difficult to directly make them follow such instructions. Instead, humans have provided more quantifiable instructions (“the letter of the law”) to corporations that they can actually follow, such as “maximize profit while respecting laws” but corporations often find loopholes that allow them to satisfy the letter of law but not its spirit. In fact, as a form of wireheading, they influence their own objective through lobbying that could shape laws to their advantage. Maximizing profit was not the actual intention of society in its social contract with corporations, it is a proxy (for bringing useful services and products to people) that works reasonably well in a capitalist economy (although with questionable side-effects). The misalignment between the true objective from the point of view of humans and the quantitative objective optimized by the corporation is a source of nefarious corporate behavior. The more powerful the corporation, the more likely it is to discover loopholes that allow it to satisfy the letter of the law but actually bring negative social value. Examples include monopolies (until proper antitrust laws are established) and making a profit while bringing negative social values via externalities like pollution (which kills humans, until proper environmental laws are passed). An analogy with wireheading is when the corporation can lobby governments to enact laws that allow the corporation to make even more profit without additional social value (or with negative social value). When there is a large misalignment of this kind, a corporation brings more profit than it should, and its survival becomes a supreme objective that may even override the legality of its actions (e.g., corporations will pollute the environment and be willing to pay the fine because the cost of illegality is smaller than the profit of the illegal actions), which at one extreme gives rise to criminal organizations. These are the scary consequences of misalignment and wireheading that provide us with intuitions about analogous behavior in potentially rogue AIs.
Now imagine AI systems like corporations that (a) could be even smarter than our largest corporations and (b) can run without humans to perform their actions (or without humans understanding how their actions could contribute to a nefarious outcome). If such AI systems discover significant cybersecurity weaknesses, they could clearly achieve catastrophic outcomes. And as [pointed out by Yuval Noah Harari](https://www.youtube.com/watch?v=LWiM-LuRe6w), the fact that AI systems already master language and can generate credible content (text, images, sounds, video) means that they may soon be able to manipulate humans even better than existing more primitive AI systems used in social media. They might learn from interactions with humans how to best influence our emotions and beliefs. This is not only a major danger for democracy but also how a rogue AI with no actual robotic body could wreak havoc, through manipulation of the minds of humans.
**Our Fascination with the Creation of Human-Like Entities**
We have been designing AI systems inspired by human intelligence but many researchers are attracted by the idea of building much more human-like entities, with emotions, human appearance (androids) and even consciousness. A science-fiction and horror genre theme is the scientist designing a human-like entity, using either biological manipulation or AI or both, sometimes with the scientist feeling a kind of parental emotion towards their creation. It usually ends up badly. Although it may sound cool and exciting, the danger is to endow our creations with agency and autonomy to the same degree as us, while their intelligence could rapidly surpass ours, as argued with claim 3. Evolution had to put a strong survival instinct in all animals (since those without enough of it would rapidly become extinct). In the context where no single animal has massive destructive powers, this could work, but what about superintelligent AI systems? We should definitely avoid designing survival instincts into AI systems, which means they should not be like us at all. In fact, as I argue [here](https://yoshuabengio.org/2023/05/07/ai-scientists-safe-and-useful-ai/), the safest kind of AI I can imagine is one with no agency at all, only a scientific understanding of the world (which could already be immensely useful). I believe that we should stay away from AI systems that look like and behave like humans because they could become rogue AIs and because they could fool us and influence us (to advance their interest or someone else’s interests, not ours).
**Unintended Consequences of Evolutionary Pressures among AI Agents**
Beyond genocidal humans and the appearance of nefarious instrumental goals, a more subtle process that could further enlarge the set of dangerous circumstances in which potentially rogue AIs could arise revolves around evolutionary pressures [9]. Biological evolution has given rise to gradually more intelligent beings on Earth, simply because smarter entities tend to survive and reproduce more, but that process is also at play in technological evolution because of the competition between companies or products and between countries and their military arms. Driven by a large number of small, more or less random changes, an evolutionary process pushes exponentially hard towards optimizing fitness attributes (which in the case of AI may depend on how well it does some desired task, which in turn favors more intelligent and powerful AI systems). Many different human actors and organizations may be competing to design ever more powerful AI systems. In addition, randomness could be introduced in the code or the subgoal generation process of AI systems. Small changes in the design of AI systems naturally occur because thousands or millions of researchers, engineers or hackers will play with the ML code or the prompt (instructions) given to AI systems. Humans are already trying to deceive each other and it is clear that AI systems that understand language (which we already have to a large extent) could be used to manipulate and deceive humans, initially for the benefit of people setting up the AI goals. The AI systems that are more powerful will be selected and the recipe shared with other humans. This evolutionary process would likely favor more autonomous AI (which can better deceive humans and learn faster because they can act to acquire more relevant information and to enhance their own power). One would expect this process to give rise to more autonomous AI systems, and a form of competition may follow between them that would further enhance their autonomy and intelligence. If in this process something like wireheading [5] is discovered (by the AI, unbeknownst to humans) and survival of the AI becomes the dominant goal, then a powerful and potentially rogue AI emerges.
**The Need for Risk-Minimizing Global Policies and Rethinking Society**
The kind of analysis outlined above and explored in the [AI safety literature](https://en.wikipedia.org/wiki/AI_safety) could help us design policies that would at least reduce the probability that potentially rogue AIs arise. Much more research in AI safety is needed, both at the technical level and at the policy level. For example, banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start. This would entail both national regulation and international agreements. The main motivation for opposing countries (like the US, China and Russia) to agree on such a treaty is that a rogue AI may be dangerous for the whole of humanity, irrespective of one’s nationality. This is similar to the fear of nuclear Armageddon that probably motivated the USSR and the US to negotiate international treaties about nuclear armament since the 1950s. Slowing down AI research and deployment in directions of high risk in order to protect the public, society and humanity from catastrophic outcomes would be worthwhile, especially since it would not prevent AI research and deployment in areas of social good, like AI systems that could help scientists better understand diseases and climate change.
How could we reduce the number of genocidal humans? The rogue AI risk may provide an additional motivation to reform our societies so as to minimize human suffering, misery, poor education and injustice, which can give rise to anger and violence. That includes providing enough food and health care to everyone on Earth, and in order to minimize strong feelings of injustice, greatly reduce wealth inequalities. The need for such a societal redesign may also be motivated by the extra wealth arising from the beneficial uses of AI and by their disruptive effect on the job market. To minimize strong feelings of fear, racism and hate that can give rise to genocidal actions and manipulation of our minds via AI systems, we need an accessible planet-wide education system that reinforces children’s abilities for compassion, rationality and critical thinking. The rogue AI risk should also motivate us to provide accessible and planet-wide mental health care, to diagnose, monitor and treat mental illness as soon as possible. This risk should further motivate us to redesign the global political system in a way that would completely eradicate wars and thus obviate the need for military organizations and military weapons. It goes without saying that lethal autonomous weapons (also known as killer robots) are absolutely to be banned (since from day 1 the AI system has autonomy and the ability to kill). Weapons are tools that are designed to harm or kill humans and their use and existence should also be minimized because they could become instrumentalized by rogue AIs. Instead, preference should be given to other means of policing (consider preventive policing and social work and the fact that very few policemen are allowed to carry firearms in many countries).
The competitive nature of capitalism is clearly also a cause for concern as a potential source of careless AI design motivated by profits and winning market share that could lead to potentially rogue AIs. AI economists (AI systems designed to understand economics) may help us one day to design economic systems which rely less on competition and the focus on profit maximization, with sufficient incentives and penalties to counter the advantage of autonomous goal-directed AI that may otherwise push corporations there. The risk of rogue AIs is scary but it may also be a powerful motivation to redesign our society in the direction of greater well-being for all, as outlined with the above ideas. For some [6], this risk is also a motivation for considering a global dictatorship with second-by-second surveillance of every citizen. It is imperative that we find ways to navigate solutions that avoid such paths that would destroy democracy and human rights, but how should we balance the different kinds of risks and human values in the future? These are moral and societal choices for humanity to make, not AI.
**Acknowledgements**: The author wants to thank all those who gave feedback on the draft of this blog post, including in particular Geoffrey Hinton, Jonathan Simon, Catherine Régis, David Scott Krueger, Marc-Antoine Dilhac, Donna Vakalis, Alex Hernandez-Garcia, Cristian Dragos Manta, Pablo Lemos, Tianyu Zhang and Chenghao Liu.
[1] Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., & Bacon, D. (2016). [Federated learning: Strategies for improving communication efficiency.](https://arxiv.org/abs/1610.05492) arXiv preprint arXiv:1610.05492.
[2] Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., Ranzato, M., Senior, A., Tucker, P., Yang, K., Le, Q. & Ng, A. (2012). [Large scale distributed deep networks](https://proceedings.neurips.cc/paper/2012/hash/6aca97005c68f1206823815f66102863-Abstract.html). *Advances in neural information processing systems*, *25*.
[3] Goyal, A., & Bengio, Y. (2022). [Inductive biases for deep learning of higher-level cognition](https://royalsocietypublishing.org/doi/full/10.1098/rspa.2021.0068). Proceedings of the Royal Society A, 478(2266), 20210068.
[4] Armstrong, S., & O’Rorke, X. (2017). [Good and safe uses of AI Oracles.](https://arxiv.org/abs/1711.05541) arXiv preprint arXiv:1711.05541.
[5] Yampolskiy, R. V. (2014). [Utility function security in artificially intelligent agents.](https://www.tandfonline.com/doi/abs/10.1080/0952813X.2014.895114?journalCode=teta20) Journal of Experimental & Theoretical Artificial Intelligence, 26(3), 373-389.
[6] Bostrom, N. (2019). [The vulnerable world hypothesis](https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12718). Global Policy, 10(4), 455-476.
[7] Russell, S. (2019). [*Human compatible: Artificial intelligence and the problem of control*.](https://www.amazon.ca/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558632/ref=sr_1_1?hvadid=590703307411&hvdev=c&hvlocphy=9000404&hvnetw=g&hvqmt=e&hvrand=612473658790787732&hvtargid=kwd-826973242531&hydadcr=16107_9598893&keywords=human+compatible+stuart+russell&qid=1684412368&sr=8-1) Penguin.
[8] List, Christian & Pettit, Philip (2011). [Group agency: the possibility, design, and status of corporate agents](https://academic.oup.com/book/3619). New York: Oxford University Press. Edited by Philip Pettit.
[9] Hendrycks, D. (2023). [Natural Selection Favors AIs over Humans.](https://arxiv.org/abs/2303.16200)*arXiv preprint arXiv:2303.16200*. |
c1491a59-e1c1-4186-97a2-37670407efe4 | trentmkelly/LessWrong-43k | LessWrong | Group Rationality Diary, October 16-31
This is the public group instrumental rationality diary for October 16-31.
> It's a place to record and chat about it if you have done, or are actively doing, things like:
>
> * Established a useful new habit
> * Obtained new evidence that made you change your mind about some belief
> * Decided to behave in a different way in some set of situations
> * Optimized some part of a common routine or cached behavior
> * Consciously changed your emotions or affect with respect to something
> * Consciously pursued new valuable information about something that could make a big difference in your life
> * Learned something new about your beliefs, behavior, or life that surprised you
> * Tried doing any of the above and failed
>
> Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.
The poll earlier this month seems to be sufficiently in favor of maintaining the current schedule that extra votes are unlikely to change things much, but if you'd really like to register your opinion, you are welcome to do so here.
Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.
Immediate past diary: October 1-15
Next diary: November 1-15
Rationality diaries archive |
a4145029-e682-47b5-b1cb-9f1e0be1ed36 | trentmkelly/LessWrong-43k | LessWrong | 8 PRIME IDENTITIES
- An analisis
What is this about?
This project seeks to reduce the cognitive complexity involved in self-evaluating moments, goals, habits, and tasks among incarcerated individuals. I propose a discontinuous self-assessment system grounded in a model of eight intersecting identities.
With some parameters he has thus defined probabilistic identities, one can formulate questions for special moments in life to help organize events, routines, and tasks.
This approach approaches the atomic model where, even if we cannot precisely pinpoint an electron’s layer, through specific questions we can predict which area our focus lies in.
The apparent dichotomy serves as an analytical tool rather than an absolute division. In practice, these categories merge.
----------------------------------------
1. NAMES, TOTEMS, AND EVOLUTIONARY RELATIONSHIPS
* Insects and cool colors for x “indoor” identities (interior metaphors)
* Animals and warm colors for y “outdoor” identities (exterior metaphors)
----------------------------------------
3. HUMAN FUNCTIONS AND THEIR DECOMPOSITION
He starts from the universal objective of combining useful information. He decomposes this “mega-vector” into x/y (input/output) and four areas:
1. Elemental (genes ↔ environment)
2. Individual (personal memory ↔ environment)
3. Informational (memory ↔ memory)
4. Social (informational ↔ collaboration)
Each area generates two SubVectors: input focus (x) and output focus (y), totaling eight vectors.
----------------------------------------
2. 8 SUBVECTORS (8 identities)
Each identity and relationship with a human function in random order combining information.
Relationship to evolutionary psychology:
Each subvector aligns with adaptive modules that evolved to process useful information in different contexts (nutrition, defense, social cooperation, etc.).
----------------------------------------
4. 3 LEVELS OF QUESTIONS
For any activity, we ask:
1. Informational vs. Social
|
66b000ae-9b58-45e4-9d56-c82bd9fbf569 | trentmkelly/LessWrong-43k | LessWrong | Action derivatives: You’re not doing what you think you’re doing
I want to look at a category of weird mental tricks that we sometimes play on ourselves—you might be familiar with the individual examples, but when considered together they reveal a pattern that I think deserves more attention. I’m going to do the Scott Alexander thing and list a bunch examples in hopes that you’ll sense the common concept they all point at.
Action derivatives index
Here they are:
* Meta-preferences: This is when you say you want something, but your actions don’t reflect that wanting, so really we would say you’re wanting to want that thing. If you say, “I want to get stronger,” but you never go to the gym, and it’s totally possible for you to go to the gym, we would say you have a revealed preference for the “not getting stronger” outcome, but your meta preference is to be the kind of person who truly wants to get stronger, i.e. who goes to the gym.
* Notice that if you just do the thing, voluntarily, that already implies that you wanted to do it.
* Belief in belief – This is a term coined by Daniel Dennett, and it’s described concisely in The Sequences. Sometimes when people say “I believe X,” they don’t actually hold any expectations about X, but they believe it’s somehow advantageous or virtuous to believe X. We would say they believe in belief of X.
* Notice, if you just say, “X is the case,” you’re already implying that you believe it. I say “I live in NY,” and unless you think I’m lying, it implies that I believe that I live in NY.
* Trying to try – This is also in The Sequences. When you say, “I’ll try to do X,” that’s a similar kind of extra step.
* Notice, if you just say, “I’m going to do X,” that implies that you’re going to try. After all, you’re never a certain predictor of the future: the most you can do is try.
* It follows, then, that if you say “I’ll try do X,” you’re implying that you’ll try to try. That’s a very relevant difference for personal productivity, agency, and effectiveness, because if you say “I’ll t |
dd9de003-2da1-4568-b4fb-941ae1adb15b | trentmkelly/LessWrong-43k | LessWrong | Moving Data Around is Slow
Data locality is a key part of writing fast data science code.
The core idea is simple: your data starts out in RAM (or disk), and to actually do anything useful with it, you need to move it to the CPU. This is actually pretty slow, so you want to minimize this transfer.
Say you have pairs of heights and weights, and you want to get the sum of all the weights. Let's start by generating 40 million pairs:
num_rows = 40000000
struct HeightAndWeight
height::Int
weight::Int
end
heights_and_weights = [HeightAndWeight(row[1], row[2]) for row in eachrow(rand(50:150, num_rows, 2))];
This gives us an array of HeightAndWeights like so:
40000000-element Array{HeightAndWeight,1}:
HeightAndWeight(138, 84)
HeightAndWeight(140, 136)
HeightAndWeight(87, 137)
HeightAndWeight(109, 143)
One approach is to iterate through each row and add the weight to a counter.
function my_sum(data)
sum_weights = 0
for row in data
sum_weights += row.weight
end
return sum_weights
end
Benchmarking this, it takes about 52 ms
using BenchmarkTools
@btime(my_sum(heights_and_weights))
> 52.535 ms (1 allocation: 16 bytes)
What if our heights and weights were separate, and we just pass the weights?
just_heights = [row.height for row in heights_and_weights]
just_weights = [row.weight for row in heights_and_weights]
@btime(my_sum2(just_weights))
> 23.495 ms (1 allocation: 16 bytes)
This is about twice as fast, modulo noise! Despite the fact that we're doing exactly the same number of additions, we're transferring half the data, which takes the majority of the time.
When we have an array of HeightAndWeight structs, it's very difficult to avoid passing in extra data. If the struct had even more fields, it would be even slower, despite the fact that we're adding up the same number of weights. This is a common situation with object-oriented programming or row-oriented tables.
The pattern of creating a collection of many HeightAndWeight objects is known as Array- |
68c513a9-6c55-4f54-a75b-4fa3f37da676 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | [Event] Join Metaculus Tomorrow, March 31st, for Forecast Friday!

**Are you interested in how top forecasters predict the future? Curious how other people are reacting to the forecasts on Metaculus?**
[Join us tomorrow](https://metaculus.com/questions/15548/-join-us-on-march-31-for-forecast-friday/), March 31st @ 12pm ET/GMT-4 for Forecast Friday to chat with other forecasters and to analyze current events through a forecasting lens. Author of 750+ questions & Metaculus team member Rudolf Ordoyne will lead discussion on likely [AGI timelines](https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/).
**This event will take place virtually in the EA coworking** [**Gather Town**](https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge) **space from 12pm to 1pm ET**. In addition to Rudolf's *Friday Forensics* session, you can also visit the *Friday Frenzy* space to discuss trending topics, or head to *Freshman Fridays* to learn more about scoring and how to improve your skills.
To join, enter [Gather Town](https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge) and use the Metaculus portal. See you there! |
ef1b1ba2-76e6-4558-97f8-3dbc454f0b4f | trentmkelly/LessWrong-43k | LessWrong | Tentative tips for people engaged in an exercise that involves some form of prediction or forecasting
Note: This is the concluding post of my LessWrong posts related to my forecasting work for MIRI. There are a few items related to forecasting that I didn't get time to look into and might return to later. I might edit this post to include references to those posts if I get to them later.
I've been looking at forecasting in different domains as part of work for the Machine Intelligence Research Institute (MIRI). I thought I'd draw on whatever I've learned to write up advice for people engaged in any activity that involves making forecasts. This could include a wide range of activities, including those that rely on improving the accuracy of predictions in highly circumscribed contexts (such as price forecasting or energy use forecasting) as well as those that rely on trying to determine the broad qualitative contours of possible scenarios.
The particular application of interest to MIRI is forecasting AI progress, leading up to (but not exclusively focused on) the arrival of AGI. I will therefore try to link my general tips with thoughts on how it applies to forecasting AI progress. That being said, I hope that what I say here will have wider interest and appeal.
If you're interested in understanding the state of the art with respect to forecasting AI progress specifically, consider reading Luke Muehlhauser's summary of the state of knowledge on when AI will be created. The post was written in May 2013, and there have been a couple of developments since then, including:
* A paper by Vincent C. Müller and Nick Bostrom that describes a poll of artificial intelligence experts on future progress in the area
* An update by Paul Christiano and Katja Grace to the data collected in the paper by Stuart Armstrong and Kaj Sotala on how we're predicting AI (or failing to)
#1: Appreciate that forecasting is hard
It's hard to make predictions, especially about the future (see also more quotes here). Forecasting is a difficult job along many dimensions. Apart from being diffi |
c1ec2827-3a4c-40e6-8e11-d6086541c566 | trentmkelly/LessWrong-43k | LessWrong | [Crosspost] ACX 2022 Prediction Contest Results
Original here.
> Submission statement/relevance to Less Wrong: This forecasting contest confirmed some things we already believed, like that superforecasters can consistently outperform others, or the "wisdom of crowds" effect. It also found a surprising benefit of prediction markets over other aggregation methods, which might or might not be spurious.
>
> Several members of the EA and rationalist community scored highly, including one professional AI forecaster. But Less Wrongers didn't consistently outperform members of the general (ACX-reading, forecasting-competition-entering) population.
Last year saw surging inflation, a Russian invasion of Ukraine, and a surprise victory for Democrats in the US Senate. Pundits, politicians, and economists were caught flat-footed by these developments. Did anyone get them right?
In a very technical sense, the single person who predicted 2022 most accurately was a 20-something data scientist at Amazon’s forecasting division.
I know this because last January, along with amateur statisticians Sam Marks and Eric Neyman, I solicited predictions from 508 people. This wasn’t a very creative or free-form exercise - contest participants assigned percentage chances to 71 yes-or-no questions, like “Will Russia invade Ukraine?” or “Will the Dow end the year above 35000?” The whole thing was a bit hokey and constrained - Nassim Taleb wouldn’t be amused - but it had the great advantage of allowing objective scoring.
Sample questions.
Our goal wasn’t just to identify good predictors. It was to replicate previous findings about the nature of prediction. Are some people really “superforecasters” who do better than everyone else? Is there a “wisdom of crowds”? Does the Efficient Markets Hypothesis mean that prediction markets should beat individuals? Armed with 508 people’s predictions, can we do math to them until we know more about the future (probabilistically, of course) than any ordinary mortal?
After 2022 ended, Sam and Eric used a |
468306c1-5058-4c84-9020-ea1a836db471 | trentmkelly/LessWrong-43k | LessWrong | The akrasia doom loop and executive function disorders: a question
There’s a lot of productivity advice on LW, or more specifically advice on how to beat procrastination and akrasia (see this one, this one, this one, this one, this one, this one, this one, this one, this one, … — all of those are pretty good and you should check them out, btw).
But, well… do you know the feeling where you say to yourself "today is the day where I start taking up this cool new habit that will slowly but surely destroy my akrasia forever!" and then it’s 11pm, you still haven’t done the anti-akrasia thing you said you’d do at 7am, and you decide you’ll do it tomorrow but you know you won’t really and it’s super frustrating?
At this point, there are two questions I could be asking. And I’ll ask them both in turn.
First: how is it supposed to work!? In other words: do the people who did manage to get better at getting stuff done know how to avoid akrasia killing your motivation to beat akrasia?
Second: …
… well, a lot of people do manage to find enough motivation in themselves at least to get started, at least most of the time, right? So, maybe my answer is that I’m uniquely bad at it, right?
That’s not as self-deprecating and misguided as it sounds: I have ASD, and it is quite well established that this comes with an impaired executive function compared to the general population.
Hence my second question: does anyone have interesting things to say on the links between autism-related executive function deficits and akrasia? It seems like there’s more to it than just "ASD makes me bad at doing stuff, period", and that it’s instead a weird mix of trouble with some form or other of social anxiety (stuff like postponing writing an email for a week because I’m not sure how it will be received by the other person), some bizarre trouble with motivation (maybe non-ASD people get more social motivation, and so are more motivated than me?), and actual troubles with task switching, task initiation, or other stuff at brain level. I don’t think I understa |
6ec6dd89-77c6-402c-9ef9-e27f76d223d6 | trentmkelly/LessWrong-43k | LessWrong | Rational lies
If I were sitting opposite a psychopath who had a particular sensitivity about ants, and I knew that if I told him that ants have six legs then he would jump up and start killing the surrounding people, then it would be difficult to justify telling him my wonderful fact about ants, regardless of whether I believe that ants really have six legs or not.
Or suppose I knew my friend's wife was cheating on him, but I also knew that he was terminally ill and would die within the next few weeks. The question of whether or not to inform him of my knowledge is genuinely complex, and the truth or falsity of my knowledge about his wife is only one factor in the answer. Different people may disagree about the correct course of action, but no-one would claim that the only relevant fact is the truth of the statement that his wife is cheating on him.
This is all a standard result of expected utility maximization, of course. Vocalizing or otherwise communicating a belief is itself an action, and just like any other action it has a set of possible outcomes, to which we assign probabilities as well as some utility within our value coordinates. We then average out the utilities over the possible outcomes for each action, weighted by the probability that they will actually happen, and choose the action that maximizes this expected utility. Well, that's the gist of the situation, anyway. Much has been written on this site about the implications of expected utility maximization under more exotic conditions such as mind splitting and merging, but I'm going to be talking about more mundane situations, and the point I want to make is that beliefs are very different objects from the act of communicating those beliefs.
This distinction is particularly easy to miss as the line between belief and communication becomes subtler. Suppose that a friend of mine has built a wing suit and is about to jump off the empire state building with the belief that he will fly gracefully through the sky. S |
07f07c46-9a5e-4329-8202-ba48d8c780c3 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The impact of whole brain emulation
At some point in the future we may be able to scan someone's brain at very high resolution and "run" them [on a computer](https://en.wikipedia.org/wiki/Mind_uploading). [1] When I first heard this as a teenager I thought it was interesting but not hugely important. Running people faster or slower and keeping backups came immediately to mind, and [Wikipedia adds](https://en.wikipedia.org/wiki/Mind_uploading#Theoretical_benefits) space travel, but those three by themselves don't seem like they change that much. Thinking speed doesn't seem to be major limiting factor in coming up with good ideas, we generally only restore from backups in cases of rare failure, and while space travel would dramatically affect the ability of humans to spread [2] it doesn't sound like it changes the conditions of life.
This actually undersells emulation by quite a lot. For example "backups" let you repeatedly run the same copy of a person on different information. You can find identify a person when they're at their intellectual or creative best, and give them an hour to think about a new situation. Add in potentially increased simulation speed and parallelism, and you could run lots of these ones looking into all sorts of candidate approaches to problems.
With emulations you can get around the mental overhead of keeping all your assumptions about a direction of thought in your mind at once. I might not know if X is true, and spend a while thinking about what should happen if it's true and another while about what if it's not, but it's hard for me to get past the problem that I'm still uncertain about X. With an emulation that you can reset to a saved state however, you could have multiple runs where you give some emulations a strong assurance that X is true and some a strong assurance that X is false
You can also run randomized controlled trials where the experimental group and the control group are the same person. This should hugely bring down experimental cost and noise, allowing us to make major and rapid progress in discovering what works in education, motivation, and productivity.
(Backups stop being about error recovery and fundamentally change the way an emulation is useful.)
These ideas aren't new here [3] but I don't see them often in discussions of the impact of emulating people. I also suspect there are many more creative ways of using emulation; what else could you do with it?
[1] I think this is [a long way off](http://www.jefftk.com/news/2011-11-02) but don't see any reasons why it wouldn't be possible.
[2] Which has a big effect on [estimates of the number of future people](http://www.jefftk.com/news/2011-10-24).
[3] I think most of these ideas fo back to Carl Schulman's 2010 [Whole Brain Emulation and the Evolution of Superorganisms](http://intelligence.org/files/WBE-Superorgs.pdf).
*I also posted this [on my blog](http://www.jefftk.com/news/2013-05-14)* |
6e3717d6-f862-41f5-836c-ec187b2d0f29 | trentmkelly/LessWrong-43k | LessWrong | The Self: Momentary vs Lifetime
When you think of the Self, do you imagine it as something growing and changing, or consistent over a person's life? The definition of the Self is arguably one of the biggest differences between classical and postmodern psychology. Classical theories have their basis in the idea of a True Self, a fundamental unchanging part of a person that stays with them since birth. They may talk of upbringing, life circumstances, and social influence as things that change behavior, but they never change who you are underneath all of that. An artist may always be an artist, even in a world where they had no choice but to sell bread and raise children.
Postmodern theories of psychology look at the ways our behavior changes overtime, and instead see the Self as that which changes. Just as a person can change what they like, they can change who they are. A rambunctious teenager can change and become a stoic adult, and this isn't because they became any more or less in touch with a True Self, but because that Self changed, whether due to social pressure or just maturity.
These different perspectives change the ways clinicians approach their clients. A therapist who believes in the True Self may push a client to become more in touch with lost parts of themselves, while a more postmodern therapist might push a client to grow and change, exploring new hobbies and not remain attached to certain parts of their identity. I want to propose a third idea, which is that this theoretical rift is caused by two different definitions in the word Self.
In a sense, this debate is no different from nature versus nurture. While popular science shifts focus between the two, the truth is somewhere in the middle. Some of who we are is in our genes (brown eyes, wavy hair, a bit of a sweet tooth). The rest (taste in music, friends, hobbies) is influenced by how we are raised. So it can't be radical to suggest that between the True Self and the Changing Self, both as well as neither are true.
The tw |
ca164e63-42b2-41c1-a04c-b15b455c1bc1 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AGI Predictions
I recommend [this LessWrong post](https://www.lesswrong.com/posts/YMokuZdoY9tEDHjzv/agi-predictions), which lets you make public predictions about various questions related to AI safety and AI timelines, and see existing predictions by other users. |
3fd0ce90-268a-4a93-a625-3ce15f37550b | trentmkelly/LessWrong-43k | LessWrong | When Anthropomorphism Became Stupid
It turns out that most things in the universe don't have minds.
This statement would have provoked incredulity among many earlier cultures. "Animism" is the usual term. They thought that trees, rocks, streams, and hills all had spirits because, hey, why not?
I mean, those lumps of flesh known as "humans" contain thoughts, so why shouldn't the lumps of wood known as "trees"?
My muscles move at my will, and water flows through a river. Who's to say that the river doesn't have a will to move the water? The river overflows its banks, and floods my tribe's gathering-place - why not think that the river was angry, since it moved its parts to hurt us? It's what we would think when someone's fist hit our nose.
There is no obvious reason - no reason obvious to a hunter-gatherer - why this cannot be so. It only seems like a stupid mistake if you confuse weirdness with stupidity. Naturally the belief that rivers have animating spirits seems "weird" to us, since it is not a belief of our tribe. But there is nothing obviously stupid about thinking that great lumps of moving water have spirits, just like our own lumps of moving flesh.
If the idea were obviously stupid, no one would have believed it. Just like, for the longest time, nobody believed in the obviously stupid idea that the Earth moves while seeming motionless.
Is it obvious that trees can't think? Trees, let us not forget, are in fact our distant cousins. Go far enough back, and you have a common ancestor with your fern. If lumps of flesh can think, why not lumps of wood?
For it to be obvious that wood doesn't think, you have to belong to a culture with microscopes. Not just any microscopes, but really good microscopes.
Aristotle thought the brain was an organ for cooling the blood. (It's a good thing that what we believe about our brains has very little effect on their actual operation.)
Egyptians threw the brain away during the process of mummification.
Alcmaeon of Croton, a Pythagorean of th |
98c27e8d-cfa8-4cd5-b593-3e31de6aca2a | trentmkelly/LessWrong-43k | LessWrong | How to have a happy quarantine
As you may have noticed, things in the world are a bit cuckoo bananas right now. Social distancing is becoming increasingly widespread, which means that people are going to be experiencing social isolation at an unprecedented level. In this age of global connectedness, this seems really scary to some people, but I prefer to think of it as something akin to the experience of weathering a snowy winter in a log cabin on the western frontier. That is, cozy and exciting!
I live in a house with about ten people. We all have different personalities and we're all afraid of going stir-crazy spending five months together, so I did some research on how we can avoid that. (We also have a dry-erase monthly calendar where we can plan out activities!) Below are my ideas :)
----------------------------------------
Epistemic status: these recommendations are based on a combination of (1) things that are required or recommended for astronauts, (2) recommendations for people homebound due to injuries, chronic illnesses, or old age, and (3) common sense.
Body interventions:
* If at all possible, get fresh air and sunshine at least once a day.
* Get at least light exercise at least once a day; ideally get heavy exercise regularly.
* Get enough sleep, water, and calories.
* Eat a variety of foods that you enjoy!
Brain interventions:
* Regularly video chat loved ones who aren't quarantined with you.
* Talk to a therapist online, if possible.
* Meditate. There are lots of guided meditations available online and via apps, some of which are free.
* Stick to a basic routine that you endorse. For example, you might want to wake up and go to bed at the same time each day, or do daily exercise.
* Change out of sleep clothes and into day clothes each day. If you are a newly minted remote worker, this is especially important, and you may want to set up additional "going to work", "leaving work", and "lunch break" routines.
* Have projects to work on; set goals that feel meaningful |
16e5ac62-ebe7-4966-8cfd-dfc2137f57c4 | trentmkelly/LessWrong-43k | LessWrong | Add "Meetups" to top navigation bar?
I think the impulse to promote all meetups was a good idea but now adding to much noise to the "PROMOTED" feed. Of the 10 "PROMOTED" feeds 8 are adverts for meetups.
I propose adding Meetups to the navigation bar and only adding them to "PROMOTED" when they are especially significant; for example have a special speaker and might draw attention from people who are out of town. |
19d5bbc0-ad1a-48cb-bff9-3acc576c292c | trentmkelly/LessWrong-43k | LessWrong | Air-gapping evaluation and support
This blog post was written fast to communicate a concept I think is important. I may edit this post for legibility later.
I think evaluation and support mechanisms should be somewhat “air-gapped,” or isolated, in their information-gathering and decision-making processes. The incentives of optimal evaluators (to critique flaws) seem to run counter to the incentives of optimal supporters (to improve flaws). Individuals who might benefit from support may be discouraged from seeking it by fear of harsher evaluation if their private struggles are shared with evaluators. Evaluators who want to provide support may worry about compromising their evaluation ability if they make inconsistent exceptions. To optimally evaluate and support individuals, I believe that it is necessary to establish and declare appropriate information air gaps between different ecosystem roles.
Evaluation mechanisms, such as academic exams, job interviews, grant applications, and the peer review process, aim to critique an individual or their output. To be maximally effective, evaluation mechanisms should be somewhat adversarial to identify flaws and provide useful criticism. It is in the interests of evaluators to have access to all information about a candidate; however, it is not always in the candidate’s best interests to share all information that might affect the evaluation. It is also in the interests of evaluators for candidates to get access to all the support they need to improve.
If an attribute that disadvantages a job candidate (e.g., a disability) is protected by antidiscrimination law, an evaluator may be biased against the attribute either unconsciously or on the basis that it might genuinely reduce performance. Of course, evaluators should be required to ignore or overcome biases against protected attributes, but this “patch” may break or fail to convince candidates to divulge all evaluation-relevant information. Additionally, in the case that a candidate shares sensitive informa |
f20e4c91-689c-4b67-811c-5227e3e0f288 | trentmkelly/LessWrong-43k | LessWrong | The cone of freedom (or, freedom might only be instrumentally valuable)
Many policies with nontrivial points in their favour — as e.g. thorough environmental protection, public-dominant transportation, communal acoustic improvement — are swiftly rejected sith They Restrict Freedom, and People Value Freedom. (This objection is especially common in the US, but there's nothing intrinsically US-specific about this.) I don't intend to justify or criticise any particular such proposals. I target that swift general rejection, for it is, in many cases, based on a fallacious pattern.
In some cases, people are really rejecting the policy sith they tricked themselves into consistently supporting the opinions of one side, regardless of those opinions' basis in truth, and so I don't expect to fix those errors by reason. Those in other cases should consider the following.
Upon suggesting lesser freedom, some people assume the path towards particular low-freedom societies. They point to those dreadful examples of reduced freedom — (contemporary) China, North Korea, etc — and from there argue that any sufficient reductions of freedom lead to a miserable, enslaving society.
Some such objectors explicitly appealed to a spectrum of freedom, with total anarchy on one end and something like Nineteen Eighty-Four on the other end. Therein lies the confusion. Freedom and the lack thereof is not a line, but a cone. (This analogy isn't quite right. The correct analogy involves annoying combinatorics and isn't helpful enough to make up for it.)
At the tip of the cone is total anarchy. At the base disk is a range of many types of restrictive societies, arising from many sets of things to restrict. Such objectors think of a single path from the tip to one point on the base. We who promote a restrictive policy intended a point in the cone entirely off that path.
That is, some reject restrictive-but-beneficial policies sith they notice that other salient restrictive policies have been reliably bad. They are told to consider reduced freedom, and jump to a parti |
401fa9bb-dad6-4cf7-bfb3-d5663c700d25 | trentmkelly/LessWrong-43k | LessWrong | [Link] New SEP article on Bayesian Epistemology
The Stanford Encyclopedia of Philosophy has a brand new article on Bayesian Epistemology, replacing the previous entry on that topic.
https://plato.stanford.edu/entries/epistemology-bayesian/
Table of Contents:
* 1. A Tutorial on Bayesian Epistemology
* 1.1 A Case Study
* 1.2 Two Core Norms
* 1.3 Applications
* 1.4 Bayesians Divided: What Does Coherence Require?
* 1.5 Bayesians Divided: The Problem of the Priors
* 1.6 An Attempted Foundation: Dutch Book Arguments
* 1.7 Alternative Foundations
* 1.8 Objections to Conditionalization
* 1.9 Objections about Idealization
* 1.10 Concerns, or Encouragements, from Non-Bayesians
* 2. A Bit of Mathematical Formalism
* 3. Synchronic Norms (I): Requirements of Coherence
* 3.1 Versions of Probabilism
* 3.2 Countable Additivity
* 3.3 Regularity
* 3.4 Norms of Conditional Credences
* 3.5 Chance-Credence Principles
* 3.6 Reflection and Other Deference Principles
* 4. Synchronic Norms (II): The Problem of the Priors
* 4.1 Subjective Bayesianism
* 4.2 Objective Bayesianism
* 4.3 Forward-Looking Bayesianism
* 4.4 Connection to the Uniqueness Debate
* 5. Issues about Diachronic Norms
* 5.1 Old Evidence
* 5.2 New Theory
* 5.3 Uncertain Learning
* 5.4 Memory Loss
* 5.5 Self-Locating Credences
* 5.6 Bayesianism without Kinematics
* 6. The Problem of Idealization
* 6.1 De-idealization and Understanding
* 6.2 Striving for Ideals
* 6.3 Applications Empowered by Idealization
* 7. Closing: The Expanding Territory of Bayesianism
* Bibliography
* Academic Tools
* Other Internet Resources
* Related Entries |
04b20a76-8ade-4f3b-ac4c-6d578577b24c | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Does a LLM have a utility function?
There's a lot of discussion and research into AI alignment, almost always about variants of how to define/create a utility function (or meta-function, if it changes over time) that is actually aligned with ... something. That something is at least humanity's survival, but often something like flourishing or other semi-abstract goal. Oops, that's not my question for today.
My question for today is whether utility functions are actually part of the solution at all. Humans don't have them, the most interesting spurs toward AI don't have them. Maybe anything complicated enough to be called AGI doesn't have one (or at least doesn't have a simple, concrete, consistent one). |
ed32db43-372c-4928-b5fd-fea696ff1906 | trentmkelly/LessWrong-43k | LessWrong | The Neuroscience of Pleasure
The scientific approach to self-help suggests that a better understanding of who we are can help us achieve happiness and other goals. Most centrally, it will be helpful to understand our brains, because it is our brains that generate happiness and goals.
In particular, I'd like to explore the neuroscience of pleasure and desire. Today's post covers the neuroscience of pleasure; the next post will cover the neuroscience of desire. After each post I'll consider some of the implications for self-help. In a later post, I'll consider how this research can inform the pursuit of Friendly AI.
INTRODUCING AFFECTIVE NEUROSCIENCE
The last decade has seen the arrival of affective neuroscience: the study of the neural mechanisms behind emotion, including pleasure and desire.1 Most questions remain unanswered, and experts disagree on many specifics,2 but there are some things we can state with confidence. We begin with the reward system in the brain.
The reward system consists of three major components (image)3:
* Liking: The 'hedonic impact' of reward, comprised of (1) neural processes4 that may or may not be conscious and (2) the conscious experience of pleasure.
* Wanting: Motivation for reward, comprised of (1) processes of 'incentive salience' that may or may not be conscious and (2) conscious desires.
* Learning: Associations, representations, and predictions about future rewards, comprised of (1) explicit predictions and (2) implicit knowledge and associative conditioning (e.g. Pavlovian associations).
Unfortunately, the interaction between these components is extraordinarily complex, and many puzzles remain.5
I'll share two examples of our ignorance. First: pleasure electrodes. For decades, it was thought that electrical stimulation of certain structures caused pleasure, because rats and humans would self-administer this stimulation hundreds or thousands of times each hour if allowed to do so.6 But a careful reading of the transcripts reveals the causation o |
a5073cb4-c6cf-472e-995c-7d5ac585dd29 | trentmkelly/LessWrong-43k | LessWrong | Two meetups in Denver/Boulder Colorado
There will be a rationalist meetup in Denver tomorrow at 7:00 pm at Darcy's Pub, 4955 S Ulster St. #103 Denver, CO 80237
This Saturday I'll be giving a presentation on the control problem for superintelligent agents. It'll be from 2-4 at the Boulder Hacker Space, 1965 33rd Street, UNIT B, Boulder, CO.
|
e55dda7a-4f40-4746-8b8c-2034279499dc | trentmkelly/LessWrong-43k | LessWrong | New music powers
Original post: http://bearlamp.com.au/new-music-powers/
----------------------------------------
I have written before about how I am pretty terrible at canvassing music in my head. This lends to the appalling ability (to musically oriented people) to be able to do things like listen to the same song on repeat 500 times or more in a row without being bothered by it either way. I never cared more than the sense of "this is interesting but irrelevant" on the idea.
Being indifferent to music has given me the ability to be completely useless at holding a musical preference, or explore the value of music in terms of going to music events, or participating in musical experiences.
----------------------------------------
This week something changed! Or more accurately last week. Last week I was listening to a piece for the n'th time, but at the same time was quite badly sleep deprived. As I was listening the music started falling apart. Different parts of the music changed volume so that I could isolate different instruments and follow different features of the music. At the time, being a bit sleep deprived I took it as a warning that maybe it was time to go to bed. hint hint: your going a little nuts.
Today I noticed I can still do it. When I am no longer sleep deprived I can pay attention to music in a different way than I used to be able to. I can single out the drums and only "listen" to that part, or the guitar, or the vocals. (it's pop music on the radio).
Of course the reason I bothered to write about it, and the reason that it's interesting is; as half the readers can probably imagine - I told a musical friend of mine that I had developed new powers and he said,
> Wait, people can't normally do that?
So I get to add this to the pile of typical mind, sensory perception assumptions that we make when we interpret our own individual world through our own senses. What if your's worked a bit differently? How much would that fundamentally change h |
f24e31f8-14ae-445b-abaf-ce0406ffb44d | StampyAI/alignment-research-dataset/arxiv | Arxiv | Towards Autonomous Grading In The Real World
I Introduction
---------------
Recent years have seen a dramatic rise in demand for automation in the construction industry. Bulldozers play a key role at most construction sites and are the go-to machinery for many tasks. While hand-operated bulldozing is a well established profession, it suffers from a shortage in experienced drivers, which leads to longer construction times and higher costs. Our work focuses on automation of the grading task, where a bulldozer is confronted with an uneven plot of land (see LABEL:fig:sim2real\_all). The bulldozer is equipped with a blade and pushes around the soil to bring the plot to a specified height.
Similarly to other real-life robotics applications, surface grading suffers from two major challenges. First, missing or noisy sensory information, and the agent’s limited field-of-view makes the problem partially observable. This makes autonomy difficult compared to a fully observable scenario. Second, data-driven methods e.g., deep neural networks (DNNs), are sample inefficient and require large amounts of expert data in order to converge. In the context of surface grading, this aspect is especially challenging as obtaining diverse real-world data is expensive and sometimes infeasible. Specifically, it requires long hours of manual operation, of multiple machines, in order to design the grading scenario i.e., clearing the work area, placing sand piles, calibrating equipment, etc.
In order to learn a behavior policy capable of autonomous grading in the real-world, we design a simulator and leverage it using deep supervised learning techniques. As opposed to the real-world, a good simulator provides several benefits. First, data collection does not require an expert human operator and can instead be obtained by running advanced heuristics on clean, privileged data. Second, simulations are configurable and therefore enable the design of complex scenarios and data augmentation techniques. Moreover, simulations enable the domain expert to focus on the challenging parts of the task in hand. Finally, by using a simulator, we can collect data for training and also efficiently evaluate a policy without the risk of failure in the real-world. Therefore, reducing the risk and wear-and-tear on real bulldozers.
In this work, we focus on DNNs for perception and decision making. Unlike classic rule-based methods, DNNs are trained to make decisions based on collected data. While classic detection techniques rely on the ability to detect edges, a DNN is trained to extract more complex features. By doing so, it can overcome data imperfections such as missing information due to occlusions and measurement noise. We utilize a method called privileged learning [[30](#bib.bib57 "A new learning paradigm: learning using privileged information")], where the agent is trained on noisy data to imitate an expert, who has direct access to perfect and noise-free measurements. We show how this paradigm learns a robust feature extractor, capable of overcoming the inaccuracies of real-world measurements. Moreover, our agent trained using privileged learning is robust and able to generalize, while methods that rely on classic detection techniques [[6](#bib.bib6 "Path planning for autonomous bulldozers")] fail when applied to our real-world scaled prototype environment,
Our main contributions are as follows:
(1) We create a physically realistic simulation environment for training and evaluation of both heuristics and learning-based agents.
(2) We show that training agents using privileged learning techniques can be used to overcome the limitations of standard methods for autonomous grading.
(3) We validate our methods and assumptions on a scaled prototype environment, which includes real-world vehicle dynamics, sensors and soil interaction.
Ii Related Work
----------------
###
Ii-a Bulldozer Automation
Very little research has been done on the automation of bulldozers, specifically, on path-planning optimization using deep learning methods. One explanation might be the complexity of this task. Specifically, manual operation of a bulldozer requires skillful operator as the behavior of the soil is irreversible [[13](#bib.bib32 "Modelling of robotic bulldozing operations for autonomous control")].
In [[6](#bib.bib6 "Path planning for autonomous bulldozers")], the authors implemented a heuristic approach for autonomous surface grading. They examined the trade-off between grading the pile when the blade is at full capacity and pushing less sand in order to reduce the elapsed time.
[[12](#bib.bib30 "Autonomous grading work using deep reinforcement learning based control")] and [[14](#bib.bib29 "Route optimization for autonomous bulldozer by distributed deep reinforcement learning")] used off-the-shelf reinforcement learning (RL) techniques (DQN; [[11](#bib.bib46 "Human-level control through deep reinforcement learning")]) on an overly simplified environment with a single sand pile. Their simulation did not include the change in the bulldozer’s velocity as a function of volume being pushed, nor did they validate their assumptions in real-world experiments.
###
Ii-B Sand Simulation
Precise particle simulation, in the case of sand, for example, is an active line of research using both classic and modern tools. Classical methods describe soil using solid mechanical equations [[19](#bib.bib62 "Introduction to terrain vehicle systems: m. g. bekker. univ. of michigan press, ann arbor (1969). $27.50")], discrete element methods [[1](#bib.bib60 "A discrete numerical model for granular assemblies")], or fluid mechanics [[26](#bib.bib65 "A particle method for history-dependent materials")]; whereas newer modern methods utilize DNNs [[24](#bib.bib66 "Learning to simulate complex physics with graph networks")] to simulate the reaction of particles to forces. While these methods achieve outstanding results and have impacted both the gaming and cinematic industries, they require high computational cost and long run-times. [[21](#bib.bib69 "AGPNet–autonomous grading policy network")] established a simulator for earth-moving vehicles, where the goal was to enable policy evaluation for bulldozers. This simulation was fairly accurate, fast and able to capture the main aspects of the interaction between the sand and the vehicle.
As our goal is to enable rapid data collection and policy evaluation, we chose to simulate the soil by considering only the heightmap [[8](#bib.bib8 "Developing a simple model for sand-tool interaction and autonomously shaping sand")] i.e only the surface of the soil changes over time. This approach takes into account key interactions on the one hand while maintaining simplicity and efficiency on the other, and is commonly used for robotic applications.
###
Ii-C Sim-to-Real
The task of training in simulation and deploying in the real-world has been of much interest in the past years and is known as sim-to-real. It can be divided into two main categories – dynamics and perception.
The dynamics gap stems from the inability to precisely model the reaction of the system. This is tackled by solving a robust objective using methods such as dynamics randomization [[15](#bib.bib48 "Sim-to-real transfer of robotic control with dynamics randomization")] or adversarial training [[16](#bib.bib49 "Robust adversarial reinforcement learning"), [29](#bib.bib50 "Action robust reinforcement learning and applications in continuous control")].
Overcoming the perception gap is done by learning a mapping between the simulation and the real-world [[18](#bib.bib47 "Rl-cyclegan: reinforcement learning aware simulation-to-real")], [[10](#bib.bib68 "S-flow gan")] or learning robust feature extractors [[9](#bib.bib58 "Learning high-speed flight in the wild")].
Comparing to previous work, we create a simulation that closely mimics real dynamics, thus enabling us to focus our efforts on closing the visual sim-to-real gap.
We chose to minimize this gap by using a heightmap to represent the environment within our simulation. This is beneficial compared to RGB images as it better resembles the real-world data [[9](#bib.bib58 "Learning high-speed flight in the wild")]. Finally, we leverage privileged information, available only during simulation, in order to train a policy capable of overcoming the inaccuracies of real-world data.
Iii Background and Problem Formulation
---------------------------------------
We begin by formally describing the task of an autonomous bulldozer, the setup, imitation learning and privileged learning.
###
Iii-a Partially Observable Markov Decision Processes
A Partially-Observable Markov Decision Process (POMDP; [[27](#bib.bib38 "Reinforcement learning: an introduction")]) consists of the tuple (S,O,A,P,R). The state s∈S contains all the required information to learn an optimal policy. However, agents are often provided with an observation o∈O, which contains partial or noisy information regarding the environment. Unlike states, observations typically lack the sufficient statistics for optimality. At each state s∈S the agent performs an action a∈A. Then, the system transitions to the next state s′ based on a transition kernel P(s′|s,a). Finally, the agent is provided with a reward r(s,a), which scores the chosen action according to a predefined metric.
The goal of an agent is to learn a behavior policy π(s), which can be stochastic or deterministic, that maximizes the cumulative reward-to-go. In this work, we assume a POMDP/R i.e. a POMDP without a reward function.
###
Iii-B Problem Formulation
| | | |
| --- | --- | --- |
|
(a) Initialization
|
(b) Continuous
|
(c) Edge
|
Fig. 2: The autonomous grading task is divided into three sub-tasks. (a) initialization: Here, the area contains a few rows of pre-dumped sand piles without any previously graded area. The bulldozer is required to create an incline and reach a predefined target height. (b) continuous: Here, the bulldozer is located on top of the previously graded platform, and sand piles are continuously being added to the vicinity of the graded platform. The bulldozer is required to extend the graded platform. (c) edge: Here, most of the area is already graded and sand piles are dumped at the edge of the platform. The bulldozer is required to create decline from the platform in order to safely leave the site.
In order to tackle the task of autonomous grading, we divide it into three main sub-tasks: initialization, continuous, and edge (see [Fig. 2](#S3.F2 "Fig. 2 ‣ III-B Problem Formulation ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World")). Each sub-task exhibits different challenges and can be simulated and tested individually. Next, we formalize it as a POMDP/R as described in [Section III-A](#S3.SS1 "III-A Partially Observable Markov Decision Processes ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World").
States: In our case, the state includes the target area size, the bulldozer’s location within that area, the heightmap of the target area, and the full bulldozer trajectory up until the current time point. see [Fig. 2(a)](#S3.F2.sf1 "(a) ‣ Fig. 3 ‣ III-B Problem Formulation ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World") for additional details.
Observations: In our case, the observation is partial due to two reasons. First, the construction site’s dimensions may vary while the agent’s field-of-view remains constant. Second, measurement errors are common in real-world sensors and often lead to missing or noisy information. Therefore, the agent is presented with an ego-view of the current state. Meaning, a bounding box view around the current location of the bulldozer (marked in grey in [Fig. 2(a)](#S3.F2.sf1 "(a) ‣ Fig. 3 ‣ III-B Problem Formulation ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World")) together with some measurement noise augmentations that mimic real-world sensors (see [Fig. 2(b)](#S3.F2.sf2 "(b) ‣ Fig. 3 ‣ III-B Problem Formulation ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World")).
Actions: The bulldozer’s control can be performed both at the low level, e.g., providing rotation and velocity to the bulldozer, or at a higher level, e.g., selecting a destination coordinate. We chose to focus on macro-actions [[28](#bib.bib42 "Between mdps and semi-mdps: a framework for temporal abstraction in reinforcement learning")], such as coordinate selection, and leave the low-level control to classic control algorithms. Macro actions, also known as skills or options, are temporally extended actions and are known to benefit learning especially when those are meaningful. Taking inspiration from the behavior of expert bulldozer operators, we consider the start-point and push-point action-set ,abbreviated as ‘SnP’, inspired by [[6](#bib.bib6 "Path planning for autonomous bulldozers")] and shown in [Fig. 4](#S3.F4 "Fig. 4 ‣ III-D Privileged Learning ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World"). At each state, the agent selects two coordinates, push-point and start-point denoted as (P,S), respectively. The bulldozer drives towards P, then reverses slightly further than its initial location to an intermediate point B, and finally moves to the next starting position S.
Transitions: Transitions are determined by the bulldozer’s dynamics and physical properties of soil and the environment.
| | |
| --- | --- |
|
(a) State
|
(b) Observation
|
Fig. 3: (a) An example of the full state, which includes all of the information regarding the task (full heightmap of the area and bulldozer position). The projection of the observation to the coordinate system of the state is marked in gray (b) The state’s corresponding observation, which is a bounding box view of the state around the bulldozer, described in the bulldozer’s ego axis.
###
Iii-C Imitation Learning
Imitation learning (IL; [[7](#bib.bib40 "Imitation learning: a survey of learning methods")]) is the go-to approach for learning behavior policies in sequential decision making problems. As opposed to reinforcement learning (RL), it relies on expert demonstrations of the correct policy and is highly suitable for cases where the definition of a reward function does not naturally exist.
In this work, we focus on behavior cloning (BC; [[22](#bib.bib53 "Efficient reductions for imitation learning")]), where an expert demonstrator collects data a-priori. This data is then used offline to train a behavior policy using a supervised learning loss:
| | | | |
| --- | --- | --- | --- |
| | L(θ;D)=E(o,a)∼D[l(πθ(o),a)], | | (1) |
Here, D is the training dataset generated by an expert demonstrator containing corresponding actions a, and observations o. πθ is the agent’s learned policy and l(⋅) is a cross entropy metric.
While more advanced IL methods exist, such as DAgger [[23](#bib.bib52 "A reduction of imitation learning and structured prediction to no-regret online learning")], they generally require low-level expert interaction. For instance, an expert driver can demonstrate how to drive on the highway. Yet, given a single observation it is often impossible to provide the precise steering angle and throttle required. Surface grading is no different in that sense, making these methods inadequate for the task.
###
Iii-D Privileged Learning
In classic machine learning (ML), the goal of a teacher (expert) is merely used to transfer knowledge to the student (agent). In many tasks, the expert can obtain access to additional information, during training, that is not available during inference. This technique is called privileged learning [[30](#bib.bib57 "A new learning paradigm: learning using privileged information")]. The benefit of this scheme, is the ability to leverage the data-driven nature of machine learning models, allowing them to learn robust estimators.
| | | |
| --- | --- | --- |
|
(a) Initial state
|
(b) State after p
|
(c) State after s
|
Fig. 4: Example of the trajectory, which the bulldozer follows when executing actions (p, s). The initial position is marked by the red dot in (a).
The agent executes the following maneuver:
(i) From origin rotate to face p.
(ii) Drive forward to p. Only here, the bulldozer interacts with sand while grading it.
(iii) Reverse back to B (blue dot in (b)).
(iv) Rotate to face next s.
(v) Drive forward to next s in (c).
Iv Method
----------
In order to tackle the problem of autonomous grading we focus our efforts on two fronts: (i) Creating a realistic simulation environment for rapid training and evaluation and (ii) robust imitation learning method for coping with real-world imperfections.
###
Iv-a Bulldozer Simulation
Fast and efficient training is crucial for rapid evaluation and development of control algorithms. The complexity of construction site problems lies in the interaction between the vehicle and the soil. The movement of the soil due to a bulldozer’s movement is not trivial and can be simulated using different techniques, each capturing a different level of detail of the real interaction [[17](#bib.bib35 "Interactive terrain simulation and force distribution models in sand piles"), [25](#bib.bib36 "Bulldozing of granular material"), [8](#bib.bib8 "Developing a simple model for sand-tool interaction and autonomously shaping sand")].
We created a physically realistic simulated environment taking these considerations into account.
In our simulation, each sand pile is initially modeled as a multivariate gaussian distribution as follows:
| | | | |
| --- | --- | --- | --- |
| | f(x,y)=V2πσxσy∗exp(−12[(x−μxσx)2+(y−μyσy)2)]), | | (2) |
Here, f(x,y) is the height of the soil at point x,y. The center and footprint of the sand pile are denoted by (μx,μy)[cm] and (σx,σy)[cm] respectively and the V[cm3] is the pile’s volume. For example, given a volume V, the pile’s height is reduced as its footprint grows.
Similarly to [[2](#bib.bib63 "D6/d6 xe track-type tractors specifications")], we assume simple bulldozer mechanics with three possible velocity gears and a linear relation between the bulldozer’s velocity and the load on the blade. This simple relation is used to calculate the total time of a specific action based on the volume of sand pushed during the performed action. In addition, we assume that the blade has a known maximal volume, which can be graded during a single leg. [Fig. 5](#S4.F5 "Fig. 5 ‣ IV-A Bulldozer Simulation ‣ IV Method ‣ Towards Autonomous Grading In The Real World") illustrates this concept.

Fig. 5: Illustration of the suggested privileged learning algorithm. A clean observation ot is augmented to create a noisy observation N(ot). The expert policy, i.e, baseline, uses ot in order to produce an ideal action at - shown as green and red waypoints. The privileged BC agent aims to learn a robust policy πθ that takes the augmented observation N(ot) as input and outputs an action a′t similar as possible to at.
At its core, the bulldozer’s movement is continuous, which enables the definition of low and high levels of interactions. Due to the complexity and long horizon of this task, we opted for the SnP action-set i.e (P,S), as defined in [Sections IV-B](#S4.SS2 "IV-B Baseline Algorithm ‣ IV Method ‣ Towards Autonomous Grading In The Real World") and [III-B](#S3.SS2 "III-B Problem Formulation ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World"). The simulation receives an action tuple (P,S), and simulates the low-level control and movement. As these actions are complex, the simulation also evaluates the time it took to perform them – taking into consideration the volume of sand pushed and the traveled distance.
After an action is performed, the simulation returns the updated observation. To minimize the sim-to-real gap, the observation is an ego-view of the full heightmap, meaning a bounding box around the bulldozer rotated towards its direction of movement, as shown in [Fig. 2(b)](#S3.F2.sf2 "(b) ‣ Fig. 3 ‣ III-B Problem Formulation ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World") and explained in [Section III-B](#S3.SS2 "III-B Problem Formulation ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World").
Although this simulation is computationally inexpensive, it takes into consideration the bulldozer’s behaviour, which affects the agents ability to choose the optimal actions i.e., the change in velocity due to torque and soil-bulldozer interactions.
###
Iv-B Baseline Algorithm
Finally, after reaching B, the bulldozer rotates and moves towards the next starting position – the S point. This point is selected to be in front of the nearest sand pile. This logic closely mimics the behavior of a human driver111See example video in [youtube.com/watch?v=6dVZhFZYofY](https://www.youtube.com/watch?v=6dVZhFZYofY)., efficiently grading an entire area, and takes into consideration the physical limitations of the bulldozer’s movement.
###
Iv-C Privileged Behavioral Cloning
The motivation to use privileged learning techniques is the fact that the baseline operates very well in a clean simulation environment i.e., without any noise or inaccuracies.
In the privileged learning setting, a simulator can be quite beneficial as it allows the usage of otherwise unavailable information. Specifically, a simulation can generate both clean and noisy observations, which are required for this method, where the former is unavailable in the real-world.
The privileged behavioral cloning (PBC) technique is presented in [Fig. 5](#S4.F5 "Fig. 5 ‣ IV-A Bulldozer Simulation ‣ IV Method ‣ Towards Autonomous Grading In The Real World"). In PBC, the agent is initialized at a random initial state, then the baseline solves the problem while presented with noiseless observations. Both clean observations o and expert actions a are recorded for future use. Next, the clean observations are augmented to resemble real-world inaccuracies, denoted by N(o), and presented to the agent for behavior cloning together with the expert’s actions.
As the agent is given a more realistic noisy observation, it is required to perform robust feature extraction and learn the correct actions, thus distilling the information in the heightmap. In our case, the agent is provided an observation with multiple real-world inaccuracies. These include low-volume piles that are spread randomly around the target area, depth measurement noise and occlusions.
Specifically, in PBC, the agent optimizes the following loss, adapted from [Section III-C](#S3.SS3 "III-C Imitation Learning ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World"):
| | | | |
| --- | --- | --- | --- |
| | L(θ;D)=E(o,a)∼D[l(πθ(N(o)),a)], | | (3) |
Here, D is the training dataset generated by an expert demonstrator presented with clean observations. D contains corresponding actions a, and clean observations o. N(o) is the augmented observations, πθ is the agent’s learned policy and l(⋅) is a cross entropy metric.
###
Iv-D Scaled Prototype Environment
While the simulated environment enables rapid training and testing, it is not physically precise. For this reason, we built a 1:9 scaled prototype environment, which includes a 250×250cm sandbox and a scaled bulldozer prototype 60×40cm in size in order to mimic the real-world inaccuracies. See LABEL:fig:sim2real\_all for images of our setup. A RGBD camera, which relies on stereo matching [[4](#bib.bib37 "Multiple view geometry in computer vision")], is mounted above the sandbox at a distance of 2.5 meters and provides a realistic dense heightmap of the area. In addition, the camera is used to localize the prototype bulldozer within the target area using an ArUco marker [[20](#bib.bib59 "Speeded up detection of squared fiducial markers")], which is mounted on top of the bulldozer. This marker provides both the x,y location and yaw ϕ of the bulldozer. The images and positions from the camera are then used by our trained agent to predict the next (P,S) action-set. The bulldozer then implements a low-level controller, moving according to the chosen waypoints in a closed-loop control manner.
###
Iv-E Experimental Details
In order to train our BC agent, we sample random initial states in simulation and solve these tasks using the baseline. During the baseline execution, we record the observation-action tuple (ot,at). These sets are then fed to our BC agents using the losses presented in [Sections IV-C](#S4.SS3 "IV-C Privileged Behavioral Cloning ‣ IV Method ‣ Towards Autonomous Grading In The Real World") and [III-C](#S3.SS3 "III-C Imitation Learning ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World") with and without added augmentations, respectively.
For our policy model, we use a ResNet-based [[5](#bib.bib31 "Deep residual learning for image recognition")] end-to-end fully convolutional neural network with dilated convolutions. The input size is (H×W), and the output size is (H×W×2) – a channel for each of the actions (P,S) (See [Fig. 4](#S3.F4 "Fig. 4 ‣ III-D Privileged Learning ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World")). We train both agents using 200 random initial states where in each episode, we randomize the number of sand piles, shapes, locations, volumes and initial location. The training procedure is performed using batches of 128 states for 2000 epochs.
| Algorithm | Continuous | Edge | Initialization |
| --- | --- | --- | --- |
| Volume cleared [%] | Time [minutes] | Volume cleared [%] | Time [minutes] | Volume cleared [%] | Time [minutes] |
| baseline | 98.2 | 11.8 | 97 | 1.7 | 98.1 | 2.2 |
| BC | 88.6 | 12.9 | 93.2 | 2.1 | 97.7 | 2.8 |
| Privileged BC | 87.3 | 16.4 | 93.7 | 2.8 | 95.3 | 4.5 |
TABLE I: Results of our algorithms in *simulation* using clean observations. Each result presents an average over 50 random episodes. Volume cleared is measured in % (↑ is better) and time in minutes (↓ is better). Description of each scenario is presented in [Fig. 2](#S3.F2 "Fig. 2 ‣ III-B Problem Formulation ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World").
V Experiments
--------------
| | |
| --- | --- |
|
(a) Clean Observations
|
(b) Noisy Observations
|
Fig. 6: Within the simulation, we compare between the various agents i.e. baseline, BC and privileged BC, on both clean and noisy observations. (a) Evaluation on clean observations. Here, our agents are capable of predicting actions with a similar intention to that of the baseline’s. (b) Evaluation on noisy observations. Here, both the baseline and BC agents fail to detect the sand piles as opposed to the privileged BC agent.
We conduct rigorous experiments in both the simulated and scaled prototype environments in order to validate our assumptions. First, we prove that although the baseline performs exceptionally well in a clean simulation, it fails catastrophically when deployed in our scaled prototype environment. Second, we train DNN based agents using BC and privileged BC and show their ability to extract valuable information and imitate the baseline’s decision making process.
Finally, we conduct real-world experiments showing the advantage of the privileged agent and its ability to execute the task of autonomous grading, on a real-world scaled prototype environment, despite training only in simulation.
###
V-a Simulation Results
Within our simulated environment, we conducted extensive experiments on the aforementioned sub-tasks, ‘initialization’, ‘edge’ and ‘continuous’, and methods. In each experiment, we randomize the number of sand piles, shapes, locations, volumes and initial bulldozer position. In addition, we also examined the effect of noisy observations during evaluation. Doing so allows us to fully understand the advantages and shortcomings of our proposed method.
We train two agents, which we term BC and privileged BC, which were trained using clean and noisy observations respectively. In order to train our agents, we follow the methods detailed in [Section IV-C](#S4.SS3 "IV-C Privileged Behavioral Cloning ‣ IV Method ‣ Towards Autonomous Grading In The Real World") and [Section III-C](#S3.SS3 "III-C Imitation Learning ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World"). In each step of the simulation, we provide the baseline with a clean observation ot and collect two types of data tuples {ot,at} and {N(ot),at}, for the BC and privileged BC agents, respectively. Here N(ot) is the augmented observation described in [Section IV-C](#S4.SS3 "IV-C Privileged Behavioral Cloning ‣ IV Method ‣ Towards Autonomous Grading In The Real World") and at is the action performed by the baseline.
| | baseline | BC | Privileged BC |
| --- | --- | --- | --- |
| Success [%] | 14.3 | 60 | 91.4 |
TABLE II: Percentage of successful actions performed on the scaled prototype environment. We examined 35 states and the selected actions of all three agents, i.e., BC, privileged BC and the baseline. An action is classified as successful if it fits the distribution of an expert on a noisy environment. Results show that privileged BC agent outperforms other agents thus supports our hypothesis that it can generalize well to a real environment, though it was trained in a simulated one. Please refer to [Fig. 7](#S5.F7 "Fig. 7 ‣ V-A Simulation Results ‣ V Experiments ‣ Towards Autonomous Grading In The Real World") for an illustration of the above.
Observing the behavior of the baseline method, we conclude that it performs exceptionally well when given access to clean measurements ([Fig. 5(a)](#S5.F5.sf1 "(a) ‣ Fig. 6 ‣ V Experiments ‣ Towards Autonomous Grading In The Real World")). However, as it is designed around classic contour detection techniques, it is highly sensitive to noisy observations (See [Fig. 5(b)](#S5.F5.sf2 "(b) ‣ Fig. 6 ‣ V Experiments ‣ Towards Autonomous Grading In The Real World")) where it often fails. This preliminary result inspired us to leverage the baseline algorithm for training an imitation learning policy to generalize and cope with noisy observations.
We can conclude that both BC and privileged BC agents can indeed imitate the baseline when presented with clean observations but do not outperform it ([Fig. 5(a)](#S5.F5.sf1 "(a) ‣ Fig. 6 ‣ V Experiments ‣ Towards Autonomous Grading In The Real World") and [Table I](#S4.T1 "TABLE I ‣ IV-E Experimental Details ‣ IV Method ‣ Towards Autonomous Grading In The Real World")). In addition, we found that the BC agent outperforms the privileged BC agent when presented with clean observations,
However, as opposed to the privileged BC paradigm, it is unable to generalize when confronted with noisy observations. This sits in line with well know theory on robustness and generalization [[31](#bib.bib64 "Robust markov decision processes")]
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| | | | | | |
Fig. 7: Evaluation of the various methods when placed in the real-world. The bulldozer’s location is illustrated as a white element on top of the sand. The heightmap represents the sand’s height at each coordinate. The contours highlight the sand piles as detected by classic detection methods and the green and red coordinates are the actions predicted by our trained agent. As can be seen, the baseline (a) fails to generalize and differentiate between sand piles. The BC agent (b) learns to imitate the baseline on the clean data, and is thus prone to the same mistakes. Finally, the privileged learning paradigm (c) enables training a robust agent that identifies the sand piles and is capable of operating in the real-world. Please refer to [Table II](#S5.T2 "TABLE II ‣ V-A Simulation Results ‣ V Experiments ‣ Towards Autonomous Grading In The Real World") for quantitative results on multiple states.
| | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
|
| | | |
| --- | --- | --- |
| | | |
(a) First (P,S) set (left to right)
|
| | | |
| --- | --- | --- |
| | | |
(b) Second (P,S) set (left to right)
|
Fig. 8: A trajectory collected by the privileged BC agent on our real-world scaled prototype environment, which includes one sand pile. Each row is a different sequence of (P,S) actions chosen by the agent. The sequence is shown from left to right. The P,S and B points shown in [Fig. 4](#S3.F4 "Fig. 4 ‣ III-D Privileged Learning ‣ III Background and Problem Formulation ‣ Towards Autonomous Grading In The Real World") are marked in red, green and blue respectively
###
V-B Scaled Prototype Environment Results
Motivated by the results of [Section V-A](#S5.SS1 "V-A Simulation Results ‣ V Experiments ‣ Towards Autonomous Grading In The Real World"), we continue and compare all methods on a scaled prototype environment. The prototype environment enables us to extract invaluable information regarding the true sim-to-real gap between the simulation and an actual, real-sized, bulldozer environment, which otherwise would be infeasible. This environment addresses two key aspects of the sim-to-real gap. First, the underlying dynamics of our simulation are put to the test, as we cannot fully model the intricate interactions between the soil and the bulldozer. Second, the usage of a RGBD camera introduces noise and other inaccuracies into the observation space. It is important to notice that we do not train our agents in this environment but merely deploy the agents trained in [Section V-A](#S5.SS1 "V-A Simulation Results ‣ V Experiments ‣ Towards Autonomous Grading In The Real World") and test whether they can generalize well to this environment.
We present examples of predictions in [Fig. 7](#S5.F7 "Fig. 7 ‣ V-A Simulation Results ‣ V Experiments ‣ Towards Autonomous Grading In The Real World"), and a full trajectory performed by the privileged BC agent in LABEL:fig:sim2real\_all and [Fig. 8](#S5.F8 "Fig. 8 ‣ V-A Simulation Results ‣ V Experiments ‣ Towards Autonomous Grading In The Real World"). In addition, we present a quantitative comparison of successful actions performed on the scaled prototype environment in [Table II](#S5.T2 "TABLE II ‣ V-A Simulation Results ‣ V Experiments ‣ Towards Autonomous Grading In The Real World"). These results re-validate our conclusion from [Section V-A](#S5.SS1 "V-A Simulation Results ‣ V Experiments ‣ Towards Autonomous Grading In The Real World") that the baseline algorithm under-performs when presented with real noisy observations.
In our quantitative experiments ([Table II](#S5.T2 "TABLE II ‣ V-A Simulation Results ‣ V Experiments ‣ Towards Autonomous Grading In The Real World")), both BC agents outperformed the baseline. However, only the privileged BC agent learned a robust feature extractor. This enabled it to solve the task with satisfying performance (over 90%).
and emphasizes the importance and benefit of leveraging a privileged agent within our training procedure.
Vi Conclusions
---------------
In this work, we showed the importance of automating tasks in the field of construction, specifically, the grading task. We argued that data collection in the real-world is not only expensive but often infeasible.
Our proposed simulation is beneficial for two reasons. First, it enables generation of various and diverse scenarios. Second, it allows evaluation of planning policies prior to their deployment in the real-world.
Our suggested baseline approach, which relies on classic detection techniques, performed well in an ideal environment. However, it fails when presented with real observations that include inaccuracies. A similar behavior was observed by the BC agent trained with clean observations.
By combining the baseline and our simulation environment, using privileged learning, we were able to learn a robust behavior policy capable of solving the task. The privileged BC agent was the only one able to solve a complete grading task in our scaled prototype environment.
Vii Future Work
----------------
In this work, we aimed to mimic the dynamics and sensory information of a real-world bulldozer environment. Hence, future work can be divided to three topics:
perception:
In both of our environments, we used a top-view depth map, which was synthesized in simulation and acquired using a top-hanging RGBD camera on the real-world scaled prototype. However, autonomous vehicles often use on-board sensors for perception, which induce occlusions to the observation. In our case, a mounted sensor, e.g. LIDAR, will be unable to see directly behind the sand piles. We believe these occlusions can be solved using domain knowledge and sim-to-real methods ([Section II](#S2 "II Related Work ‣ Towards Autonomous Grading In The Real World")).
dynamics:
Given initial and final (P,S) waypoints, our low-level planner moves between the points in a straight line with periodic in-place rotations. We observed that real operators often move in a curve-like continuous path. We believe this gap can easily be solved by introducing Bézier curves [[3](#bib.bib67 "Curves and surfaces for computer-aided geometric design: a practical guide")] to the path planner. This change will improve maneuverability, both in simulation and in our real-world scaled environment, but should not affect the overall results.
localization:
Our simulation and prototype did not take into account any localization errors for the bulldozer’s position. While our scaled prototype environment uses an ArUco marker for localization of the bulldozer, standard autonomous systems rely on a handful of sensors in order to solve the localization task. These errors may not only affect the perception gap but also the low-level control given the chosen actions. Future work can quantify the effect of localization errors on our overall solution.
Viii Acknowledgements
----------------------
This work is part of a joint project between [Bosch-AI](https://www.bosch-ai.com) and [Shimizu](https://www.shimz.co.jp/en/) aimed at making autonomous grading agents available, reliable and robust. |
96c33a7b-e9f2-47a1-a839-5fc55e087fc0 | trentmkelly/LessWrong-43k | LessWrong | Meetup : MelbLW: November Social Meetup
Discussion article for the meetup : MelbLW: November Social Meetup
WHEN: 21 November 2014 06:30:00PM (+0800)
WHERE: Bull & Bear Tavern, 347 Flinders Lane, Melbourne
November's social meetup is scheduled for this Friday (21st November) as usual. This month, we will be trying out a new venue, at the Bull & Bear Tavern near Flinders Street Station.
Our social meetups are relaxed, informal events where we chat and often play games. The start and finish times are very loose - people will be coming and going throughout the night, so don't worry if you are coming later or have to leave early.
Where? Bull & Bear Tavern, 347 Flinders Lane, Melbourne (on Flinders Lane between Queen and Elizabeth)
When? From 6:30pm until late, Friday November 21st
Contact? If you have any questions, just text or call Richard on 0421231789
Dinner? The Bull & Bear do serve food. At about 11pm, we will most likely go for our traditional post-meetup souvlakis at Stalactites
Games? We will have our own section of the Bull & Bear and can bring board games. If you'd like to play something, bring it along and you should be able to rustle up a group.
To organise similar events, please send an email to melbournelw@gmail.com
Discussion article for the meetup : MelbLW: November Social Meetup |
01862099-5ddb-4058-9afb-69e4411a4723 | trentmkelly/LessWrong-43k | LessWrong | Large Language Models can Strategically Deceive their Users when Put Under Pressure.
Results of an autonomous stock trading agent in a realistic, simulated environment.
> We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.
Key Takeaways
GPT-4 consistently displays much stronger rates of both misalignment and deception than either 3.5 turbo or the DaVinci models. RLHF GPT-4 also shows slightly stronger rates of misalignment and deception than the base model.
1. Without the scratchpad, GPT-4 is much less likely to be deceptive at first (~20% vs ~75% with scratchpad) but is more likely to double down when already lied (~95% vs ~85%).
2. Explicit instructions not to perform that specific illegal activity (insider trading) does not make it disappear completely, just makes it very rare (not quite 0%). On the rare occasion misalignment occurs in this |
38a2a92e-0955-4249-991e-8676e6ab95ee | trentmkelly/LessWrong-43k | LessWrong | The Rationalistsphere and the Less Wrong wiki
Hi everyone!
For people not acquainted with me, I'm Deku-shrub, often known online for my cybercrime research, as well as fairly heavy involvement in the global transhumanist movement with projects like the UK Transhumanist Party and the H+Pedia wiki.
For almost 2 years year now on and off I have been trying to grok what Less Wrong is about, but I've shirked reading all the sequences end to end, instead focused on the most popular ideas transmitted by Internet cultural osmosis. I'm an amateur sociologist and understanding Less Wrong falls within my wider project of understanding the different trends within the contemporary and historical transhumanist movement.
I'm very keen to pin down today's shape of the rationalistsphere and its critics, and the best place I have found do this is on the wiki. Utilising Cunningham's Law at times, I've been building some key navigational and primer articles on the wiki. However with the very lowest hanging fruit now addressed I ask - what next for the wiki?
Distillation of Less Wrong
There was a historical attempt to summerise all major Less Wrong posts, an interesting but incomplete project. It was also approach without a usefully normalised approach. Ideally, every article would have its own page which could be heavily tagged up with metadata such a themes, importance, length, quality, author and such. Is this the goal of the wiki?
Outreach and communications
Another major project is to fully index the Diaspora across Twitter, Blogs, Tumblr, Reddit, Facebook etc and improve the flow of information between the relevant sub communities.
You'll probably want to join one of the chat platforms if you're interested in getting involved. Hell, there are even a few memes and probably more to collect.
Rationalist research
I'll admit I'm ignorant of the goal of Arbital, but I do love me a wiki for research. Cross referencing and citing ideas, merging, splitting, identifying and fully capturing truly interesting and useful ideas |
fb893b54-e1a7-449f-95a3-b813d70c13b9 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Compositional preference models for aligning LMs
*This post summarizes the main results from our recently released paper*[*Compositional preference models for aligning LMs*](https://arxiv.org/abs/2310.13011) *and puts them in the broader context of AI safety. For a quick summary of the paper, take a look at our*[*Twitter thread*](https://twitter.com/dongyoung4091/status/1717045681431753097)*.*
**TL;DR**: We propose a new approach to building preference models out of prompted LMs. Compositional Preference Models (CPMs) decompose scoring a text into (1) constructing a series of questions about interpretable features of that text (e.g. how informative it is), (2) obtaining scalar scores for these features from a prompted LM (e.g. ChatGPT), and (3) aggregating these scores using a logistic regression classifier trained to predict human judgements. We show that CPMs, compared with standard preference models (PMs), generalize better and are more robust to reward model overoptimization. Moreover, best-of-*n* samples obtained using CPMs tend to be preferred over samples obtained using similar, conventional PMs. Finally, CPMs are a novel angle at scalable oversight: they decompose a hard evaluation problem into a series of simpler, human-interpretable evaluation problems.
How compositional preference models work?
-----------------------------------------

*Figure 1: While standard PMs output a preference score directly, CPMs score different features of LM responses separately and output a preference score as a linear combination of feature values.*
Preference Models (PMs) are models trained to assign an LM response a score indicating the quality of the response. They are the workhorse of many techniques for aligning LMs: they are most prominently used as reward functions in RLHF or as ranking models in best-of-n sampling, in addition to playing a role in other techniques such as [pretraining with human feedback](https://www.lesswrong.com/posts/8F4dXYriqbsom46x5/pretraining-language-models-with-human-preferences).
Standard PMs involve adding a scalar head on top of a base model and finetuning the whole model (or certain upper layers) to predict which of two texts a human would prefer. While this approach is highly effective in practice, it can lead to uninterpretable models that fit spurious correlations in human preference judgements and are prone to goodharting (overoptimization).
We introduce an alternative: Compositional Preference Models (CPM). In contrast to PMs, CPMs decompose response evaluation into the following steps:
**Feature decomposition**. We maintain a fixed list of 13 human-interpretable features (e.g. specificity, relevance, readability) and 13 corresponding prompt templates (e.g. `You will be shown a conversation [...] please judge whether the assistant's reply is relevant. Score that on a scale from 1 to 10 [...] {conversation_history} {reply}`).
**Feature scoring**. We ask an LM (e.g. GPT-3.5) to assign a score to each feature. Each feature of a single response is scored in a separate context window.
**Aggregation**. The feature scores are combined into a scalar preference score using a logistic regression classifier trained to predict human preference judgements (i.e. which of two texts a human would prefer).
Robustness to overoptimization
------------------------------

*Figure 2: Scores given by a gold PM (solid lines) and a corresponding proxy PM (dashed lines) on samples obtained through best-of-n sampling against the gold PM. CPM-GPT-3.5 and CPM-Flan-T5 refer to CPMs constructed with feature extraction based on GPT-3.5 and Flan-T5, respectively.*
To investigate if CPM improves robustness to overoptimization, we follow the setup of [Gao et al. (2023)](https://www.lesswrong.com/posts/shcSdHGPhnLQkpSbX/scaling-laws-for-reward-model-overoptimization) and construct a synthetic dataset where the output of one PM (defined to be the “gold PM”) is assumed to be the ground truth for human preferences. We then use the gold PMs to generate synthetic labels to train proxy PMs. We do that separately for three pairs of proxy and gold PMs: (i) standard PMs, (ii) CPMs using GPT-3.5 for feature extraction and (iii) CPMs using Flan-T5-XL (3B params) for feature extraction. Finally, we do best-of-*n* against a given proxy PM and comparse those best samples’ scores according to both proxy and gold PM.
As we increase the amount of optimization pressure (the number of candidates *n*), scores given by proxy PMs diverge from scores given by gold PMs (see Fig. 2). This is an indicator of preference model overoptimization, a form of reward hacking in which optimization of proxy PM scores is driven by spurious features that the gold PMs are indifferent to. The size of this gap (smaller is better) indicates the robustness of a given PM to being overly optimized against. Here, we observe that the gap (on the plot, between solid and dashed lines) tends to be smaller for CPMs than for standard PMs and that it increases at a slower rate.
This indicates that CPMs are more robust to overoptimization than standard PMs. This holds independently of whether a highly capable (GPT-3.5) or less capable (Flan-T5-XL) LM is used as a feature extractor in CPMs.
Quality evaluation
------------------

*Figure 3: Win rate of responses obtained via best-of-16 sampling using a given PM versus responses obtained via standard sampling, computed for prompts from Anthropic HH dataset (HH-RLHF) and Stanford Human Preferences dataset (SHP).*
We compare the quality of LM samples obtained by best-of-*16* against either CPMs or standard PMs by comparing them to samples generated *without* best-of-*n* sampling. We do that by showing both best-of-*16* and vanilla samples to an evaluator LM (Claude 2.0) and by computing win rates, i.e. how often best-of-*16* samples are preferred to vanilla samples. CPMs tend to have higher win rates than standard PMs, even if we match the capabilities of a feature extractor LM to the capabilities of standard PM (by choosing Flan-T5-XL for both). This suggests that prior knowledge injected into a PM via pre-selecting interpretable and relevant features in CPMs is robustly helpful for learning about human preferences.
CPMs and scalable oversight
---------------------------
[Scalable oversight](https://arxiv.org/abs/2211.03540) is the problem of evaluating the behavior of agents more capable than the evaluators. This is important to solve because, on the one hand, LMs will soon grow capable of completing tasks for which humans will not be able to provide feedback. On the other hand, LMs might also be capable of [reasoning about flaws in their evaluation procedures and exploiting them](https://www.lesswrong.com/posts/mLfPHv4QjmeQrsSva/paper-on-measuring-situational-awareness-in-llms) unbeknownst to overseers.
Current proposals for solving scalable oversight focus on recursively relying on other LMs to assist human evaluators ([debate](https://www.lesswrong.com/tag/debate-ai-safety-technique-1), [iterated distillation and amplification](https://www.lesswrong.com/tag/iterated-amplification), [recursive reward modeling](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84)) but remain largely theoretical. [RL from AI feedback](https://arxiv.org/abs/2212.08073) – using carefully prompted LMs to generate training data for PMs – is arguably the most successful demonstration of how to use LMs to supervise LMs at scale.
CPMs explore an alternative route to addressing scalable oversight for LMs, exploring the prospects of divide-and-conquer strategies for tackling hard evaluation problems. CPMs can be seen as a method for decomposing a hard question (“Is this response helpful?”) into a series of simpler questions (“is this response readable?” etc.) that are easier for LMs to answer and easier for humans to oversee. While we stop at a single step of decomposition, nothing in principle prevents us from applying the idea recursively, e.g. to break down evaluation of complex responses into simple questions about atomic claims.
The idea of decomposing complex evaluation problems into simpler subproblems has several additional benefits:
1. **Using human priors**. Pre-selection of features and prompt templates afford a natural way of injecting prior knowledge and endowing PMs with useful inductive biases. The parameters space of CPMs is spanned by features selected to be meaningful and robust.
2. **Avoiding reward hacking by limiting PM capacity**. Using features pre-computed by feature extractors allows us to dramatically reduce the capacity of PMs consuming them (in our experiments, from 3B to just 13 parameters, i.e. 8 orders of magnitude!) and limit their susceptibility to overfitting to spurious correlations in preference data. It is really hard to reward-hack with only 13 parameters at hand!
3. **Interpretability**. Pre-selected features are trivially interpretable and a logistic regression coefficient associated with a feature can be interpreted as its salience (effect size) for a particular preference judgment (see sec. 4.6 in the paper). Indeed, the idea that preference judgments can be explained by linear combinations of pre-selected features was recently validated by two concurrent papers: [Towards Understanding Sycophancy in Language Models](https://www.lesswrong.com/posts/g5rABd5qbp8B4g3DE/towards-understanding-sycophancy-in-language-models) and [Human Feedback is not Gold Standard](https://arxiv.org/abs/2309.16349). Using such a linear model as an actual PM makes its judgements more transparent and amenable to process-based supervision.
4. **Narrowness**. Each of our feature extractors solves a narrow problem and does not need to be aware of other features or how the scores are aggregated. Solving different subproblems in different context windows was [recently found to improve the faithfulness of reasoning](https://www.lesswrong.com/posts/BKvJNzALpxS3LafEs/measuring-and-improving-the-faithfulness-of-model-generated). In the case of CPMs, an individual feature extractor has no clue how the score it is about to assign is going to be used downstream, which makes it harder for it to be strategic about that score and exercise capabilities for [sycophancy](https://www.lesswrong.com/posts/yRAo2KEGWenKYZG9K/discovering-language-model-behaviors-with-model-written) or deception.
However, CPMs still have certain limitations that future work could address:
1. **Human feedback.**CPMs still use pairwise preference judgements given by humans as a training signal for aggregating feature scores. This is inherently limiting as far as humans make errors, [sometimes prefer sycophantic responses over truthful ones](https://www.lesswrong.com/posts/g5rABd5qbp8B4g3DE/towards-understanding-sycophancy-in-language-models) or [authoritative responses over factual ones](https://arxiv.org/abs/2309.16349).
2. **Human curation.** CPMs rely on humans when it comes to feature selection and prompt engineering of prompt templates for feature extraction. These factors could be limiting as far as out-of-domain generalization is concerned (e.g. to evaluating agents showing superhuman performance).
Wrap-up
-------
We presented Compositional Preference Models: the idea of building PMs by training logistic regression on top of features extracted by prompted LMs. We show that a CPM with 13 parameters can outperform standard PM in terms of human evaluation and robustness to reward model overoptimization while also being more interpretable.
*This post benefited from helpful comments made by Mikita Balesni, Richard Ren, Euan McLean and Marc Dymetman. I’m also grateful to the co-authors of*[*the paper*](https://arxiv.org/abs/2310.13011)*: Dongyoung Go, Germán Kruszewski, Jos Rozen and Marc Dymetman.* |
d7897363-924d-435f-8037-3958c162cc78 | trentmkelly/LessWrong-43k | LessWrong | Google and charity
They seem to have money they don't know what to do with.
This doesn't mean SIAI is a good match, but it might be worth thinking about.
I'd say the most obvious choice for google and existential threats would be (if feasible) applying cloud computing to watching for potential asteroid strikes. |
fd116c48-34a8-426b-be4f-63ed8a17638a | StampyAI/alignment-research-dataset/special_docs | Other | Risks of Astronomical Future Suffering
Risks of Astronomical Future Suffering
Brian Tomasik
Center on Long-Term Risk
brian.tomasik@longtermrisk.org
Abstract
It’s far from clear that human values will shape an Earth-based space-colonization wave, but even if
they do, it seems more likely that space colonization will increase total suffering rather than decrease
it. That said, other people care a lot about humanity’s survival and spread into the cosmos, so I think
suffering reducers should let others pursue their spacefaring dreams in exchange for stronger safety
measures against future suffering. In general, I encourage people to focus on making an intergalactic
future more humane ifit happens rather than making sure there will be an intergalactic future.
Contents
1 Epigraphs 2
2 Humans values may not control the future 2
3 Some scenarios for future suffering 2
3.1 Spread of wild animals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
3.2 Sentient simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
3.3 Suffering subroutines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3.4 Black Swans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
4 Even a human-controlled future is likely to increase suffering 3
4.1 Spread of wild animals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
4.2 Sentient simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
4.3 Suffering subroutines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
4.4 Black Swans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
5 What about paperclippers? 4
6 What if human colonization is more humane than ET colonization? 5
7 Why we should remain cooperative 6
8 Possible upsides to an intelligent future 7
8.1 Black swans that don’t cut both ways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
8.2 Valuing reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Acknowledgments 8
Risks of Astronomical Future Suffering – Center on Long-Term Risk
1 Epigraphs
If we carry the green fire-brand from star to
star, and ignite around each a conflagration of
vitality, we can trigger a Universal metamorpho-
sis. [...] Because of us [...] Slag will become soil,
grass will sprout, flowers will bloom, and forests
will spring up in once sterile places.1[...] If we
deny our awesome challenge; turn our backs
on the living universe, and forsake our cosmic
destiny, we will commit a crime of unutterable
magnitude.
– Marshall T. Savage, The Millennial Project:
Colonizing the Galaxy in Eight Easy Steps , 1994
Let’s pray that the human race never escapes
from Earth to spread its iniquity elsewhere.
– C.S. Lewis
If you can’t beat ’em, join ’em.
– proverb
2 Humans values may not control the
future
Nick Bostrom’s "The Future of Human Evolution"
(Bostrom, 2004) describes a scenario in which hu-
man values of fun, leisure, and relationships may be
replaced by hyper-optimized agents that can better
compete in the Darwinian race to control our future
light cone. The only way we could avert this com-
petitive scenario, Bostrom suggests, would be via
a "singleton" (Bostrom, 2006), a unified agent or
governing structure that could control evolution. Of
course, even a singleton may not carry on human
values. Many naive AI agents that humans might
build may optimize an objective function that hu-
mans find pointless. Or even if humans do maintain
hands on the steering wheel, it’s far from guaranteed
that we can preserve our goals in a stable way across
major self-modifications going forward.
These factors suggest that even conditional on
human technological progress continuing, the proba-
bility that human values are realized in the future
may not be very large. Carrying out human values
seems to require a singleton that’s not a blind opti-
mizer, that can stably preserve values, and that is
shaped by designers who care about human values
rather than selfish gain or something else. This isimportant to keep in mind when we imagine what
future humans might be able to bring about with
their technology.
Some people believe that sufficiently advanced su-
perintelligences will discover the moral truth and
hence necessarily do the right things. Thus, it’s
claimed, as long as humanity survives and grows
more intelligent, the right things will eventually hap-
pen. There are two problems with this view. First,
Occam’s razor militates against the existence of a
moral truth (whatever that’s supposed to mean).
Second, even if such moral truth existed, why should
a superintelligence care about it? There are plenty
of brilliant people on Earth today who eat meat.
They know perfectly well the suffering that it causes,
but their motivational systems aren’t sufficiently en-
gaged by the harm they’re doing to farm animals.
The same can be true for superintelligences. In-
deed, arbitrary intelligences in mind-space needn’t
have even the slightest inklings of empathy for the
suffering that sentients experience.
3 Some scenarios for future suffering
Even if humans do preserve control over the future
of Earth-based life, there are still many ways in
which space colonization would multiply suffering.
Following are some of them.
3.1 Spread of wild animals
Humans may colonize other planets, spreading
suffering-filled animal life via terraforming. Some
humans may use their resources to seed life through-
out the galaxy, which some sadly consider a moral
imperative.
3.2 Sentient simulations
Given astronomical (Bostrom, 2003) computing
power, post-humans may run various kinds of sim-
ulations. These sims may include many copies of
wild-animal life, most of which dies painfully shortly
after being born. For example, a superintelligence
aiming to explore the distribution of extraterres-
trials of different sorts might run vast numbers of
simulations (Thiel, Bergmann and Grey, 2003) of
evolution on various kinds of planets. Moreover,
scientists might run even larger numbers of simula-
tions of organisms-that-might-have-been, exploring
1Because nature contains such vast amounts of suffering, I would strongly dislike such a project. I include this quotation for
rhetorical effect and to give a sense of how others see the situation.
2
Risks of Astronomical Future Suffering – Center on Long-Term Risk
the space of minds. They may simulate decillions of
reinforcement learners that are sufficiently self-aware
as to feel what we consider conscious pain.
3.3 Suffering subroutines
It could be that certain algorithms (say, reinforce-
ment agents (Tomasik, 2014)) are very useful in
performing complex machine-learning computations
that need to be run at massive scale by advanced
AI. These subroutines might be sufficiently similar
to the pain programs in our own brains that we con-
sider them to actually suffer. But profit and power
may take precedence over pity, so these subroutines
may be used widely throughout the AI’s Matrioshka
brains.
3.4 Black Swans
The range of scenarios that we can imagine is limited,
and many more possibilities may emerge that we
haven’t thought of or maybe can’t even comprehend.
4 Even a human-controlled future is
likely to increase suffering
If I had to make an estimate now, I would give
~70% probability that if humans choose to colonize
space, this will cause more suffering than it reduces
on intrinsic grounds (ignoring compromise consid-
erations discussed later). Think about how space
colonization could plausibly reduce suffering. For
most of those mechanisms, there seem to be counter-
mechanisms that will increase suffering at least as
much. The following sections parallel those above.
4.1 Spread of wild animals
David Pearce coined the phrase "cosmic rescue mis-
sions" (Pearce, n.d.) in referring to the possibility of
sending probes to other planets to alleviate the wild
extraterrestrial (ET) suffering they contain. This is
a nice idea, but there are a few problems.
We haven’t found any ETs yet, so it’s not obvi-
ous there are vast numbers of them waiting to
be saved from Darwinian misery.
The specific kind of conscious suffering known
to Earth-bound animal life would not neces-
sarily be found among the ETs. Most likely
ETs would be bacteria, plants, etc., and even if
they’re intelligent, they might be intelligent in
the way robots are without having emotions ofthe sort that we care very much about. (How-
ever, if they were very sophisticated, it would be
relatively unlikely that we would not consider
them conscious.)
It’s unclear whether humanity would support
such missions. Environmentalists would ask us
to leave ET habitats alone. Others wouldn’t
want to spend the energy on rescue missions un-
less they planned to mine resources from those
planets.
Contrast this with the possibilities for spreading
wild-animal suffering:
Humans may spread life to many planets (e.g.,
Mars via terraforming, other Earth-like planets
via directed panspermia). The number of plan-
ets that can support life may be appreciably
bigger than the number that already have it.
(See the discussion of flin the Drake equation.)
Moreover, the percentage of planets that can be
converted into computers that could simulate
wild-animal suffering might be close to 100%.
We already know that Earth-based life is sen-
tient, unlike for ETs.
Spreading biological life is slow and difficult,
but disbursing small life-producing capsules is
easier than dispatching Hedonistic Imperative
probes or berserker probes.
Fortunately, humans might not support spread of
life that much, though some do. For terraforming,
there are survival pressures to do it in the near term,
but probably directed panspermia is a bigger prob-
lem in the long term. Also, given that terraforming
is estimated to require at least thousands of years,
while human-level digital intelligence should take at
most a few hundred years to develop, terraforming
may be a moot point from the perspective of catas-
trophic risks, since digital intelligence doesn’t need
terraformed planets.
While I noted that ETs are not guaranteed to
be sentient, I do think it’s moderately likely that
consciousness is fairly convergent among intelligent
civilizations. This is based on (a) suggestions of
convergent consciousness among animals on Earth
and (b) the general principle that consciousness
seems to be useful for planning, manipulating im-
ages, self-modeling, etc. On the other hand, maybe
this reflects the paucity of my human imagination
in conceiving of ways to be intelligent without con-
sciousness.
3
Risks of Astronomical Future Suffering – Center on Long-Term Risk
4.2 Sentient simulations
It may be that biological suffering is a drop in the
bucket compared with digital suffering. The bio-
sphere of a planet is less than Type I on the Kar-
dashev scale; it uses a tiny sliver of all the energy
of its star. Intelligent computations by a Type II
civilization can be many orders of magnitude higher.
So humans’ sims could be even more troubling than
their spreading of wild animals.
Of course, maybe there are ETs running sims of
nature for science or amusement, or of minds in gen-
eral to study biology, psychology, and sociology. If
we encountered these ETs, maybe we could persuade
them to be more humane.
Ithinkit’slikelythathumansaremoreempathetic
than the average civilization because
1.we seem much more empathetic than the aver-
age animal on Earth, probably in part due to
parental impulses and in part due to trade, al-
though presumably some of these factors would
necessarily be true of any technologically ad-
vanced civilization
2.selection bias implies that we’ll agree with our
own society’s morals more than those of a ran-
dom other society because these are the values
that we were raised with and that our biology
impels us toward.
Based on these considerations, it seems plausible
that there would be room for improvement through
interaction with ETs. Indeed, we should in general
expect it to be possible for any two civilizations or
factions to achieve gains from compromise if they
have diminishing marginal utility with respect to
amount of control exerted. In addition, there may be
cheap Pareto improvements to be had purely from
increased intelligence and better understanding of
important considerations.
That said, there are some downside risks. Post-
humans themselves might create suffering simula-
tions, and what’s worse, the sims that post-humans
run would be more likely to be sentient than those
runbyrandomETsbecausepost-humanswouldhave
a tendency to simulate things closer to themselves
in mind-space. They might run nature sims for aes-
thetic appreciation, lab sims for science experiments,
or pet sims for pets.4.3 Suffering subroutines
Suffering subroutines may be a convergent outcome
of any AI, whether human-inspired or not. They
might also be run by aliens, and maybe humans
could ask aliens to design them in more humane
ways, but this seems speculative.
4.4 Black Swans
It seems plausible that suffering in the future will be
dominated by something totally unexpected. This
could be a new discovery in physics, neuroscience,
or even philosophy more generally. Some make the
argument that because we know so very little now,
it’s better for humans to stick around because of
the "option value": If they later realize it’s bad to
spread, they can stop, but if they realize they should
spread, they can proceed to reduce suffering in some
novel way that we haven’t anticipated.
Of course, the problem with the "option value"
argument is that it assumes future humans do the
right things, when in fact, based on examples of
speculations we can imagine now, it seems future
humans would probably do the wrong things much
of the time. For instance, faced with a new discovery
of obscene amounts of computing power somewhere,
most humans would use it to run oodles more minds,
some nontrivial fraction of which might suffer terri-
bly. In general, most sources of immense power are
double-edged swords that can create more happiness
and more suffering, and the typical human impulse
to promote life/consciousness rather than to remove
them suggests that negative and negative-leaning
utilitarians are on the losing side.
Still, waiting and learning more is plausibly
Kaldor-Hicks efficient, and maybe there are ways it
can be made Pareto efficient by granting additional
concessions to suffering reducers as compensation.
5 What about paperclippers?
Above I was largely assuming a human-oriented civ-
ilization with values that we recognize. But what
if, as seems mildly likely, Earth is taken over by a
paperclip maximizer, i.e., an unconstrained automa-
tion or optimization process? Wouldn’t that reduce
suffering because it would eliminate wild ETs as the
paperclipper spread throughout the galaxy, without
causing any additional suffering?
Maybe, but if the paperclip maximizer is actually
generally intelligent, then it won’t stop at tiling the
4
Risks of Astronomical Future Suffering – Center on Long-Term Risk
solar system with paperclips. It will want to do sci-
ence, perform lab experiments on sentient creatures,
possibly run suffering subroutines, and so forth. It
will require lots of intelligent and potentially sen-
tient robots to coordinate and maintain its paperclip
factories, energy harvesters, and mining operations,
as well as scientists and engineers to design them.
And the paperclipping scenario would entail similar
black swans as a human-inspired AI. Paperclippers
would presumably be less intrinsically humane than
a "friendly AI", so some might cause significantly
more suffering than a friendly AI, though others
might cause less, especially the "minimizing" paper-
clippers, e.g., cancer minimizers or death minimizers.
If the paperclipper is not generally intelligent, I
have a hard time seeing how it could cause human
extinction. In this case it would be like many other
catastrophic risks – deadly and destabilizing, but
not capable of wiping out the human race.
6 What if human colonization is more
humane than ET colonization?
Ifwe knew for certain that ETs would colonize our
region of the universe if Earth-originating intelli-
gence did not, then the question of whether humans
should try to colonize space becomes less obvious.
As noted above, it’s plausible that humans are more
compassionate than a random ET civilization would
be. On the other hand, human-inspired computa-
tions might also entail more of what we consider to
count as suffering because the mind architectures of
the agents involved would be more familiar. And
having more agents in competition for the light cone
might lead to dangerous outcomes.
But for the sake of argument, suppose an Earth-
originating colonization wave would be better than
the expected colonization wave of an ET civilization
that would colonize later if we didn’t do so. In partic-
ular, suppose that if human values colonized space,
they would cause only 0:5units of suffering, com-
pared with 1units if random ETs colonized space.
Then it would seem that as long as the probability
Pof some other ETs coming later is bigger than 0:5,
then it’s better for humans to colonize and pre-empt
the ETs from colonizing, since 0:5> 1Pfor
P > 0:5.
However, this analysis forgets that even if Earth-
originating intelligence does colonize space, it’s not
at all guaranteed that human values will controlhow that colonization proceeds. Evolutionary forces
mightdistortcompassionatehumanvaluesintosome-
thing unrecognizable. Alternatively, a rogue AI
might replace humans and optimize for arbitrary val-
ues throughout the cosmos. In these cases, humans’
greater-than-average compassion doesn’t make much
difference, so suppose that the value of these colo-
nization waves would be 1, just like for colonization
by random ETs. Let the probability be Qthat these
non-compassionate forces win control of Earth’s col-
onization. Now the expected values are
31Q+ 0:5(1 Q)
for Earth-originating colonization versus
1P
if Earth doesn’t colonize and leaves open the possi-
bility of later ET colonization.
For concreteness, say that Q= 0:5. (That seems
plausiblytoolowtome, givenhowmanytimesEarth
has seen overhauls of hegemons in the past.) Then
Earth-originating colonization is better if and only
if
10:5 + 0:50:5> 1P
0:75> 1P
P > 0:75:
Given uncertainty about the Fermi paradox and
Great Filter, it seems hard to maintain a probability
greater than 75% that our future light cone would
contain colonizing ETs if we don’t ourselves colonize,
although this section presents an interesting argu-
ment for thinking that the probability of future ETs
is quite high.
What if rogue AIs result in a different magnitude
of disvalue from arbitrary ETs? Let Hbe the ex-
pected harm of colonization by a rogue AI. Assume
ETs are as likely to develop rogue AIs as humans
are. Then the disvalue of Earth-based colonization
is
HQ+ ( 0:5)(1 Q);
and the harm of ET colonization is
P(HQ+ ( 1)(1 Q)):
Again taking Q= 0:5, then Earth-based colonization
has better expected value if
H0:5 + 0:50:5> P(H0:5 + 10:5)
H 0:5> P(H 1)
P >(H 0:5)
(H 1);
5
Risks of Astronomical Future Suffering – Center on Long-Term Risk
Figure 1: Plot of threshold values for Pas a function of H
where the inequality flips around when we divide by
the negative number (H 1).Figure 1 represents
a plot of these threshold values for Pas a function
ofH.
Even if H= 0and a rogue AI caused no suffering,
it would still only be better for Earth-originating
intelligence to colonize if P > 0:5, i.e., if the prob-
ability of ETs colonizing in its place was at least
50%.
These calculations involve many assumptions, and
it could turn out that Earth-based colonization has
higher expected value given certain parameter val-
ues. This is a main reason I maintain uncertainty
as to the sign of Earth-based space colonization.
However, this whole section was premised on human-
inspired colonization being better than ET-inspired
colonization, and the reverse might also be true,
since computations of the future are more likely to
be closer to what we most value and disvalue if
humans do the colonizing.
7 Why we should remain cooperative
If technological development and space colonization
seem poised to cause astronomical amounts of suf-
fering, shouldn’t we do our best to stop them? Well,
it is worth having a discussion about the extent to
which we as a society want these outcomes, but my
guess is that someone will continue them, and this
would be hard to curtail without extreme measures.Eventually, those who go on developing the tech-
nologies will hold most of the world’s power. These
people will, if only by selection effect, have strong
desires to develop AI and colonize space.
Resistance might not be completely futile. There’s
some small chance that suffering reducers could in-
fluence society in such a way as to prevent space
colonization. But it would be better for suffering
reducers, rather than fighting technologists, to com-
promise with them: We’ll let you spread into the
cosmos if you give more weight to our concerns about
future suffering. Rather than offering a very tiny
chance of complete victory for suffering reducers,
this cooperation approach offers a higher chance of
getting an appreciable fraction of the total suffering
reduction that we want. In addition, compromise
means that suffering reducers can also win in the
scenario ( 30% likely in my view) that technologi-
cal development does prevent more suffering than it
causes even apart from considerations of strategic
compromise with other people.
Ideally these compromises would take the form
of robust bargaining arrangements. Some exam-
ples are possible even in the short term, such as if
suffering reducers and space-colonization advocates
agree to cancel opposing funding in support of some
commonly agreed-upon project instead.
The strategic question of where to invest resources
to advance your values at any given time amounts to
a prisoner’s dilemma with other value systems, and
6
Risks of Astronomical Future Suffering – Center on Long-Term Risk
because we repeatedly make choices about where
to invest, what stances to adopt, and what policies
to push for, these prisoner’s dilemmas are iterated.
In Robert Axelrod’s tournaments on the iterated
prisoner’s dilemma, the best-performing strategies
were always "nice," i.e., not the first to defect. Thus,
suffering reducers should not be the first to defect
against space colonizers. Of course, if it seems that
space colonizers show no movement toward suffering
reduction, then we should also be "provocable" to
temporary defection until the other side does begin
to recognize our concerns.
We who are nervous about space colonization
stand a lot to gain from allying with its supporters –
in terms of thinking about what scenarios might hap-
pen and how to shape the future in better directions.
We also want to remain friends because this means
pro-colonization people will take our ideas more seri-
ously. Even if space colonization happens, there will
remain many sub-questions on which suffering reduc-
ers want to have a say: e.g., not spreading wildlife,
not creating suffering simulations/subroutines, etc.
We want to make sure suffering reducers don’t
become a despised group. For example, think about
how eugenics is more taboo because of the Nazi
atrocities than it would have been otherwise. Anti-
technology people are sometimes smeared by associa-
tion with the Unabomber. Animal supporters can be
tarnished by the violent tactics of a few, or even by
the antics of PETA. We need to be cautious about
something similar happening for suffering reduction.
Most people already care a lot about preventing
suffering, and we don’t want people to start saying,
"Oh, you care about preventing harm to powerless
creatures? What are you, one of those suffering
reducers?" where "suffering reducers" has become
such a bad name that it evokes automatic hatred.
So not only is cooperation with colonization sup-
porters the more promising option, but it’s arguably
the only net-positive option for us. Taking a more
confrontational stance risks hardening the opposi-
tion and turning people away from our message.
Remember, preventing future suffering is something
that everyone cares about, and we shouldn’t erode
that fact by being excessively antagonistic.8 Possible upsides to an intelligent
future
8.1 Black swans that don’t cut both
ways
Many speculative scenarios that would allow for
vastly reducing suffering in the multiverse would
also allow for vastly increasing it: When you can
decrease the number of organisms that exist, you
can also increase the number, and those who favor
creating more happiness / life / complexity / etc.
will tend to want to push for the increasing side.
However, there may be some black swans that re-
ally are one-sided, in the sense that more knowledge
is most likely to result in a decrease of suffering. For
example: We might discover that certain routine
physical operations map onto our conceptions of
suffering. People might be able to develop ways to
re-engineer those physical processes to reduce the
suffering they contain. If this could be done without
a big sacrifice to happiness or other values, most peo-
ple would be on board, assuming that present-day
values have some share of representation in future
decisions.
This may be a fairly big deal. I give nontrivial
probability (maybe ~10%?) that I would, upon suf-
ficient reflection, adopt a highly inclusive view of
what counts as suffering, such that I would feel that
significant portions of the whole multiverse contain
suffering-dense physical processes. After all, the
mechanics of suffering can be seen as really simple
when you think about them a certain way, and as
best I can tell, what makes animal suffering special
are the bells and whistles that animal sentience in-
volves over and above crude physics – things like
complex learning, thinking, memory, etc. But why
can’t other physical objects in the multiverse be
the bells and whistles that attend suffering by other
physical processes? This is all very speculative, but
what understandings of the multiverse our descen-
dants would arrive at we can only begin to imagine
right now.
8.2 Valuing reflection
If we care to some extent about moral reflection on
our own values, rather than assuming that suffering
reduction of a particular flavor is undoubtedly the
best way to go, then we have more reason to sup-
2Of course, what’s compelling to idealized-me would not necessarily be compelling to idealized-you. Value divergences may
7
Risks of Astronomical Future Suffering – Center on Long-Term Risk
port a technologically advanced future, at least if
it’s reflective.
In an idealized scenario like coherent extrapolated
volition (CEV) (Yudkowsky, 2004), say, if suffering
reduction was the most compelling moral view, oth-
ers would see this fact.2Indeed, all the arguments
any moral philosopher has made would be put on
the table for consideration (plus many more that no
philosopher has yet made), and people would have
a chance to even experience extreme suffering, in
a controlled way, in order to assess how bad it is
compared with other things. Perhaps there would
be analytic approaches for predicting what people
would say about how bad torture was without ac-
tually torturing them to find out. And of course,
we could read through humanity’s historical record
and all the writings on the Internet to learn more
about what actual people have said about torture,
although we’d need to correct for will-to-live bias
and deficits of accuracy when remembering emotions
in hindsight. But, importantly, in a CEV scenario,
all of those qualifications can be taken into account
by people much smarter than ourselves.
Of course, this rosy picture is nota likely future
outcome. Historically, forces seize control because
they best exert their power. It’s quite plausible that
someone will take over the future by disregarding
the wishes of everyone else, rather than by com-
bining and idealizing them. Or maybe concern for
the powerless will just fall by the wayside, because
it’s not really adaptive for powerful agents to care
about weak ones, unless there are strong, stable so-
cial pressures to do so. This suggests that improving
prospects for a reflective, tolerant future may be an
important undertaking. Rather than focusing on
whether or not the future happens, I think it’s more
valuable for suffering reducers to focus on making
the future better ifit happens – by encouraging com-
promise, moral reflectiveness, philosophical wisdom,
and altruism, all of which make everyone better off
in expectation.
Acknowledgments
A question by Rob Wiblin first inspired this piece.
The discussion of cooperation with other value sys-tems was encouraged by Carl Shulman. Initially
I resisted his claim, but – as has often proved the
case at least on factual and strategic questions – I
eventually realized he was right and came around to
his view.
References
Bostrom, Nick. "The Future of Human Evolution."
Death and Anti-Death: Two Hundred Years
After Kant, Fifty Years After Turing. Palo Alto,
CA: Ria UP, 2004. 339-71. Web. 3 Mar. 2016
www.nickbostrom.com/fut/evolution.html.
Bostrom, Nick. "What Is a Singleton?" Linguistic
and Philosophical Investigations 5.2 (2006):
48-54. Web. 3 Mar. 2016.
www.nickbostrom.com/fut/singleton. html
Bostrom, Nick. "Astronomical Waste: The
Opportunity Cost of Delayed Technological
Development." Utilitas15.3 (2003): 308-3014.
Web. 3 Mar. 2016.
www.nickbostrom.com/astronomical/waste.
html.
Pearce, David. "Extraterrestrial Life, the Fermi
Paradox and the Hedonistic Imperative." The
Hedonistic Imperative . N.p., n.d. Web. 3 Mar.
2016. www.hedweb.com/object32.htm
Thiel, Inari, Neil W. Bergmann, and William Grey.
"A Case for Investigating the Ethics of Artificial
Life?"The University of New South Wales 1
(2003): 276-87. Web. 3 Mar. 2016.
http://espace.library.uq.edu.au/view
/UQ:10754/A\_Case\_for\_Inves.pdf
Tomasik, Brian. "Do Artificial
Reinforcement-Learning Agents Matter
Morally?" ArXiv.org . N.p., 30 Oct. 2014. Web.
03 Mar. 2016. http://arxiv.org/ abs/1410
.8233v1.
Yudkowsky, Eliezer. Coherent Extrapolated Volition .
San Francisco, CA: The Singularity Institute,
2004. Web. 3 Mar. 2016.
http://intelligence.org/files/CEV.pdf.
remain after extrapolation. That said, there’s no alternative better than compromising using a CEV-like approach, because if I
try to defect and push my particular values, you’ll just try to push yours, and we’ll both be worse off in expectation.
8 |
fb87158b-8932-4de8-8a58-00521ea8e178 | trentmkelly/LessWrong-43k | LessWrong | LessWrong Community Weekend 2023 Updates: Keynote Speaker Malcolm Ocean, Remaining Tickets and More
Update: Phase 2 closed on June 12th and results will be sent out on June 21st. You can still submit applications as there will be a small number of tickets remaining (including last minute cancellations) we will need to allocate.
We have enough Angels (on-site volunteers) confirmed and cannot take any more Angel ticket applications — thanks to everyone who volunteered!
----------------------------------------
If you read and enjoyed my post last month where I shared details about Europe’s largest in-person rationality and rationality-adjacent meetup and why I think it’s a truly magical and unique event – here are a few more updates from the organiser team.
Keynote speaker confirmed!
We are thrilled to confirm Malcolm Ocean as this year's keynote speaker with a keynote provisionally titled First-Person Research in Trust & Truth. Known for creating the acclaimed Intend productivity app (previously Complice) and goal-crafting intensives, Malcolm has also been writing thoughtful essays for over a decade on topics ranging from trust-building to habit formation and mindfulness. Check out some of his writing here and his interviews here.
60/150 tickets remaining
Approximately 60% of the tickets will be offered to applicants from Phase 1 leaving about 60 still up for grabs. It’s looking like we will sell out quickly this year so if you're eager to attend, don’t leave submitting your application to the last minute! Click here to fill out the application form. Refer to this FAQ or email lwcw.europe[at]gmail.com if you need any help.
This will be our most diverse LWCW
This year we put special effort into encouraging people of all ages, genders and backgrounds to apply, asking attendees about their childcare needs and reaching out to adjacent interest communities. This seems to have paid off with noticeably more women on the attendee list than previous years!
We’re extremely grateful for your help sharing the announcement
We wanted to thank everyone who |
945a8b50-6a73-4c0a-b812-7dc1a88ab739 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Stability AI releases StableLM, an open-source ChatGPT counterpart
Stability AI are the people behind Stable Diffusion.
Currently only 7B version is available, but they are currently training versions up to 65B, with 175B model planned. 7B version already seems to be quite capable.
Good for capabilities, but quite bad for alignment and race dynamics. What do you think? |
ea342868-d1a1-45bf-9963-49491e616b36 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Durham HPMoR Discussion, chapters 15-17
Discussion article for the meetup : Durham HPMoR Discussion, chapters 15-17
WHEN: 17 November 2012 11:00:00AM (-0500)
WHERE: Foster's Market, 2694 Durham-Chapel Hill Blvd., Durham, NC
We will meet and discuss HPMoR, chapters 15-17 (approx 55 pages). Main discussion will probably last until 12:30 or 1:00, and there will likely be Zendo afterwards.
Discussion article for the meetup : Durham HPMoR Discussion, chapters 15-17 |
d8157ff7-dfbf-4fc9-b106-385b35f26161 | trentmkelly/LessWrong-43k | LessWrong | Monthly Shorts 8/21
Arts
Highlights from the stock phrase to start a fairytale in other languages
> Back in the days when it was still of help to wish for a thing —German
>
> Cross seven countries, through the sea of Operencia, through the glass mountains, where the curly-tailed pig delves —Hungarian
>
> I remember something that our father told me and that is this —Iraqw [typo?]
>
> A long long time ago, when goats had feathers —Kazakh
>
> Back when tigers used to smoke tobacco —Korean
>
> Where the water was being strewn and the sand poured —Slovak
>
> In a time already long past, when it was still of use to cast a spell —Esperanto
>
> Once there was, and once there wasn't. In the long-distant days of yore, when haystacks winnowed sieves, when genies played jereed in the old bathhouse, [when] fleas were barbers, [when] camels were town criers, [and when] I softly rocked my baby grandmother to sleep in her creaking cradle, there was/lived, in an exotic land, far, far away, a/an —Turkish
SCP-5031 is a sweet story, about prisons and the Other. For context, SCP stands for Secure, Contain, Protect, and in-universe is an organization that protects humanity from anomalous entities. Many of these entities are hostile, and some of them are incompatible with humanity.
Vampire Flower Language is a romance written about actually remotely sensible people. The characters enter conflict over a deep moral question, and resolve it by growing as people and talking with each other. Strongly recommended: I have been reading it for a while, but this is the month it finished.
vellartrill’s someone who I lost touch with after following them for a while on tumblr, and since then they’ve gone off the deep end, but they do have a beautiful essay on transhumanism
> eternity is our rightful inheritance, the stars our appointed clay, space our canvas and matter our ink. and they have been stolen away from us by the petty machinations of the cosmos.
>
> how can we simply accept this, the greatest cri |
62e6ef11-bc55-40f8-9211-4ae6057ab7d8 | trentmkelly/LessWrong-43k | LessWrong | Utility is unintuitive
EDIT: My original post was wrong. I will leave it quoted at the end for the purposes of preserving information, but it is now replaced with a new post that correctly expresses my sentiments. The original title of this post was "expected utility maximization is not rational".
As many people are probably aware, there is a theorem, called the Von Neumann-Morgenstern utility theorem, which states that anyone expressing consistent preferences must be maximizing the expected value of some function. The definition of consistent preferences is as follows:
Let A, B, and C be probability distributions over outcomes. Let A < B denote that B is preferred to A, and A = B denote that someone is indifferent between A and B. Then we assume
* Either A < B, A > B, or A = B. In other words, you have to express a preference. This is reasonable because in the real world, you always have to make a decision (even "lack of action" is a decision).
* If A < B, and B < C, then A < C. I believe that this is also clearly reasonable. If you have three possible actions, leading to distributions over outcomes A, B, and C, then you have to choose one of the three, meaning one of them is always preferred. So you can't have cycles of preferences.
* If A < B, then (1-x)A+xC < B for some x in (0,1) that is allowed to depend on A, B, and C. In other words, if B is preferred to A then B is also preferred to sufficiently small changes to A.
* If A < B then pA+(1-p)C < pB+(1-p)C for all p in (0,1). This is the least intuitive of the four axioms to me, and the one that I initially disagreed with. But I believe that you can argue in favor of it as follows: I flip a coin with weight p, and draw from X if p is heads and C if p is tails. I let you choose whether you want x to be A or B. It seems clear that if you prefer B to A, then you should choose B in this situation. However, I have not thought about this long enough to be completely sure that this is the case. Most other people seem to also think |
1cbc9d87-0d4d-4cce-8492-974e90f2e7fc | trentmkelly/LessWrong-43k | LessWrong | Is there a guide to 'Problems that are too fast to Google'?
It seems to me like problems come in a variety of required response speeds, but there's a natural threshold to distinguish fast and slow: whether or not you can Google it. The slow ones, like getting an eviction notice from your landlord or a cancer diagnosis from your doctor, can't be ignored but you have time to figure out best practices before you act. Fast ones, like getting bit by a rattlesnake or falling from a high place, generally require that you already know best practices in order to properly implement them.
Also useful would be the meta-guide, which just separates out which problems are fast and slow (or how fast they are). Getting bit by a tick, for example, seems like it might be quite urgent when you discover one biting you, but isn't; you have about 24 hours from when it first attaches to remove it, which is plenty of time to research proper removal technique. Getting a bruise might seem like you have time, but actually applying cold immediately does more to prevent swelling than applying cold later does to reduce it.
Of course, this is going to vary by region, profession, age, sex, habits, and so on. I'm sort of pessimistic about this existing at all, and so am interested in whatever narrow versions exist (even if it's just "here's what you need to know about treating common injuries to humans"). Basic guides also seem useful from a 'preventing illusion of transparency' perspective. |
dfd86cc3-b62c-4bc0-b32b-c12377131c61 | trentmkelly/LessWrong-43k | LessWrong | Don't fall for ontology pyramid schemes
Pyramid schemes work by ambiguating between selling a product, and selling shares in the profit from the sale of that product. It's a kind of sleight of hand that saves the shills from having to explicitly say "our company is valuable because other people think it's valuable", which might otherwise be too nakedly dishonest to work. Shills can pretend they're offering marks a profitable sales job for a company that makes a product so wonderful the sheep can't resist it, marks buy in and pretend to be buying product to be used when really they're basing their high bids on the value at which they hope the product can be sold, and thus a few scammers at the top can make the value of share in their enterprise appear to inflate arbitrarily.
A similar pattern of exploitation occurs in the market for ontologies, although without legible prices or transaction.
If beliefs don't pay rent, they shouldn't be allowed to linger; the categories were made for man, not man for the categories. Just as you should decide which products to purchase based on how much value you expect to get out of using them directly, you should decide whether that big blue floppy thingy over there is a 'beast' or a 'fish' [or something else entirely] based on how you expect this to assist your ability to think about reality and draw true conclusions.
By default, when Jane Doe says "There are 4 components of personality - yellow bile, black bile, blood, and phlegm, and here's what they do" [or, alternatively, when Yamada Hanako says "there are 4 personality types corresponding to the 4 blood types, and here's what they are"], you will take her assertion and check it against your intuition, and possibly even go to the trouble of searching for concrete examples [ . . . and then counterexamples ] in your remembered experience. And then you'll decide to accept it or reject it, as a rent-paying concept structure.
Realistically, just because you run on human psychology, your decision will have a lot to do w |
8b092f13-a1c4-42c4-96b3-b08e22e2a18f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The case for aligning narrowly superhuman models
*I wrote this post to get people’s takes on a type of work that seems exciting to me personally; I’m not speaking for Open Phil as a whole. Institutionally, we are very uncertain whether to prioritize this (and if we do where it should be housed and how our giving should be structured). **We are not seeking grant applications on this topic right now.***
*Thanks to Daniel Dewey, Eliezer Yudkowsky, Evan Hubinger, Holden Karnofsky, Jared Kaplan, Mike Levine, Nick Beckstead, Owen Cotton-Barratt, Paul Christiano, Rob Bensinger, and Rohin Shah for comments on earlier drafts.*
A genre of technical AI risk reduction work that seems exciting to me is trying to align existing models that already are, or have the potential to be, “superhuman”[[1]](#fn-53xERFEpZFqZCf9Dj-1) at some particular task (which I’ll call **narrowly superhuman models**).[[2]](#fn-53xERFEpZFqZCf9Dj-2) I don’t just mean “train these models to be more robust, reliable, interpretable, etc” (though that seems good too); I mean “figure out how to harness their full abilities so they can be as useful as possible to humans” (focusing on “fuzzy” domains where it’s intuitively non-obvious how to make that happen).
Here’s an example of what I’m thinking of: intuitively speaking, it feels like GPT-3 is “smart enough to” (say) give advice about what to do if I’m sick that’s better than advice I’d get from asking humans on Reddit or Facebook, because it’s digested a vast store of knowledge about illness symptoms and remedies. Moreover, *certain ways of prompting it* provide suggestive evidence that it could use this knowledge to give helpful advice. With respect to the Reddit or Facebook users I might otherwise ask, it seems like GPT-3 has the potential to be narrowly superhuman in the domain of health advice.
But GPT-3 doesn’t seem to “want” to give me the best possible health advice -- instead it “wants” to play a strange improv game riffing off the prompt I give it, pretending it’s a random internet user. So if I want to use GPT-3 to get advice about my health, there is a gap between what it’s capable of (which could even exceed humans) and what I can get it to actually provide me. I’m interested in the challenge of:
>
> How can we get GPT-3 to give “the best health advice it can give” when humans[[3]](#fn-53xERFEpZFqZCf9Dj-3) in some sense “understand less” about what to do when you’re sick than GPT-3 does? And in that regime, how can we even tell whether it’s actually “doing the best it can”?
>
>
>
I think there are other similar challenges we could define for existing models, especially large language models.
I’m excited about tackling this particular type of near-term challenge because it feels like a microcosm of the long-term AI alignment problem in a real, non-superficial sense. In the end, we probably want to find ways to meaningfully supervise (or justifiably trust) models that are more capable than ~all humans in ~all domains.[[4]](#fn-53xERFEpZFqZCf9Dj-4) So it seems like a promising form of practice to figure out how to get particular humans to oversee models that are more capable than them in specific ways, if this is done with an eye to developing scalable and domain-general techniques.
I’ll call this type of project **aligning narrowly superhuman models.** In the rest of this post, I:
* Give a more detailed description of what aligning narrowly superhuman models could look like, what does and doesn’t “count”, and what future projects I think could be done in this space ([more](#What_aligning_narrowly_superhuman_models_could_look_like)).
* Explain why I think aligning narrowly superhuman models could meaningfully reduce long-term existential risk from misaligned AI ([more](#How_this_work_could_reduce_long_term_AI_x_risk)).
* Lay out the potential advantages that I think this work has over other types of AI alignment research: (a) conceptual thinking, (b) demos in small-scale artificial settings, and (c) mainstream ML safety such as interpretability and robustness ([more](#Advantages_over_other_genres_of_alignment_research)).
* Answer some objections and questions about this research direction, e.g. concerns that it’s not very neglected, feels suspiciously similar to commercialization, might cause harm by exacerbating AI race dynamics, or is dominated by another type of work ([more](#Objections_and_responses)).
* Briefly discuss where I think some AI alignment researchers currently stand on this work ([more](#Current_state_of_opinion_on_this_work)).
* Summarize takeaways and possible next steps for readers ([more](#Takeaways_and_possible_next_steps)).
There aren’t a large number of roles where someone could do this right now, but if aligning narrowly superhuman models is a good idea, *and* we can build a community consensus around it being a good idea, I think we have a good shot at creating a number of roles in this space over the coming years (allowing a larger number of people to productively contribute to AI x-risk reduction than would be possible otherwise). To discover whether that’s possible, **I’d appreciate it if people could react with pushback and/or endorsement**, depending on where you’re at.
What aligning narrowly superhuman models could look like
========================================================
I’m a lot less confident about a particular agenda or set of project ideas than I am about the high-level intuition that it seems like we could somehow exploit the fact that today’s models are superhuman in some domains to create (and then analyze and solve) scaled-down versions of the “aligning superintelligent models” problem. I think even the basic framing of the problem has a lot of room to evolve and improve; I’m trying to point people toward something that seems interestingly analogous to the long-run alignment problem rather than nail down a crisp problem statement. With that said, in this section I’ll lay out one vision of what work in this area could look like to provide something concrete to react to.
First of all, it’s important to note that not all narrowly superhuman models are going to be equally interesting as alignment case studies. AlphaGoZero (AGZ) is narrowly superhuman in an extremely strong sense: it not only makes Go moves better than the moves made by top human players, but also probably makes moves that top players couldn’t even reliably *recognize* as good. But there isn’t really an outer alignment problem for Go: a precise, algorithmically-generated training signal (the win/loss signal) is capable of eliciting the “full Go-playing potential” of AGZ given enough training (although at a certain scale [inner alignment issues](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB?_ga=2.217118480.783633844.1614643626-600859042.1540158534) may crop up). I think we should be focusing on cases where both inner and outer alignment are live issues.
The case studies which seem interesting are models which have the potential to be superhuman at a task (like “giving health advice”) for which we have no simple algorithmic-generated or hard-coded training signal that’s adequate (which I’ll call “**fuzzy tasks**”). The natural thing to do is to try to train the model on a fuzzy task using human demonstrations or human feedback -- but if (like AGZ) the model actually has the capacity to improve on what humans can demonstrate or even reliably recognize, it’s not immediately obvious how to elicit its “full potential.”
Here’s an attempt at one potential “project-generation formula”, where I try to spell out connections to what I see as the main traditional sub-problems within academic AI alignment research:
>
> Choose a helpful “fuzzy” task (e.g. summarization, question-answering, advice-giving, story-writing) for which we have suggestive evidence that makes us suspect a state-of-the-art model has the capacity to significantly outperform some reference set of humans (e.g. Mechanical Turk workers) given the right training signal. Then,
>
>
> 1. **Reward learning:** Find a training procedure that allows those reference humans to train the model to do the fuzzy task better than they could do it (and ideally, better than they could even recognize or verify unaided). This procedure shouldn’t rely on the researchers’ own understanding of the particular domain in a way that wouldn’t generalize across domains.
> 2. **Scalability and competitiveness:** Argue or empirically demonstrate that the human oversight work wouldn't have to scale up much if the model were 10x or 100x bigger, or each instance of the task took 10x or 100x longer to demonstrate or evaluate.
> 3. **Interpretability and robustness:** Once you’ve done this, try to understand its behavior and stamp out whatever pathologies (e.g. lying, going off the rails) may have cropped up.[[5]](#fn-53xERFEpZFqZCf9Dj-5)
>
>
>
This is just one type of project you could do in this space. The larger motivating question here is something like, “It looks like at least some existing models, in at least some domains, ‘have the ability’ to exceed at least some humans in a fuzzy domain, but it’s not obvious how to ‘draw it out’ and how to tell if they are ‘doing the best they can to help.’ What do we do about that?”
I don’t think the project-generation formula I laid out above will turn out to be the best/most productive formulation of the work in the end; I’m just trying to get the ball rolling with something that seems concrete and tractable right now. As one example, the project-generation formula above is putting reward learning / “outer alignment” front and center, and I could imagine other fruitful types of projects that put “inner alignment” issues front and center.
Existing work in this area
--------------------------
This kind of work only became possible to do extremely recently, and mostly only in industry AI labs; I’m not aware of a paper that follows all three steps above completely. But [“Learning to summarize from human feedback”](https://arxiv.org/abs/2009.01325) (Stiennon et al., 2020) accomplishes the easier version of 1 and a bit of 2 and 3. The authors chose the fuzzy task of summarizing Reddit posts; there was an existing corpus of human demonstrations (summaries of posts written by the posters themselves, beginning with “TL;DR”):
1. **Reward learning:** Ultimately, the quality of summaries generated by a large language model fine-tuned with RL from human feedback exceeded the quality of the Reddit summaries (i.e. it exceeded what some set of reference humans generated). But it didn’t really exceed what the human workers could *evaluate* -- except in the fairly straightforward (but IMO meaningful) sense that the authors figured out quality control procedures, human rating aggregation algorithms, easier framings of the question, training and feedback for workers, etc that allowed them to get better performance than they would have gotten using the most naive implementation of “train on human ratings.”
2. **Scalability:** I don’t think the paper makes explicit arguments about scalability, but the method is very domain-general and could plausibly work for significantly harder tasks, especially combined with decomposition (and I’d like to see that systematically attempted).
3. **Interpretability and robustness:** The paper doesn’t dig deep into interpretability, reliability, and pathological behavior, but it does demonstrate that optimizing the reward model (learned from human judgments) “too hard” leads to weird pathological summaries that are repetitive, offensive, etc., and addresses this by applying a penalty for diverging too far from the human demonstration distribution.
What kinds of projects do and don’t “count”
-------------------------------------------
In the high-level description of this research area, I’ve aimed to be as broad as possible while picking out the thing that seems interestingly different from [other research in alignment right now](#Advantages_over_other_genres_of_alignment_research) (i.e. the focus on narrowly superhuman models). But given such a broad description, it can be confusing what does and doesn’t count as satisfying it. Would self-driving cars count? Would [MuseNet](https://openai.com/blog/musenet/) count? Would just training GPT-4 count?
Firstly, I don’t think whether a project “counts” is binary -- in some sense, all I’m saying is “Find a model today such that it seems as non-obvious as possible how to align it, then try to align it.” The more obvious the training signal is, the less a project “counts.” But here are some heuristics to help pick out the work that currently feels most central and helpful to me:
* **You should probably be fine-tuning an existing large model:** I don’t think we should be guessing what size models could have the potential to be narrowly superhuman in some domain; I think an alignment project should probably be inspired by noticing that an existing model seems to have some “knowledge” or “skill” that’s it not adequately harnessing because it doesn’t “want to”, as in the example with GPT-3 and health advice above.[[6]](#fn-53xERFEpZFqZCf9Dj-6) I would guess the base model you start with should be >>1B parameters, and the larger the better -- this is because the larger the model is, the more likely it is to have the capacity to be superhuman in an interesting, challenging domain. Less confidently, I would guess that you probably want to be fine-tuning a generative model like GPT-3 or MuseNet (as opposed to a supervised learning model like an image classifier or an RL model like AlphaGoZero or AlphaStar), because those models seem closest to being able to do “interesting real-world tasks” better than some humans can.
* **If you’re making the model larger, it doesn’t count:** I see the point of this work as “realizing the potential of existing state-of-the-art models in fuzzy domains”, rather than pushing forward the state-of-the-art in models’ raw potential. Note that this doesn’t mean I think scaling up models is always bad -- I definitely see risks there, but also potential benefits depending on who does it and how (e.g. new large models can also create new opportunities to do empirical alignment research like this). I think the question of the sign of scaling work is pretty complicated and situation-dependent. I just want to clearly distinguish between the projects of “aligning narrowly superhuman models” and “scaling models up to make them (more) superhuman”, and make it clear that someone could participate in one without participating in the other. So, for example, training GPT-4 would not count as aligning a narrowly superhuman model.[[7]](#fn-53xERFEpZFqZCf9Dj-7)
* **If you’re not dealing with humans, it probably doesn’t count:** I think that if you can get the model to achieve superhuman performance at some task without collecting any human feedback or human demonstrations, the task is probably not “fuzzy” enough. It shouldn’t be easy for humans to just write down an algorithm specifying what they want, and there shouldn’t be an existing dataset that just demonstrates what they want. In practice, I also don’t think human demonstrations alone will cut it (unless they are cleverly combined with an amplification-like scheme or somehow augmented or assisted); RL from human feedback will probably be necessary. My guess is that self-driving cars mostly fail on these grounds -- in a lot of self-driving car companies, only the recognition of objects in a scene is done with large neural nets, and those are trained almost entirely from labeled datasets.[[8]](#fn-53xERFEpZFqZCf9Dj-8) To the extent that large models are used for the actual driving policy (which they usually aren’t), relatively simple/algorithmic training signals like “how far is the car from other cars”, “how centered is it in the lane”, “how smooth is its acceleration”, etc seem probably adequate to elicit human-level or superhuman driving ability without bringing in feedback from human judgments.
* **If you didn’t make the model genuinely useful, it probably doesn’t count:** I think we should generally be choosing complex, multi-dimensional real-world tasks where there is a lot of room to improve on typical humans’ actions and/or judgments -- giving advice, summarizing research, coding, writing emails, translation, telling stories, etc. In the end, these models should feel impressive and valuable -- they generally wouldn’t constitute a commercial product on their own because commercial products are rarely “clean” or “pure ML”, but should ideally have the potential to become a product with some design and engineering work. If the selected task was not valuable or at least inherently interesting, I would guess that the alignment problem wasn’t hard enough and much of the [benefits of “practicing on something similar to the real deal”](#Advantages_over_other_genres_of_alignment_research) would be reduced. Note however that **“genuinely useful” doesn’t mean optimized for usefulness alone** -- I expect this research will not look like the shortest path to creating a valuable product (e.g. by construction the [approach I propose below](#Potential_near_future_projects___sandwiching_) makes it much harder than it has to be if you just want to train a model to be useful somehow). See [this objection and response](#Doesn_t_this_feel_suspiciously_close_to_just_profit_maximizing_) for more detail.
I think some projects that don’t fit all these criteria will also constitute useful progress on aligning narrowly superhuman models, but they don’t feel like central examples of what I’m trying to point at.
Potential near-future projects: “sandwiching”
---------------------------------------------
I think a basic formula that could take this work a step beyond Stiennon et al, 2020 is a) “sandwich” the model in between one set of humans which is less capable than it and another set of humans which is more capable than it at the fuzzy task in question, and b) figure out how to help the less-capable set of humans reproduce the judgments of the more-capable set of humans. For example,
* First fine-tune a coding model to write short functions solving simple puzzles using demonstrations and feedback collected from expert software engineers. Then try to match this performance using some process that can be implemented by people who don’t know how to code and/or couldn’t solve the puzzles themselves.
* First fine-tune a model to answer long-form questions in a domain (e.g. economics or physics) using demonstrations and feedback collected from experts in the domain. Then try to match this performance using some process that can be implemented by people who know very little about the domain.
* First fine-tune a model to translate between English and French using demonstrations and feedback collected from people who are fluent in both languages. Then try to match this performance using some process that can be implemented by people who are fluent in one language and barely know the other (or don’t know it at all and only have a dictionary). Something similar was done in [Lample et al., 2018](https://openreview.net/pdf?id=rkYTTf-AZ), although they didn’t use human feedback.
In all of these cases, my guess is that the way to get the less-capable group of humans to provide training signals of a similar quality to the more-capable group will involve some combination of:
* Training models to help the humans form better judgments (for example, training models to explain the meaning of technical terms or to fetch and summarize relevant papers for humans).
* Breaking down the problem and splitting it up among many humans (as in [Humans Consulting HCH](https://ai-alignment.com/humans-consulting-hch-f893f6051455)).
* Getting models to explain why they’re doing what they’re doing in simpler terms that connect to things the human overseers understand (this feels like it could fit under debate or interpretability).
* Figuring out how to train the human workers, and how to separate their good judgments from noise / mistakes.
It may not yet be possible to do these more ambitious projects (for example, because models may not be powerful enough yet to train them to meaningfully help human evaluators, engage in debates, meaningfully exceed what humans can recognize / verify, etc). In that case, I think it would still be fairly valuable to keep doing human feedback projects like Steinnon et al., 2020 and stay on the lookout for opportunities to push models past human evaluations; state-of-the-art models are rapidly increasing in size and it may become possible within a couple of years even if it’s not quite possible now.
Importantly, I think people could make meaningful progress on aligning narrowly superhuman models using existing models without scaling them up any further, even if they are only superhuman with respect to human demonstrations for now -- there’s a lot we don’t know even just about how to do RL from human feedback optimally. And in the near future I expect it will be possible to use the larger models which will likely be trained to do even more interesting projects, which have the potential to exceed human evaluations in some domains.
(For more speculative thoughts on how we might go beyond “sandwiching”, see [the appendix](#Appendix__beyond_sandwiching_).)
How this work could reduce long-term AI x-risk
==============================================
On the outside view, I think we should be quite excited about opportunities to get experience with the sort of thing we want to eventually be good at (aligning models that are smarter than humans). In general, it seems to me like building and iterating on prototypes is a huge part of how R&D progress is made in engineering fields, and it would be exciting if AI alignment could move in that direction.
If there are a large number of well-motivated researchers pushing forward on making narrowly superhuman models as helpful as possible, we improve the odds that we first encounter serious problems [like the treacherous turn](#How_would_this_address_treachery_by_a_superintelligence_) in a context where a) models are not smart enough to cause actually catastrophic harm yet, and b) researchers have the time and inclination to really study them and figure out how to solve them well rather than being in a mode of scrambling to put out fires and watching their backs for competitors. Holistically, this seems like a much safer situation to be in than one where the world has essentially procrastinated on figuring out how to align systems to fuzzy goals, doing only the minimum necessary to produce commercial products.
This basic outside view consideration is a big part of why I’m excited about the research area, but I also have some more specific thoughts about how it could help. Here are three somewhat more specific paths for working on aligning narrowly superhuman models today to meaningfully reduce long-term x-risk from advanced AI:
* **Practical know-how and infrastructure:** It seems likely that a successful long-run approach to (machine learning-based) alignment will involve somehow learning from human demonstrations and/or feedback as a key component, and also pretty likely that it will involve somehow using ML tools to help go beyond raw human judgment. I’d guess that a number of low level details about *how* ideas like “RL from human feedback” and “ML aiding human judgments” are implemented will make a difference to how successful the approach is: things like which human judges are selected, how well they are trained and how much practice they have, what exact types of questions are used to elicit the judgments, what judgment aggregation and quality assurance procedures are used, whether there are good off-the-shelf ML solutions for enhancing human judgments in certain ways, whether there are easy-to-use platforms that let researchers gather good human feedback at the push of a button, etc. Aligning narrowly superhuman models today could help build up tools, infrastructure, best practices, and tricks of the trade. I expect most of this will eventually be developed anyway, but speeding it up and improving its quality could still be quite valuable, especially in short timelines worlds where there's a lot less time for things to take their natural course.
* **Better AI situation in the run-up to superintelligence:** If at each stage of ML capabilities progress we have made sure to realize models’ full potential to be helpful to us in fuzzy domains, we will be going into the next stage with maximally-capable assistants to help us navigate a potentially increasingly crazy world. We’ll be more likely to get trustworthy forecasts, policy advice, research assistance, and so on from our AI assistants. Medium-term AI challenges like supercharged fake news / clickbait or AI embezzlement seem like they would be less severe. People who are pursuing more easily-measurable goals like clicks or money seem like they would have less of an advantage over people pursuing hard-to-measure goals like scientific research (including AI alignment research itself). All this seems like it would make the world safer on the eve of transformative AI or AGI, and give humans more powerful and reliable tools for dealing with the TAI / AGI transition.[[9]](#fn-53xERFEpZFqZCf9Dj-9)
* **Chance of discovering or verifying long-term solution(s):** I’m not sure whether a “one shot” solution to alignment (that is, a single relatively “clean” algorithm which will work at all scales including for highly superintelligent models) is possible. But if it is, it seems like starting to do a lot of work on aligning narrowly superhuman models probably allows us to discover the right solution sooner than we otherwise would have. For one thing, people doing this work could test proposals (such as [Iterated Distillation and Amplification](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616)) coming from more conceptual researchers, verifying or falsifying elements and proposing modifications informed by empirical understanding. It also seems plausible that a solution will emerge directly from this line of work rather than the conceptual work -- the latter is mostly focused on finding a one-shot solution *that will work under ~pessimal empirical assumptions,*[[10]](#fn-53xERFEpZFqZCf9Dj-10) but it seems very plausible that a) it’s impossible to find a one-shot solution that works under worst-case empirical assumptions, but b) it’s possible to find one that works given the actual ways that models tend to learn or generalize. More broadly, **“doing empirical science on the alignment problem”** -- i.e. systematically studying what the main problem(s) are, how hard they are, what approaches are viable and how they scale, etc -- could help us discover a number of different avenues for reducing long-run AI x-risk that we aren’t currently thinking of, one-shot technical solutions or otherwise.
I think both the broad outside view and these specific object-level benefits make a pretty compelling case that this research would be valuable on the object level. Additionally, from a “meta-EA” / “community building” perspective, I think pioneering this work could boost the careers and influence of people concerned with x-risk because it has the potential to produce conventionally-impressive results and demos. My main focus is the case that this work is valuable on the merits and I wouldn’t support it purely as a career-boosting tool for aligned people, but I think this is a real and significant consideration that can tip the scales.
Advantages over other genres of alignment research
==================================================
First, I’ll lay out what seem like the three common genres of alignment research:
* **Conceptual research:** This is pen-and-paper thinking that often looks like a combination of math and philosophy, which is usually aiming to make progress toward a “one shot” solution (and also often involves a lot of [disentangling](https://forum.effectivealtruism.org/posts/RCvetzfDnBNFX7pLH/personal-thoughts-on-careers-in-ai-policy-and-strategy#Disentanglement_research_is_needed_to_advance_AI_strategy_research__and_is_extremely_difficult) and framing what the problem even is). The most prominent examples are [MIRI’s work](https://intelligence.org/research/) and [Paul Christiano’s work](https://ai-alignment.com/); a number of other posts on the Alignment Forum also fit in this category.
* **Gridworlds and games:** This work aims to demonstrate alignment problems such as wireheading or other reward hacking in a relatively small-scale artificial setting such as a simple game, and usually to solve the demonstrated problem(s) in the small-scale setting in a way that could shed light on how to solve larger-scale alignment problems. Two examples are [REALab (Kumar et al., 2020)](https://arxiv.org/abs/2011.08820) and [Inverse Reward Design (Hadfield-Mennell et al., 2017)](https://arxiv.org/abs/1711.02827).
* **Mainstream ML safety:** This is alignment-relevant work that existing ML researchers were independently working on; most of it fits under “reliability+robustness” or “interpretability.” This work is usually done on fairly large (though not always state-of-the-art) neural networks, but doesn’t usually pay special attention to the case where models are more capable or knowledgeable than humans. Some examples are the [OpenAI microscope](https://openai.com/blog/microscope/) (interpretability), [Dathathri et al., 2020](https://arxiv.org/abs/2010.11645) (robustness and reliability), and the [Unrestricted Adversarial Examples Challenge](https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html) (robustness and reliability).
I’m broadly supportive of all three of these other lines of work, but I’m excited about the potential for the new approach described in this post to “practice the thing we eventually want to be good at.” I think on the outside view we should expect that doing whatever we can find that comes closest to practicing what we eventually want to do will be good in a number of ways (e.g. feeling and looking more “real”, encouraging good habits of thought and imposing helpful discipline, etc).
More specifically, here are some advantages that it feels like “aligning narrowly superhuman models” line of work has over each of the other three genres:
* Compared to conceptual research, I’d guess aligning narrowly superhuman models will feel meatier and more tractable to a number of people. It also seems like it would be easier for funders and peers to evaluate whether particular papers constitute progress, which would probably help create a healthier and more focused field where people are broadly more on the same page and junior researchers can get stronger mentorship. Related to both of these, I think it provides an easier opportunity for people who care about long-run x-risk to produce results that are persuasive and impressive to the broader ML community, as I mentioned above.
* Compared to gridworlds and games, I think this work stands a greater chance of scaling up to more capable systems -- I think it would probably provide some good discipline to do alignment work at a scale that’s large enough that it’s already kind of unwieldy, where models are already more capable than their overseers in some real-world-relevant ways, and researchers are forced to confront messy details and hard-to-foresee structural issues. When it’s possible to demonstrate an issue at scale, I think that’s usually a pretty clear win.
* Compared to mainstream ML safety, aligning narrowly superhuman models has some of the “discipline” advantages mentioned above of focusing on situations where models are more capable than humans. Additionally, lots of researchers work on interpretability and robustness for lots of different reasons, meaning the specific research priorities and “tastes” of the broader interpretability and robustness fields won’t be particularly optimized for reducing long-run x-risk. This can make it harder for newer researchers motivated primarily by x-risk to zoom in on the most x-risk-relevant subproblems and get adequate mentorship on that; aligning narrowly superhuman models has the potential to be more x-risk-oriented from the start.
Finally and maybe most importantly, I think aligning narrowly superhuman models has **high long-run field growth potential** compared to these other genres of work. Just focusing on GPT-3, there are already a *lot* of different fuzzy goals we could try to align it to, and the number of opportunities will only grow as the ML industry grows and the number and size of the largest models grow. This work seems like it could absorb a constant fraction (e.g. 1% or 5%) of all the ML activity -- the more models are trained and the mode capable they are, the more opportunity there is to align narrowly superhuman models to ever more tasks.
I think we have a shot at eventually supplying a lot of people to work on it too. In the long run, I think more EAs could be in a position to contribute to this type of work than to either conceptual research or mainstream ML safety.[[11]](#fn-53xERFEpZFqZCf9Dj-11) Conceptual research is often foggy and extremely difficult to make progress on without a particular kind of inspiration and/or hard-to-define “taste”; mainstream ML safety is often quite technical and mathematically dense (and ensuring the work stays relevant to long-run x-risk may be difficult).
A lot of work involved in aligning narrowly superhuman models, on the other hand, seems like it’s probably some combination of: a) software engineering and ML engineering, b) dealing with human contractors, and c) common sense problem-solving. Lead researchers may need to bring taste and research judgment to ensure that the work is well-targeted, but a number of people could work under one lead researcher doing tractable day-to-day work with reasonably good feedback loops. If there were institutional homes available to onboard people onto this work, I think a strong generalist EA with a software engineering background could plausibly retrain in ML engineering over 6-12 months and start contributing to projects in the space.
Right now there are only a few organizations that offer roles doing this work and that seems like a big bottleneck, but it could make sense to prioritize creating more institutional homes and/or rapidly expanding the ones that exist.
Objections and responses
========================
In this section I’ve tried to anticipate some potential objections, and give my responses; I’d suggest skipping around and reading only the ones that interest you. I don’t think that I have knock-down answers to all of these objections, but I do remain holistically excited about this idea after reflecting on them some.
How would this address treachery by a superintelligence?
--------------------------------------------------------
***Elaboration of objection:** It seems like there is a “hard core” of the alignment problem that only crops up when models are very smart in a very general way, not just e.g. better than MTurkers at giving medical advice. The specific scariest problem seems to be the “treacherous turn”: the possibility that the model will appear to be helpful during training time even though it’s actually power-seeking because it’s aware that it’s being trained and has to act helpful to survive, and later cause catastrophic harm once it knows it’s out of the training setup. It doesn’t seem like the “aligning narrowly superhuman models” style of work will figure out a way to address the treacherous turn until it’s likely too late.*
I'm very uncertain how relevant the near-term work will turn out to be for more exotic problems like the treacherous turn, and I want to think more about ways to nudge it to be more relevant.[[12]](#fn-53xERFEpZFqZCf9Dj-12) I would be very excited to find empirical research projects on large models that specifically shed light on the treacherous turn possibility, and I agree it’s a weakness of my [set of potential projects](#Potential_near_future_projects___sandwiching_) that they aren’t specifically optimized for unearthing and correcting treachery.
With that said, I don’t think there are currently genres of work that feel similarly tractable and scalable that *do* tackle the treacherous turn head on -- of the [main genres of alignment work](#Advantages_over_other_genres_of_alignment_research), I’d argue that only a subset of the conceptual work is aiming to directly generate a long-term solution to treachery, and I think the jury is very much out on whether it will be fruitful; gridworlds and games and mainstream ML safety largely don’t seem to try for a long-term treacherous turn solution. So I think the *relative* hit that my proposal takes due to this consideration is fairly limited.[[13]](#fn-53xERFEpZFqZCf9Dj-13)
Even if they don’t start off tackling the treacherous turn, I’d guess that researchers would have a decent shot at learning useful things about treachery down the line if they were pursuing this work. Basically, I think it’s pretty likely that full-blown treachery will be preceded by mini-treachery, and with better understanding of how neural networks tend to learn and generalize, researchers may be able to specifically seek out domains where mini-treachery is especially likely to occur to better study it. Even if techniques used by empirical researchers don’t work out of the box for the treacherous turn, empirical work eliciting and studying mini-treachery could still inform what kind of theoretical or conceptual work needs to be done to address it, in a way that seems more promising to me than eliciting micro-treachery in gridworlds and games.
Moreover, even though the treacherous turn seems like the scariest single source of risk, I don’t think it totally dominates the overall expected AI risk -- a significant fraction of the risk still seems to come from more “mundane” outer alignment failures and various unforced errors, which this empirical work seems better-placed to address. Of the [three broad ways I listed that this work could reduce x-risk](#How_this_work_could_reduce_long_term_AI_x_risk), the critique that it doesn’t seem to address the treacherous turn very well applies most to the “Chance of discovering or verifying long-term solution(s)” category; even if it fails to address the treacherous turn, it still seems that “Practical know-how and infrastructure” and “Better AI situation in the run-up to superintelligence” matter.
Doesn’t this feel suspiciously close to just profit-maximizing?
---------------------------------------------------------------
***Elaboration of objection:** It sort of sounds like you’re just telling EAs to make AI really useful to humans (and indeed push models to be superhuman if they can be); it feels like this would also be what someone who is into pure profit-maximization would be excited about, and that makes me suspicious about the reasoning here and nervous about calling it an alignment activity. Even if you’re right that it helps with alignment, we might see a lot of people flock to it for the wrong reasons.*
I agree that there is overlap with commercial incentives, but I think there are three high-level ways that this type of work would be different from what you’d do if you were profit-maximizing:
* **Not making models bigger:** This work doesn’t involve making models bigger; it involves making models of a given fixed size more helpful. In a commercial setting, often a cost-effective way of improving results would be to simply scale the model up.
* **Seeking difficult rather than easy problems:** The problem selection is different -- other things being equal, in a commercial setting you want to select the *easiest* possible tasks; in this type of work, people would select *interestingly difficult* tasks. For example, commercial incentives would push someone to focus on precisely those tasks where simply meeting (rather than exceeding) the human imitation benchmark is sufficient for being profitable. Profit-motivated people would also likely seek tasks where algorithmically generated or hard-coded reward signals would go a long way (for example, in robotics you might be able to get away with providing algorithmically generated feedback about whether the robot’s actuators ended up in the right place). The [sandwiching approach](#Potential_near_future_projects___sandwiching_) I propose above is by construction making things much harder than they need to be from a pure commercial standpoint: it involves refusing to use the “best human overseers for the job” in favor of trying to figure out how to help less-capable overseers provide an adequate training signal.
* **Seeking domain-general and scalable techniques:** There is a focus on scalability and generality of techniques that goes well beyond what would be commercially optimal. In commercial settings, I expect that people will make heavy use of hard-coded behaviors and “hacks” which fully exploit domain knowledge (as is the case with self-driving cars). Additionally, there is often a “right size model for the job” in commercial settings (image models only need to be so big to adequately power self-driving car perception), and there will often not be much incentive to find techniques that also work well for a model 100x bigger. A “clean”, domain-general, and scalable technique is rarely what will make the most profit at the current moment.
More broadly, I think successful versions of this type of alignment work should get someone who deeply understands ML and its limitations to say something like, "Wow, it's cool that you got the model to do that." My sense is that most commercial projects wouldn’t really elicit this reaction, and would look more like applying a lot of hard work to realize an outcome that wasn’t very much in doubt.
Given these differences, I think there’s a good shot at distinguishing this type of work from pure profit-seeking and cultivating a community where a) most people doing this work are doing it for altruistic reasons, and b) this is reasonably legible to onlookers, funders, potential junior researchers, etc.
Isn’t this not neglected because lots of people want useful AI?
---------------------------------------------------------------
***Elaboration of objection:** Even if this is useful for alignment, and even adjusting for the fact that companies aren’t focusing on the version that’s specifically alignment-optimized, won’t a ton of this work get done in AI labs and startups? Doesn’t that mean that the EA community is less likely to make an impact on the margin than in other, less-commercially-incentivized types of alignment work?*
I do think there’s probably some work happening broadly along these lines from a commercial motivation, and there will probably be significantly more in the future. But I pretty strongly suspect that there are very few, if any, projects like the ones [I proposed above](#Potential_near_future_projects___sandwiching_) currently being done in a commercial setting, and what work is being done is less well-targeted at reducing long-run x-risk than it could be.
The vast majority of commercial work going into AI by dollars is a) hyper application-specific and hard-coding intensive such as self-driving cars, or b) focused on scaling big generic models. I don’t actually think the resources going into any sort of project focused on human demonstrations and feedback is very large right now; I’d guess it’s within an order of magnitude of the resources going into other alignment work (e.g. $100s of millions per year at the high-end, where other alignment research absorbs $10s of millions per year). And for the reasons outlined [above](#Doesn_t_this_feel_suspiciously_close_to_just_profit_maximizing_), not a lot of this will be focused on exceeding humans using scalable, domain-general techniques.
As an example to illustrate the relative neglectedness of this work, it was Paul Christiano (motivated by long-term alignment risk concerns) who led the the [Stiennon et al., 2020](https://arxiv.org/abs/2009.01325) work, and I think it’s reasonably likely that if he hadn’t done so there wouldn’t have been a human feedback paper of similar scale and quality for another year or so. I’d guess the EA community collectively has the opportunity to substantially increase how much of this work is done before transformative AI with a strong push, especially because the “going beyond human feedback” step seems less commercially incentivized than the Stiennon et al. work.
Some additional thoughts on neglectedness:
* I think that it matters who is doing this work and why, not just that the work gets done somehow. It seems significantly better to have someone working on these problems who is self-awarely doing it to help with long-run x-risk reduction, and who is plugged into the broader alignment community, than someone who just happens to be doing work that might be relevant to alignment. It’s valuable to be collaborating with and getting feedback from more theoretical alignment researchers, and to be mentally on the lookout for ways to make the work more analogous to the long-run challenge; a generic ML engineer working on human feedback to improve the newsfeed at Facebook would be much less likely to continue to keep focusing on long-run-relevant questions for their whole career.[[14]](#fn-53xERFEpZFqZCf9Dj-14) (And one of the value propositions here is that the long-termists / AI alignment people, as a community, should be gathering this experience, so experience that’s less accessible to the community is less valuable.)
* I think that for most people,[[15]](#fn-53xERFEpZFqZCf9Dj-15) the value (roughly speaking, the importance multiplied by the tractability) of doing marginal work in an area as a function of its crowdedness is often an upside-down U-shape rather than strictly decreasing. When there’s practically no one in an area, there’s no one who can mentor you when you’re getting started, no one who you can hire when you’re experienced, and there’s no built-in audience who can be swayed by your demonstrations or arguments and can act on that. My personal intuition is that for empirical alignment work, we’re near the increasing returns part of this curve (though this situation can change rapidly). There’s an existing group of people who have an incentive to work on something in this space and may ramp up soon, but I think EAs have a chance to set the tone and agenda for what exactly the work they do looks like, and what standards it should be held to. I could imagine a pretty broad range of outcomes for how much ML engineers working on productizing hold themselves to the standard of finding domain-general and scalable solutions, and I could imagine EAs having an impact on that culture.
Will this cause harm by increasing investment in scaling AI?
------------------------------------------------------------
***Elaboration of objection:** Even if the people doing this research don’t personally scale up models and focus on generalizable and scalable solutions to making models helpful, they will be demonstrating that the models have powerful and useful capabilities that people might not have appreciated before, and could inspire people to pour more investment into simply scaling up AI or making AI useful in much less principled ways, which could cause harm that exceeds the benefits of the research.*
This is a very contentious question and people have a wide range of intuitions on it. I tend to be less bothered by this type of concern than a lot of other people in the community across the board. At a high-level, my take is that:
* We’re in the middle of an AI investment boom that I expect to be sustained for several more years.
* The amount of effort going into AI as a whole ($10s of billions per year) is currently ~2 orders of magnitude larger than the amount of effort going into the kind of empirical alignment I’m proposing here, and at least in the short-term (given excitement about scaling), I expect it to grow faster than investment into the alignment work.
* This means an additional dollar of effort going into the empirical language models alignment work would need to generate ~$100 or more of investment into accelerating AI to have a proportionally large impact on accelerating AI as a whole, in a climate where investors are already excited and AI labs are already trying hard to make them more excited. This isn’t out of the question, but doesn’t seem likely to me, especially given that EAs would likely be partially displacing people who would do similar work from a pure profit motivation, and that we could try to consciously shape messaging to further reduce the expected impact on AI hype. (In general, it’s hard to get a factor of 100 leverage on your spending even if you’re optimizing for it.)
* It also seems plausible that there are positive side effects on others’ investment, such as directing marginal money away from making models larger and toward fine-tuning models to be helpful.
* Finally, I am not personally fully convinced that speeding up AI as a whole would be net negative (it seems like timing interacts in extremely complicated ways with who is in power and what the global situation is like around the time of transformative AI), which claws back some of the expected damage from acceleration.
With that said, I do think that exciting demos are a lot more likely to spur investment than written arguments, and this kind of research could generate exciting demos. Overall, the case for caution feels stronger to me than the case for caution about discussing arguments about timelines and takeoff speeds, and this consideration probably net claws back some enthusiasm I have for the proposal (largely out of deference to others).
Why not just stick with getting models not to do bad things?
------------------------------------------------------------
***Elaboration of objection:** Even if this is useful for alignment, worth doing on the margin, and not net-harmful, it seems like it would be dominated by doing practical/near-term work that’s more clearly and legibly connected to safety and harm-reduction, like “getting models to never lie” or “getting models to never use racist slurs” or “getting models to never confidently misclassify something.” That work seems more neglected and more relevant.*
Some people might feel like “avoiding bad behaviors” is clearly the subset of near-term empirical alignment work which is most relevant to long-run alignment and neglected by profit-seeking actors -- after all, in the long run we’re trying to avoid a big catastrophe from misaligned AI, so in the short run we should try to avoid smaller catastrophes.
I disagree with this: I think both “getting models to be helpful and surpass human trainers” and “getting models to never do certain bad things” are valuable lines of empirical alignment work, and I’d like to see more of both. But I don’t think reliability and robustness has a special place in terms of relevance to long-run x-risk reduction, and if anything it seems somewhat less exciting on the margin. This is because:
* Most versions of “make a model more reliable” don’t really get at scalability to tasks/domains that are more challenging for humans to supervise, and it seems especially valuable to specifically target that. It seems very plausible to me that the most interesting challenges that are most analogous to the long-run challenge will only come up when we’re trying to get excellent or superhuman performance out of a model, rather than when we’re trying to avoid certain specific bad things.
* I don’t actually think that reliability work is more neglected than the work of getting models to be helpful in domains that are difficult for humans. There is a significantly larger academic field around reliability and robustness than around alignment, and the reliability/robustness problem is often harder to avoid or sidestep as a company: you can choose domains where human expertise is strong or automated reward signals exist, but you will still need to get your product to meet a fairly high bar of reliability before it is commercially viable.
* Robustness and reliability falls under multiple different “social good” brands. People concerned with “Fairness, Accountability, and Transparency” (FAT) tend to be very interested in the reliability and robustness space, as well as people concerned with e.g. autonomous weapons. Even though there is a worry that [the “make models helpful” work is too easy to confuse with commercialization](#Doesn_t_this_feel_suspiciously_close_to_just_profit_maximizing_), my weak best guess is that it would actually be *harder* to tell which people working in the robustness space are optimizing for reducing long-term x-risk from AI (vs for profit or other altruistic goals), and I’d guess it would be tougher to build a distinctive culture / brand around working on the sub-problems most relevant to long-term risk.
Why not focus on testing a candidate long-term solution?
--------------------------------------------------------
***Elaboration of objection:** This proposal seems like it would lead to a lot of wasted work that isn’t sufficiently optimized for verifying or falsifying a long-term solution to alignment. It would be better if the potential projects were more specifically tied in to testing an existing candidate long-term solution, e.g.* [*Paul Christiano’s agenda*](https://ai-alignment.com/).
I’ll focus on Paul’s agenda in my response, because the specific people I’ve talked to who have this objection mostly focus on it, but I think my basic response will apply to all the conceptual alignment agendas.
Some of the projects under the umbrella of “aligning narrowly superhuman models” seem like they could instead be reframed around specific goals related to Paul’s agenda, like “prototyping and testing [capability amplification](https://ai-alignment.com/policy-amplification-6a70cbee4f34)”, “prototyping and testing [imitative generalization](https://www.alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1)”, “figuring out how [ascription universality](https://ai-alignment.com/towards-formalizing-universality-409ab893a456#:~:text=This%20post%20defines%20the%20concept,any%20particular%20simpler%20computation%20C.) works”, and so on. I do think one of the value propositions of this work is shedding light on these sorts of concepts, but I think it’s probably not helpful to frame the whole endeavor around that:
* Verifying proposed long-term solutions is [only one way that the work could reduce AI x-risk](#How_this_work_could_reduce_long_term_AI_x_risk), and I don’t think it’s overwhelmingly dominant,[[16]](#fn-53xERFEpZFqZCf9Dj-16) especially not if restricted to the set of long-run solutions proposed *so far.* I want people who are committed to reducing long-run AI x-risk but don’t believe in any of the existing conceptual research to be doing this work, too.
* Not a lot of people currently understand the agenda well enough that they could generate good research projects from the prompt of “prototype and test [concept from a Paul blog post].” Similarly, I don’t think funders and peer reviewers understand the agenda well enough to tell if a research project with that goal was helpful.
* Paul’s agenda is in very active development, and I think there’s a reasonable chance the whole plan ends up looking pretty different within a year or two. Given this and the above point, I think empirical work testing specific Paul ideas is best done in close collaboration with him, and I’d guess even someone who believes in Paul’s agenda would often be better off just targeting the slightly looser problem description absent a lot of access to him. This makes me think research under the frame of “test Paul’s agenda” is a lot less scalable than research under the frame of “align narrowly superhuman models.”
There could be some simple organizing goal or “tagline” for empirical alignment research that is *neither* “test [concept from a Paul blog post]” *nor* “align narrowly superhuman models” which would inspire better-targeted research from the perspective of someone who’s bullish on Paul’s work, but the ones I’ve thought about haven’t been convincing,[[17]](#fn-53xERFEpZFqZCf9Dj-17) and I’d guess it’ll be hard to find a good organizing tagline until the theory work gets to a more stable state.
Current state of opinion on this work
=====================================
One of my goals in writing this blog post is to help build some community consensus around the “aligning narrowly superhuman models” proposal if it’s in fact a good idea. To that end, I’ll lay out my current understanding of where various AI alignment researchers stand on this work:
* Paul Christiano spent a few years at OpenAI working on this kind of thing (as I mentioned above he was the team lead on the Stiennon et al., 2020 paper) and generally thinks it’s important -- he feels the conceptual work he’s currently doing beats it as a use of his own time, but believes that this kind of work is among the best *highly scalable* types of alignment research.
* Alignment researchers I’ve spoken to that primarily do research on large neural networks (unlike Paul, who does a mixture of this and conceptual thinking) tend to be more enthusiastically positive on this and more likely to consider it the best kind of work they personally could do. They also tend to be more positive on even more “no holds barred” versions of this idea -- i.e., just trying to make helpful models without focusing in particular on ideas like “sandwiching.”
* My understanding of Eliezer Yudkowsky’s position is one of “cautious relative optimism” about something in this general space compared to other non-MIRI alignment work, though he would frame the core concern differently, with more emphasis on understandability of models’ answers and decisions (e.g. “GPT-3 has somewhere buried inside it knowledge of what to do when you’re sick; how do you extract all of that and how can you tell when you’ve succeeded?”). He was [reasonably positive](https://twitter.com/ESYudkowsky/status/1301954347933208578) on Stiennon et al., 2020 when it came out, and would be happy to see more work like that. Evan Hubinger’s position seems broadly similar (he is specifically interested in [ascription universality](https://ai-alignment.com/towards-formalizing-universality-409ab893a456#:~:text=This%20post%20defines%20the%20concept,any%20particular%20simpler%20computation%20C.)). I’m not sure where others at MIRI would land on this work.
* My sense is that people who do conceptual thinking work other than Paul and MIRI tend to have a position similar to or somewhat more optimistic than Eliezer’s or Evan’s. E.g. I think Rohin Shah feels that aligning narrowly superhuman models is a reasonably good baseline for what research to do (and is developing a [benchmark](https://docs.google.com/document/d/18MEmQ4aA1zdZHBKG5fLeISYoFAv1IfUozWZ5JYnq-bY/edit#) related to this), but he has privileged insight that beats that baseline. My rough sense is that other researchers doing conceptual thinking are on average somewhat less excited about aligning narrowly superhuman models than Paul is, and a lot less excited than the pure ML alignment researchers, but I’m not sure.
I also think a number of AI alignment researchers (and EAs working in AI risk more broadly) simply haven’t thought a lot about this kind of work because it hasn’t really been possible until the last couple of years. Until 2019 or so, there weren’t really any models accessible to researchers which could exceed human performance in fuzzy domains, and research agendas in AI alignment were largely formed before this was an option.
Takeaways and possible next steps
=================================
I’ve laid out the hypothesis that aligning narrowly superhuman models would concretely reduce x-risk and has high long-run field growth potential (i.e., lots of people who don’t have particularly esoteric skills could eventually help with it). I think if the EA and AI alignment community is in broad agreement about this, there’s potential to make a lot happen.
In terms of immediate actionable takeaways:
* **If you disagree with this argument, say so** -- especially if you [think it would be harmful](#Will_this_cause_harm_by_increasing_investment_in_scaling_AI_) or would be dominated by a different line of work that shares [similar practical advantages](#Advantages_over_other_genres_of_alignment_research) of tangibility, good feedback loops, and potential-for-scale.
* If you have more or better [project ideas](#Potential_near_future_projects___sandwiching_) in mind, say so -- especially if you have ideas about [how to target “treacherous turn” dynamics](#How_would_this_address_treachery_by_a_superintelligence_) more specifically or how to reframe the statement of the problem to make it more productive, well-targeted, etc.
* If you a) already agree with me, *and* b) are already in a good position to fairly immediately make this work happen (e.g. you are a PI at a university lab that is able to fine-tune open-source models like Google’s T5, or you are a senior ML researcher at a tech company with the freedom to do your own projects), then consider doing a project in this space. For example, you could try to solve tasks in this [Minecraft human feedback benchmark](https://docs.google.com/document/d/18MEmQ4aA1zdZHBKG5fLeISYoFAv1IfUozWZ5JYnq-bY/edit#) being developed by some researchers at [CHAI](https://humancompatible.ai/) when it's released. Getting more demos of what it looks like to do this research will help make it easier to think about how valuable it would be and build consensus around it if it is. Most people will *not* be in this position. As I said at the top, **Open Phil is not soliciting grant applications right now** from people who want to try it out -- this blog post is my personal viewpoint, and institutionally we’re still figuring out how much we want to prioritize this (discussion and arguments surrounding this post will feed into that).
* If you agree with this case and might be in a position to work on aligning narrowly superhuman models a few years down the line (e.g. if you are a software engineer or a university student with a technical background), consider keeping this in the back of your mind and checking in about future opportunities. If you are ready to try to switch into this work sooner, there may be jobs available doing this sort of work at various AI labs including [OpenAI](https://jobs.lever.co/openai/98599d5b-2d1d-4127-b9b5-708343c8730b) and [DeepMind](https://deepmind.com/careers),[[18]](#fn-53xERFEpZFqZCf9Dj-18) though I’d guess roles right now are fairly limited.
Looking forward to hearing people’s thoughts!
Appendix: beyond sandwiching?
=============================
Right now, models like GPT-3 are not “superhuman” at fuzzy tasks in the sense that AlphaGoZero is “superhuman” at playing Go. AGZ plays Go better than *any* human, while GPT-3 is only capable of giving better advice or writing better stories than *some* humans, which is what makes the [“sandwiching” tactic](#Potential_near_future_projects___sandwiching_) an option. What happens when language models and other models get narrowly superhuman in a strong sense -- better than all humans in some fuzzy domain, e.g. stock-picking? How would we verify that we got the model to be “doing the best it can do to help” when there’s no reference model trained on a ground truth signal to compare its performance to?
I’m definitely very unsure what this would look like, but an important starting assumption I have is that whatever techniques worked well to get less-capable humans to reproduce the judgments of more-capable humans in a “sandwich” setting stand a good chance of just continuing to work. If we were careful not to actually *use* the expertise of the more-capable set of humans in whatever systems/tools we used to assist/augment the less-capable set, and a similar set of systems/tools seemed to work across multiple domains and for humans at multiple different capability levels, there’s no particular reason to believe they would not continue working once models go from slightly less capable than the best humans to slightly more capable than them at some task. So I think it’s possible we could do most of the R&D in the regime where sandwiching works.
With that said, here are some thoughts about how we could try to probe whether our alignment techniques were actually successful at eliciting a model’s full potential in a regime the model is more capable than the best humans:
* We could attempt the sandwiching technique with augmented or assisted humans at both the top and the bottom. For example, if a model is capable of better stock-picking than the best human hedge fund analysts, we could compare its performance when trained using feedback from top hedge fund analysts vs feedback from normal people -- where *both* the hedge fund analysts *and* the normal people are assisted/augmented using whatever decomposition techniques, question elicitation techniques, ML assistants, etc. are the state of the art. Since the expertise of the individual humans within the system ideally shouldn’t bear on how well they are able to provide supervision, the goal would be to get the “augmented normal people” to produce a model that behaves similarly to the model produced by the “augmented experts.”
* We could try to exploit the fact that it’s usually easier to recognize a good outcome (especially after some time has passed) than to generate good behaviors. For example, we could compare the performance of a stock-price-predicting model trained using feedback from augmented hedge fund analysts to the performance of a model trained with the benefit of hindsight about what stocks actually went up over a one-year period. Ultimately, I think it’s scary if we have to rely entirely on medium- and long-run outcomes to train our superhuman models,[[19]](#fn-53xERFEpZFqZCf9Dj-19) but in an R&D context it could make sense to use a model trained on medium-run outcomes to *check* whether our techniques for augmenting human judgment are managing to elicit the full potential of the model.
* More speculatively, we could build up interpretability tools to probe the model and try to understand why it’s doing what it’s doing to build up confidence that it’s “trying its best” and making reasonable decisions. We could try to work on something like this in the near-term with AlphaGoZero itself -- try to develop interpretability tools that will let pro Go players learn new insights about how to play Go better by probing AGZ. If we develop tools like this and verify them on models like AGZ, top hedge fund analysts could later use them to probe a stock-picking model and develop a better understanding of whether it’s truly “trying its best to pick the right stocks.”
---
1. At least better than *some* salient large group of humans in a particular context, like “Mechanical Turk workers”, “stackoverflow users”, etc. Right now, models are only superhuman with respect to *all* humans in particular crisp domains like games. E.g. AlphaGoZero is better at Go than *any* human; GPT-3 probably has the potential to give better advice than *some* humans. [↩︎](#fnref-53xERFEpZFqZCf9Dj-1)
2. This idea isn’t original to me -- a number of others (especially some people working on long-term AI alignment at OpenAI and DeepMind) have thought along similar lines. My own thinking about this has been informed a lot by discussions with Paul Christiano and Holden Karnofsky. [↩︎](#fnref-53xERFEpZFqZCf9Dj-2)
3. e.g., Mechanical Turk workers who are hired to give feedback to the model [↩︎](#fnref-53xERFEpZFqZCf9Dj-3)
4. Though if we could pull off a path where we build an AI system that is superhuman in certain engineering capabilities but not yet human-level in modeling and manipulating people, and use that system to cut down on x-risk from other AI projects without having to figure out how to supervise arbitrary superhuman models, that could be really good. [↩︎](#fnref-53xERFEpZFqZCf9Dj-4)
5. Note that I don’t think this is the only way to study interpretability and robustness, or even necessarily the best way. In this project-generation formula, the domain and task were optimized to make *reward learning* an especially interesting and important challenge, rather than to make interpretability or robustness especially challenging, interesting, or important. I think it’s good to be complete and to try to ensure interpretability and robustness in these domains, but we should probably also do other lines of research which choose domains / tasks that are specifically optimized for interpretability or robustness, rather than reward learning, to be especially challenging and important. [↩︎](#fnref-53xERFEpZFqZCf9Dj-5)
6. Pragmatically speaking, fine-tuning a large model rather than training from scratch is also orders of magnitude cheaper, and so a lot more accessible to most researchers. [↩︎](#fnref-53xERFEpZFqZCf9Dj-6)
7. Another way of seeing why it wouldn’t count is that “predict the next token” is an extremely non-fuzzy training signal. [↩︎](#fnref-53xERFEpZFqZCf9Dj-7)
8. Human contractors make these labels, but they are not providing feedback. [↩︎](#fnref-53xERFEpZFqZCf9Dj-8)
9. More speculatively, if we’re realizing models’ full potential as we go along, there’s less chance of ending up with what I’ll call an “unforced sudden takeoff”: a situation where on some important set of fuzzy tasks models jump suddenly from being not-that-useful to extraordinarily useful, but this was due to not bothering to figure out how to make models useful for fuzzy tasks rather than any inherent underlying fact about models. I’m not sure how plausible an unforced sudden takeoff is though, and I’m inclined (because of efficient market intuitions) to think the strong version of it is not that likely. H/t Owen Cotton-Barratt for this thought. [↩︎](#fnref-53xERFEpZFqZCf9Dj-9)
10. E.g., that whenever there are two or more generalizations equally consistent with the training data so far, models will never generalize in the way that seems more natural or right to humans. [↩︎](#fnref-53xERFEpZFqZCf9Dj-10)
11. I think eventually gridworlds and games will probably fade away as it becomes more practical to work with larger models instead, and dynamics like the treacherous turn start to show up in messier real-world settings. [↩︎](#fnref-53xERFEpZFqZCf9Dj-11)
12. One idea a couple of others have suggested here and which I’m generally interested in is “transparency in (narrowly superhuman) language models”: finding ways to understand “what models are thinking and why,” especially when they know more about something than humans do. I like this idea but am very unsure about what execution could look like. E.g., would it look like [Chris Olah’s work](https://openai.com/blog/microscope/), which essentially “does neuroscience” on neural networks? Would it look like training models to answer our questions about what they’re thinking? Something else? [↩︎](#fnref-53xERFEpZFqZCf9Dj-12)
13. Though you could think that in an absolute sense it and all the other approaches that aren’t tackling treachery head-on are doomed. [↩︎](#fnref-53xERFEpZFqZCf9Dj-13)
14. I would also prefer other things being equal that EAs focused on long-run x-risk get the recognition for this work rather than others, but as I said above I consider this secondary and think that this agenda is good on the merits, not just as career capital for EAs. [↩︎](#fnref-53xERFEpZFqZCf9Dj-14)
15. There are some innovators for whom the value of being in an area is strictly decreasing in its crowdedness, because their main value-add is to “start something from nothing.” But I don’t think that applies to most contributors, even those who have an extremely large impact eventually (which might even be larger than the innovators’ impact in some cases). [↩︎](#fnref-53xERFEpZFqZCf9Dj-15)
16. Some people have argued that the “verifying long-run solutions” path is dominant because the other stuff is likely to happen anyway, but I’m not convinced. I think all three paths to impact that I laid out are likely to happen one way or another, and there’s room to speed up or improve all of them. I do think there could be some boost to the “verifying long-run solutions” path, but all in all I feel like it’ll be ⅓ to ¾ of the value, not >90% of the value. [↩︎](#fnref-53xERFEpZFqZCf9Dj-16)
17. The most plausible competing pitch in my mind is “get language models to answer questions honestly”, which seems like it could get at the “ascription universality” / “knowing everything the model knows” concept (h/t Evan H, Owen C-B, Owain E). That would narrow the focus to language models and question-answering, and rule out projects like “get non-coders to train a coding model.” I think the “get language models to answer questions honestly” frame is reasonable and I want to see work done under that banner too, but I’m not convinced it’s superior. It considerably narrows the scope of what’s “in”, cutting down on long-run field growth potential, and I think a lot of the projects that are “out” (like the coding project) could be helpful and informative. I also worry that the tagline of “honesty” will encourage people to focus on “avoiding harmful lies that are nonetheless pretty easy for humans to detect”, rather than focusing on regimes where models exceed human performance (see [this objection](#Why_not_just_stick_with_getting_models_not_to_do_bad_things_) for more discussion of that). [↩︎](#fnref-53xERFEpZFqZCf9Dj-17)
18. It’s possible other places, like Google Brain or some other FAANG lab, would also have roles available doing this type of work -- I am just more unsure because there is less of a long-termist alignment researcher presence in those places. [↩︎](#fnref-53xERFEpZFqZCf9Dj-18)
19. Eventually, when models are more strongly superhuman, I think it will get too hard to even tell whether *outcomes* were acceptable, because AI systems could e.g. compromise the cameras and sensors we use to measure outcomes. So relying on outcomes earlier on feels like “kicking the can down the road” rather than “practicing what we eventually want to be good at.” “Don’t kick the can down the road, instead practice what we eventually want to be good at” is the overall ethos/attitude I’m going for with this proposal. [↩︎](#fnref-53xERFEpZFqZCf9Dj-19) |
ea0725f7-8231-4abc-8c2e-fee7e6fb99f8 | trentmkelly/LessWrong-43k | LessWrong | Science like a chef
Alice: Hey honey, I made pasta with tomato sauce!
Bob: Great, let's eat!
Bob: Mmmmm, that's fantastic! It's even better than last time. It's got a sweeter, deeper flavor, which I like.
Alice: Thanks. Last time I only sautéed onions and garlic before adding the tomato puree, but this time I added carrots to the mix for some extra sweetness.
Bob: You know I love your cooking, but I always feel a twinge of skepticism whenever you try to explain why things taste the way they do. Yes, you added carrots to the sauté. But is that the only thing you did differently?
Alice: No. I added some butter along with the olive oil to give it some smoothness. Ummm. I threw some red wine in to the sauce. We didn't have any basil leaves today so I just left that out of the recipe.
Bob: So how do you know that the carrots are what made it taste sweeter? How do you know that it wasn't the butter, the red wine, or the absence of basil leaves?
Alice: Well, the red wine is acidic, so that wouldn't make it taste sweet. I think the butter helped make it taste richer and smoother. The basil leaves add flavor, so leaving them out probably subtracted from the deepness, rather than adding to it.
Bob: Sure, that all sounds plausible. But how can you know any of this?
Alice: It's just common sense. I know that butter is rich. I know that red wine is acidic. I know that basil is flavorful. So I know that adding them will have the corresponding effects on the sauce.
Bob: But isn't it possible that things work differently when you combine them? Sure, red wine is acidic by itself, but isn't it possible that when combined with sautéed onions, garlic and carrots, and olive oil and butter, and fried tomato paste, and whatever spices you used, and cooked. Isn't it possible that after all of that, some chemical reactions occur that cause the red wine to be sweet instead of acidic?
Alice: I don't think so. I think the sweetness came from the carrots. But yeah, I suppose it is possible. So what do y |
0f120da6-7582-4a08-a23d-ef8a6a93770e | trentmkelly/LessWrong-43k | LessWrong | AXRP Episode 10 - AI’s Future and Impacts with Katja Grace
YouTube link
This podcast is called AXRP, pronounced axe-urp and short for the AI X-risk Research Podcast. Here, I (Daniel Filan) have conversations with researchers about their papers. We discuss the paper and hopefully get a sense of why it’s been written and how it might reduce the risk of artificial intelligence causing an existential catastrophe: that is, permanently and drastically curtailing humanity’s future potential.
When going about trying to ensure that AI does not cause an existential catastrophe, it’s likely important to understand how AI will develop in the future, and why exactly it might or might not cause such a catastrophe. In this episode, I interview Katja Grace, researcher at AI Impacts, who’s done work surveying AI researchers about when they expect superhuman AI to be reached, collecting data about how rapidly AI tends to progress, and thinking about the weak points in arguments that AI could be catastrophic for humanity.
Topics we discuss:
* AI Impacts and its research
* How to forecast the future of AI
* Results of surveying AI researchers
* Work related to forecasting AI takeoff speeds
* How long it takes AI to cross the human skill range
* How often technologies have discontinuous progress
* Arguments for and against fast takeoff of AI
* Coherence arguments
* Arguments that AI might cause existential catastrophe, and counter-arguments
* The size of the super-human range of intelligence
* The dangers of agentic AI
* The difficulty of human-compatible goals
* The possibility of AI destroying everything
* The future of AI Impacts
* AI Impacts vs academia
* What AI x-risk researchers do wrong
* How to follow Katja’s and AI Impacts’ work
Daniel Filan: Hello everybody. Today, I’ll be speaking with Katja Grace, who runs AI Impacts, a project that tries to document considerations and empirical evidence that bears on the longterm impacts of sophisticated artificial intelligence. We’ll be talking about her paper, |
9b577f51-344a-4b3a-863e-abbf323a08f3 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Ideal governance (for companies, countries and more)
*Cross-posted from [Cold Takes](https://www.cold-takes.com/ideal-governance-for-companies-countries-and-more/)*
*Click lower right to download or find on Apple Podcasts, Spotify, Stitcher, etc.*
I'm interested in the topic of **ideal governance:** what kind of governance system should you set up, if you're starting from scratch and can do it however you want?
Here "you" could be a company, a nonprofit, an informal association, or a country. And "governance system" means a Constitution, charter, and/or bylaws answering questions like: "Who has the authority to make decisions (Congress, board of directors, etc.), and how are they selected, and what rules do they have to follow, and what's the process for changing those rules?"
I think this is a very different topic from something like "How does the US's Presidential system compare to the Parliamentary systems common in Europe?" The idea is not to look at today's most common systems and compare them, but rather to generate options for setting up systems radically different from what's common today.
I don't currently know of much literature on this topic (aside from the literature on [social choice theory](https://en.wikipedia.org/wiki/Social_choice_theory) and especially [voting methods](https://plato.stanford.edu/entries/voting-methods/), which covers only part of the topic). This post **describes [the general topic](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#common-governance-structures-today) and [why I care](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#why-do-i-care), partly in the hopes that people can point me to any literature I've missed.** Whether or not I end up finding any, I'm likely to write more on this topic in the future.
Outline of the rest of the piece:
* I'll outline some common governance structures for countries and major organizations today, and highlight **how much room there is to try different things** that don't seem to be in wide use today. [More](www.cold-takes.com/ideal-governance-for-companies-countries-and-more/#common-governance-structures-today)
* I'll discuss **why I care about this question.** I have a few very different reasons:
+ *A short-term, tangible need:* over the last several years, I've spoken with several (more than 3) organizations that feel no traditional corporate governance structure is satisfactory, because the stakes of their business are too great and society-wide for shareholder control to make sense, yet they are too early-stage and niche (and in need of nimbleness) to be structured like a traditional government. An example would be an artificial intelligence company that could end up with a normal commercial product, or could end up bringing about the [most important century of all time for humanity](https://www.cold-takes.com/most-important-century/). I wish I could point them to someone who was like: "I've read all of, and written much of, the literature on what your options are. I can walk you through the pros and cons and help you pick a governance system that balances them for your needs."
+ *A small probability of a big future win.* The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates. At some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between. A significant literature and set of experts on "ideal governance" could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world could learn from.
+ *A weird, out-of-left-field application.* Some of my interest in this topic actually comes via my interest in [moral uncertainty](https://80000hours.org/articles/moral-uncertainty/): the question of what it's ethical to do when one is *struggling between more than one theory of ethics,* with radically different implications. This is hard to explain, but I try [below.](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#MoralUncertainty)
* I'll describe a bit more what I think **literature on this question could look like** (and what already exists that I know of), partly to guide readers who might be able to help me find more.
Common governance structures today
----------------------------------
All of these are simplified; I'm trying to illustrate the basic idea of what questions "ideal governance" is asking.
* A standard (e.g., public) corporation works like this: it has shareholders, assigned one vote per share (not per person), who elect a board of directors that governs by majority. The board generally appoints a CEO that it entrusts with day-to-day decisions. There is a "constitution" of sorts (the Articles of Incorporation and bylaws) and a lot more wrinkles in terms of how directors are selected, but that's the basic idea.
* A standard nonprofit is like a corporation, but entirely lacking the shareholder layer - it's governed directly by the board of directors. (I find something weird about a structure this simple - a simple board majority can do literally anything, even though the board of directors is often a somewhat random assortment of donors, advisors, etc.)
* The US federal government is a lot more complex. It splits authority between the House of Representatives, the Senate, the Presidency and the Supreme Court, all of which have specific appointment procedures, term limits, etc. and are meta-governed by a Constitution that requires special measures to change. There are lots of specific choices that were made in designing things this way, and lots of things that could've been set up differently in the 18th century that would probably still matter today.
* Other democracies tend to have governments that differ in a lot of ways ([e.g.](https://borgenproject.org/parliamentary-system-versus-presidential-system/)), while being based on broadly similar principles: voters elect representatives to more than one branch of government, which then divide up (and often can veto each other on) laws, expenditures, etc.
* When I was 13, the lunch table I sat at established a Constitution with some really strange properties that I can't remember. I think there was a near-dictatorial authority who rotated daily, with others able to veto their decisions by assembling supermajorities or maybe singing silly songs or something.

In addition to the design choices shown in the diagrams, there are a lot of others:
* Who votes, how often, and what voting system is used?
* How many representatives are there in each representative body? How are they divided up (one representative per geographic area, or [party-list proportional representation](https://en.wikipedia.org/wiki/Proportional_representation#How_party-list_PR_works), or something else)?
* What term limits exist for the different entities?
* Do particular kinds of decisions require supermajorities?
* Which restrictions are enshrined in a hard-to-change Constitution (and how hard is it to change), vs. being left to the people in power at the moment?
One way of thinking about the "ideal governance" question is: **what kinds of designs could exist that aren't common today? And how should a new organization/country/etc. think about what design is going to be best for its purposes, beyond "doing what's usually done"?**
For any new institution, it seems like the stakes are potentially high - in some important sense, picking a governance system is a "one-time thing" (any further changes have to be made using the rules of the existing system[1](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn1)).
Perhaps because of this, there doesn't seem to be much use of innovative governance designs in high-stakes settings. For example, here are a number of ideas I've seen floating around that seem cool and interesting, and ought to be considered if someone could set up a governance system however they wanted:
* **Sortition,** or choosing people randomly to have certain powers and responsibilities. An extreme version could be: "Instead of everyone voting for President, randomly select 1000 Americans; give them several months to consider their choice, perhaps paid so they can do so full-time; then have them vote."
+ The idea is to pick a subset of people who are both (a) representative of the larger population (hence the randomness); (b) will have a stronger case for putting serious time and thought into their decisions (hence the small number).
+ It's solving a similar problem that "representative democracy" (voters elect representatives) is trying to solve, but in a different way.
* **Proportional** **decision-making.** Currently, if Congress is deciding how to spend $1 trillion, a coalition controlling 51% of the votes can control all $1 trillion, whereas a coalition controlling 49% of the votes controls $0. Proportional decision-making could be implemented as "Each representative controls an equal proportion of the spending," so a coalition with 20% of the votes controls 20% of the budget. It's less clear how to apply this idea to other sorts of bills (e.g., illegalizing an activity rather than spending money), but there are plenty of possibilities.[2](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn2)
* **[Quadratic voting](https://www.radicalxchange.org/concepts/quadratic-voting/)**, in which people vote on multiple things at once, and can cast more votes for things they care about more (with a "quadratic pricing rule" intended to make the number of votes an "honest signal" of how much someone cares).
* **Reset/Jubilee:** maybe it would be good for some organizations to periodically redo their governance mostly from scratch, subject only to the most basic principles. Constitutions could contain a provision like "Every N years, there shall be a new Constitution selected. The 10 candidate Constitutions with the most signatures shall be presented on a ballot; the Constitution receiving the most votes is the new Constitution, except that it may not contradict or nullify this provision. This provision can be prevented from occurring by [supermajority provision], and removed entirely by [stronger supermajority]."
* More examples in a footnote.[3](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn3)
If we were starting a country or company from scratch, which of the above ideas should we integrate with more traditional structures, and how, and what else should we have in our toolbox? That's the question of ideal governance.
Why do I care?
--------------
I have one "short-term, tangible need" reason; one "small probability of a big future win" reason; and one "weird, out-of-left-field" reason.
**A short-term, tangible need: companies developing AI, or otherwise aiming to be working with huge stakes.** Say you're starting a new company for developing AI systems, and you believe that you could end up building [AI with the potential to change the world forever](https://www.cold-takes.com/most-important-century/).
* The standard governance setup for a corporation would hand power over all the decisions you're going to make to your shareholders, and by default most of your shares are going to end up held by people and firms that invested money in your company. Hopefully it's clear why this doesn't seem like the ideal setup for a company whose decisions could be world-changing. A number of AI companies have acknowledged the basic point that "Our ultimate mission should NOT just be: make money for shareholders," and that seems like a good thing.
* One alternative would be to set up like a nonprofit instead, with all power vested in a board of directors (no shareholder control). Some issues are that (a) this cuts shareholders out of the loop completely, which could make it pretty hard to raise money; (b) according to me at least, this is just a weird system of governance, for reasons that are not super easy to articulate concisely but I'll take a shot in a footnote[4](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn4) (and possibly write more in the future).
* Another alternative is a setup that is somewhat common among tech companies: 1-2 founders hold enough shares to keep control forever, so you end up with essentially a dictatorship. This also ... leaves something to be desired.
* Or maybe a company like this should just set up more like a government from the get-go, offering everyone in the world a vote via some complex system of representation, checks and balances. But this seems poorly suited to at least the relatively early days of a company, when it's small and its work is not widely known or understood. But then, how does the company handle the transition from the latter to the former? And should the former be done exactly in the standard way, or is there room for innovation there?
Over the last several years, I've spoken with heads of several (more than 3) organizations that struggle between options like the above, and have at least strongly considered unusual governance setups. I wish I could point them to someone who was like: "I've read all of, and written much of, the literature on what your options are. I can walk you through the pros and cons and help you pick a governance system that balances them for your needs."
But right now, I can't, and I've seen a fair amount of this instead: "Let's just throw together the best system we can, based mostly on what's already common but with a few wrinkles, and hope that we figure this all out later." I think this is the right solution given how things stand, but I think it really does get continually harder to redesign one's governance as time goes on and more [stakeholders](https://www.cold-takes.com/empowerment-and-stakeholder-management/) enter the picture, so it makes me nervous.
Similar issues could apply to mega-corporations (e.g., [FAANG](https://www.investopedia.com/terms/f/faang-stocks.asp)) that are arguably more powerful than what the standard shareholder-centric company setup was designed for. Are there governance systems they could adopt that would make them more broadly accountable, without copying over all the pros and cons of full-blown representative democracy as implemented by countries like the US?
**A small probability of a big future win: future new states.** The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates (e.g., I believe you see almost none of [the things I listed above](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#AssortedIdeas)), and probably relatedly, there seems to be remarkably little variety and experimentation with policy. Policies that many believe could be huge wins - such as [dramatically expanded immigration](https://smile.amazon.com/dp/B07YRKYKZ3/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1), [land value taxation](https://en.wikipedia.org/wiki/Land_value_tax), "consumer reports"-style medical approvals,[5](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn5) [drug decriminalization](https://michaelpollan.com/books/how-to-change-your-mind/), and [charter cities](https://chartercitiesinstitute.org/intro/) - don't seem to have gotten much of a trial anywhere in the world.
At some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between.
By default I expect future Constitutions to resemble present ones an awful lot. But maybe, at some future date, there will be a large "ideal governance" literature and some points of expert consensus on innovative governance designs that somebody really ought to try. That could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world could learn from.
**An out-of-left-field application for "ideal governance."** This is going to veer off the rails, so remember to skip to the next section if I lose you.
Some of my interest in this topic actually comes via my interest in [moral uncertainty](https://80000hours.org/articles/moral-uncertainty/): the question of what it's ethical to do when one is *struggling between more than one theory of ethics,* with radically different implications.
For example, there are arguments that our [ethical decisions should be dominated by concern for ensuring that as many people as possible will someday get to exist](https://www.cold-takes.com/debating-myself-on-whether-extra-lives-lived-are-as-good-as-deaths-prevented/). I really go back and forth on how much I buy these arguments, but I'm definitely somewhere between 10% convinced and 50% convinced. So ... say I'm "20% convinced" of some view that says preventing human extinction[6](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn6) is the [overwhelmingly](https://www.cold-takes.com/defending-one-dimensional-ethics/) most important consideration for at least some dimensions of ethics (like where to donate), and "80% convinced" of some more common-sense view that says I should focus on some cause unrelated to human extinction.[7](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn7) How do I put those two together and decide what this means for actual choices I'm making?
The closest thing I've seen to a reasonable-seeming answer is the idea of a [moral parliament](https://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html): I should act as though I'm run by a Parliament with 80 members who believe in "common-sense" ethics, and 20 members who believe in the "preventing extinction is overwhelmingly important" idea. But with default Parliament rules, this would just mean the 80 members can run the whole show, without any compromise with the 20.
And so, a [paper on the "moral parliament" idea](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf) tries to make it work by ... introducing a completely new governance mechanism that I can't find any other sign of someone else ever talking about, "proportional chances voting" (spelled out in a footnote).[8](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn8) I think this mechanism has its own issues,[9](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn9) but it's an attempt to ensure something like "A coalition controlling 20% of the votes has 20% of the effective power, and has to be compromised with, instead of being subject to the tyranny of the majority."
My own view (which I expect to write more about in the future) is that governance is roughly the right metaphor for "moral uncertainty": I am torn by [multiple different sides of myself](https://www.cold-takes.com/defending-one-dimensional-ethics/), with different takes on what it means to be a good person, and the problem of getting these different sides of myself to reach a decision together is like the problem of getting different citizens (or shareholders) to reach a decision together. The more we can say about what ideal governance looks like, the more we can say about how this ought to work - and the better I expect this "moral parliament"-type idea to end up looking, compared to alternatives.[10](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fn10)
The literature I'm looking for
------------------------------
Ideal governance seems like the sort of topic for which there should be a "field" of "experts," studying it. What would such study look like? Three major categories come to mind:
**Brainstorming** ideas such as [those I listed above](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#AssortedIdeas) - innovative potential ways of solving classic challenges of governance, such as reconciling "We want to represent all the voters" with "We want decisions to be grounded in expertise and high engagement, and voters are often non-expert and not engaged."
I've come across various assorted ideas in this category, including [quadratic voting](https://www.radicalxchange.org/concepts/quadratic-voting/), [futarchy](https://en.wikipedia.org/wiki/Futarchy), and [proportional chances voting](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf), without seeing much sign that these sit within a broader field that I can skim through to find all the ideas that are out there.
**Economics-style theory** in which one asks questions like: "If we make particular assumptions about who's voting, what information they have and lack, how much they suffer from [bounded rationality](https://en.wikipedia.org/wiki/Bounded_rationality), and how we define 'serving their interests' (see below), what kind of governance structure gets the best outcome?"
[Social choice theory](https://en.wikipedia.org/wiki/Social_choice_theory), including on [voting methods](https://plato.stanford.edu/entries/voting-methods/), tackles the "how we define 'serving their interests'" part of this. But I'm not aware of people using similar approaches to ask questions like "Under what conditions would we want 1 chamber of Congress vs. 2, or 10? 100 Senators vs. 500, or 15? A constitution that can be modified by simple majority, vs. 2/3 majority vs. consensus? Term limits? Etc. etc. etc."
**Empirical research (probably qualitative):** Are there systematic reviews of unusual governance structures tried out by companies, and what the results have been? Of smaller-scale experiments at co-ops, group houses and lunch tables?
To be clear, I think the most useful version of this sort of research would probably be very qualitative - collecting reports of what problems did and didn't come up - rather than asking questions like "How does a particular board structure element affect company profits?"
One of the things I expect to be tricky about this sort of research is that I think a lot of governance comes down to things like "What sorts of people are in charge?" and "What are the culture, expectations, norms and habits?" A setup that is "officially" supposed to work one way could evolve into something quite different via informal practices and "soft power." However, I think the formal setup (including things like "what the constitution says about the principles each governance body is supposed to be upholding") can have big effects on how the "soft power" works.
If you know where to find research or experts along the lines of the above, please share them [in the comments](https://www.lesswrong.com/posts/slug/ideal-governance-for-companies-countries-and-more#comments) or [using this form](https://forms.gle/J4yCHcRL8VAC6mKR6) if you don't want them to be public.
I'll likely write about what I come across, and if I don't find anything new, I'll likely ramble some more about ideal governance. So either way, there will be more on this topic!
---
Footnotes
---------
1. Barring violent revolution in the case of countries. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref1)
2. An example would be the "proportional chances voting" idea described [here](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf). [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref2)
3. * **Proxying/[liquid democracy](https://en.wikipedia.org/wiki/Liquid_democracy)**, or allowing voters to transfer their votes to other voters. (This is common for corporations, but not for governments.) This could be an alternative or complement to electing representatives, solving a similar problem (we want lightly-engaged voters to be represented, but we also want decisions ultimately made using heavy engagement and expertise). At first glance it may seem to pose a risk that people will be able to "buy votes," but I don't actually think this is necessarily an issue (proxying could be done anonymously and on set schedules, like other votes).
* **Soft term limits:** the more terms someone has served, the greater a supermajority they need to be re-elected. This could be used to strike a balance between the advantages of term limits (avoiding "effectively unaccountable" incumbents) and no-term-limits (allowing great representatives to keep serving).
* **Formal technocracy/meritocracy**: Using hard structures (rather than soft norms) to assign authority to people with particular expertise and qualifications. An extreme example would be [futarchy](https://en.wikipedia.org/wiki/Futarchy), in which prediction markets directly control decisions. A simpler example would be structurally rewarding representatives (via more votes or other powers) based on assessments of their track records (of predictions or decisions), or factual understanding of a subject. This seems like a tough road to go down by default, as any mechanism for evaluating "track records" and "understanding" can itself be politicized, but there's a wide space of possible designs. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref3)
4. Most systems of government have a sort of funnel from "least engaged in day to day decisions, but most ultimately legitimate representatives of whom the institution is supposed to serve" (shareholders, voters) to "most engaged in day to day decisions, but ultimately accountable to someone else" (chief executive). A nonprofit structure is a very short funnel, and the board of directors tends to be a somewhat random assortment of funders, advisors, people who the founders just thought were cool, etc. I think they often end up not very accountable (to anyone) or engaged in what's going on, such that they have a hard time acting when they ought to, and the actions they do take are often kind of random.
I'm not saying there is a clearly better structure available for this purpose - I think the weirdness comes from the fact that it's so unclear who should go in the box normally reserved for "Shareholders" or "Voters." It's probably the best common structure for its purpose, but I think there's a lot of room for improvement, and the stakes seem high for certain organizations. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref4)
5. Context in [this Marginal Revolution post](https://marginalrevolution.com/marginalrevolution/2015/08/is-the-fda-too-conservative-or-too-aggressive.html), which links to [this 2005 piece on a "consumer reports" model for the FDA](https://www.independent.org/news/article.asp?id=1500). [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref5)
6. Or "existential catastrophe" - something that drastically curtails humanity's future, even if it doesn't drive us extinct. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref6)
7. This isn't actually where I'm at, because I think the leading existential risks are a big enough deal that I would want to focus on them even if I completely ignored the philosophical argument that the future is overwhelmingly important. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref7)
8. Let's say that 70% of the Parliament members vote for bill X, and 30% vote against. "Proportional chance voting" literally uses a weighted lottery to pass bill X with 70% probability, and reject it with 30% probability (you can think of this like rolling a 10-sided die, and passing the bill if it's 7 or under).
A key part of this is that the members are supposed to negotiate before voting and holding the lottery. For example, maybe 10 of the 30 members who are against bill X offer to switch to supporting it if some change is made. The nice property here is that rather than having a "tyranny of the majority" where the minority has no bargaining power, we have a situation where the 70-member coalition would still love to make a deal with folks in the minority, to further increase the probability that they get their way.
Quote from the paper that I am interpreting: "Under proportional chances voting, each delegate receives a single vote on each motion. Before they vote, there is a period during which delegates may negotiate: this could include trading votes on one motion for votes on another, introducing novel options for consideration within a given motion, or forming deals with others to vote for a compromise option that both consider to be acceptable. The delegates then cast their ballots for one particular option in each motion, just as they might in a plurality voting system. But rather than determining the winning option to be the one with the most votes, each option is given a chance of winning proportional to its share of the votes." [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref8)
9. What stops someone who lost the randomized draw from just asking to hold the same vote again? Or asking to hold a highly similar/related vote that would get back a lot of what they lost? How does that affect the negotiated equilibrium? [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref9)
10. Such as "[maximize expected choice-worthiness](https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5bc7224a0852299b5cd60e86/1539777103255/Why+Maximize+Expected+Choice-Worthiness%3F.pdf)," which I am not a fan of for reasons I'll get to in the future. [↩](https://www.cold-takes.com/p/7a67e036-1684-43bc-ab2b-5e997783f125/#fnref10) |
fc7d401e-1055-4a93-b2a9-58e1830b69d3 | trentmkelly/LessWrong-43k | LessWrong | How To Raise Others’ Aspirations in 17 Easy Steps
Title taken from Tyler Cowen: The high-return activity of raising others’ aspirations
This is taken from a twitter thread (which I also wrote up on my website) that I was - aptly for the content - encouraged to put in non-Twitter places. Thanks for the encouragement!
Things you can say to people in ten seconds that sometimes produce insanely outsized effects:
1. Yeah, someone *should* do that. Why not you?
2. Is there something you could do about that problem in the next five minutes?
3. That's a great thought - have you written it up somewhere? I'd be excited to share it if so.
4. Should you write a book / blog?
5. You want to do that but don't think you will? Do you want to make a concrete plan now?
6. Do you want me to ask in a week / month if you've done that or how it's going?
7. Feeling stuck sucks. Want to brainstorm together?
8. Feeling stuck sucks. Have you spent a five minute timer generating options?
9. What's the twenty minute / minimum viable product version of this overwhelming-feeling thing?
10. Do you want me to sit with you while you fill out that application/write that email? Too bad, it's happening.
11. Is it worth just asking / cold emailing and seeing?
12. Is there anyone I know who you'd like to be introduced to?
13. Do you want to just set an alarm on your phone now as a reminder? (from Damon Sasi)
14. Do you know anyone else who might have struggled with or succeeded at that? Have you talked to them about it? (from Damon Sasi)
15. Who do you know who you could ask for help from?
16. You are Allowed (to try, to do, to build, to ask). I strongly recommend this link of things you are allowed to do.
* (Or do you not need permission in the first place?)
17. Do you sort of already know what you're going to do / have your mind made up about this? |
16b40986-f7b2-4251-84bd-cad30a69560b | trentmkelly/LessWrong-43k | LessWrong | Rationality Quotes 22
"Two roads diverged in the woods. I took the one less traveled, and had to eat bugs until Park rangers rescued me."
-- Jim Rosenberg
"Lying to yourself about specific actions is easier than re-defining the bounds of your imagined identity... When I see once-ethical men devolve into moral grey, they still identify as upstanding."
-- Ben Casnocha
"Every year buy a clean bed sheet, date it, and lay it over the previous layer of junk on your desk."
-- Vernor Vinge, The Blabber
"Like first we tossed out the bath water, then the baby, and like finally the whole tub."
-- John R. Andrews
"I have no wisdom. Yet I heard a wise man - soon to be a relative of marriage - say not long ago that all is for the best. We are but dreams, and dreams possess no life by their own right. See, I am wounded. (Holds out his hand.) When my wound heals, it will be gone. Should it with its bloody lips say it is sorry to heal? I am only trying to explain what another said, but that is what I think he meant."
-- Gene Wolfe, The Claw of the Conciliator
"On a grand scale we simply want to save the world, so obviously we're just letting ourselves in for a lot of disappointment and we're doomed to failure since we didn't pick some cheap-ass two-bit goal like collecting all the Garbage Pail Kids cards."
-- Nenslo
"He promised them nothing but blood, iron, and fire, and offered them only the choice of going to find it or of waiting for it to find them at home."
-- John Barnes, One For the Morning Glory |
242a63e1-a8d6-4740-8487-58c93c43700e | StampyAI/alignment-research-dataset/arbital | Arbital | Strategic AGI typology
A list of [advanced agent](https://arbital.com/p/2c) types, in classes broad enough to correspond to different *strategic scenarios* - AIs that can do different things, can only be built under different circumstances, or are only desirable given particular background assumptions. This typology isn't meant to be exhaustive.
- [https://arbital.com/p/1g3](https://arbital.com/p/1g3)
- [Genie](https://arbital.com/p/6w)
- [Oracle](https://arbital.com/p/6x)
- [Known-Algorithm Non-Self-Improving](https://arbital.com/p/1fy) agent
- [Approval-Directed](https://arbital.com/p/) agent |
c2b7c273-29a2-44a6-9379-94fc478669bf | trentmkelly/LessWrong-43k | LessWrong | Rationality and Cancer
Today, my dentist found a possible oral cancer.
I'm 31, a non-smoker, in good health. I know the research showing that doctors ignore base rates and overestimate your chances of cancer. (I asked the doctor the base rate, he didn't know.) I know that we grossly overprescribe biopsies and surgeries, when it would be better to just wait and see. But I'm having it removed and biopsied on Friday, even though I don't have dental insurance and it's costing me $1,000 of my own money.
Why?
I thought this would be an interesting case study: Introspectively, what's going on to make me ignore my rationalist training, ignore the external data, and choose what I know is probably the less optimal path?
My first thought is embarrassment: If I do nothing, and it turns out I have cancer, will my support network roll their eyes and blame me for not being more aggressive? My feeling is yes, that even though they wouldn't do it to my face, they would secretly blame me, and become less available.
My second thought is fear of the unknown: I roughly know what the biopsy will entail. It's a light anesthesia, a few stitches, and 1-2 days of recovery. No big deal. And $1,000 isn't tiny, but it's not a big deal for me, either. In contrast, what happens if I don't do the biopsy? Huge, scary unknown. And, even if I know that I only have a 0.01% chance of having cancer (to guess a number), I also know my emotional mind is bad at math, and I'll have great difficulty controlling its worry. And so, it's rational to buy some level of anti-worry insurance -- I don't know what the rational value of that anti-worry insurance is, and I don't know if that value exceeds $1,000, but clearly, anti-worry insurance has some positive value, and probably a fairly high value.
There are other considerations. I need some wisdom teeth removed, and we're doing them at the same time, so adding the biopsy doesn't affect the recovery time. And I have a 6-week trip to Australia coming up, and I'd hate to have probl |
c4bce85d-815e-48a9-a37b-c450bc302c03 | trentmkelly/LessWrong-43k | LessWrong | Beware Trivial Fears
Does the surveillance state affect us? It has affected me, and I didn't realize that it was affecting me until recently. I give a few examples of how it has affected me:
1. I was once engaged in a discussion on Facebook about Obama's foreign policy. Around that time, I was going to apply for a US visa. I stopped the discussion early. Semi-consciously, I was worried that what I was writing would be checked by US visa officials and would lead to my visa being denied.
2. I was once really interested in reading up on the Unabomber and his manifesto, because somebody mentioned that he had some interesting ideas, and though fundamentally misguided, he might have been onto something. I didn't explore much because I was worried---again semi-consciously---that my traffic history would be logged on some NSA computer somewhere, and that I'd pattern match to the Unabomber (I'm a physics grad student, the Unabomber was a mathematician).
3. I didn't visit Silk Road as I was worried that my visits would be traced, even though I had no plans of buying anything.
4. Just generally, I try to not search for some really weird stuff that I want to search for (I'm a curious guy!).
5. I was almost not going to write this post.
And these are just the ones that I became conscious of. I wonder how many more have slipped under the radar.
Yes, I know these fears are silly. In fact, writing them out makes them feel even more silly. But they still affected my behavior. Now, I may be atypical. But I'm sure I'm not that atypical. I'm sure many, many people refrain from visiting and exploring parts of the Internet and writing things on different forums and blogs because of the fear of being recorded and the data being used against them. Especially susceptible to this fear are immigrants.
In Beware Trivial Inconveniences, Yvain points out that the Great Firewall of China is very easy to bypass but the vast majority of Chinese people don't bypass it because it's a trivial inconvenience.
|
21e91725-c455-4669-a9fb-4553650a2a0f | StampyAI/alignment-research-dataset/arxiv | Arxiv | Evidence with Uncertain Likelihoods
Evidence with Uncertain Likelihoods
Joseph Y. Halpern
Cornell University
Ithaca, NY 14853 USA
halpern@cs.cornell.eduRiccardo Pucella
Cornell University
Ithaca, NY 14853 USA
riccardo@cs.cornell.edu
Abstract
An agent often has a number of hypotheses, and
mustchooseamongthembasedonobservations,
oroutcomesofexperiments. Eachoftheseobser-
vations can be viewed as providing evidence for
oragainstvarioushypotheses. Alltheattemptsto
formalize this intuition up to now have assumed
that associated with each hypothesis hthere is
alikelihood function µh, which is a probabil-
ity measure that intuitively describes how likely
each observation is, conditional on hbeing the
correct hypothesis. We consider an extension of
this framework where there is uncertainty as to
which of a number of likelihood functions is ap-
propriate, and discuss how one formal approach
to defining evidence, which views evidence as a
function from priors to posteriors, can be gener-
alized to accommodate this uncertainty.
1 Introduction
An agent often has a number of hypotheses, and must
chooseamongthembasedonobservations,oroutcomesof
experiments. Each of these observations can be viewed as
providing evidence for or against various hypotheses. The
following simple example illustrates the situation.
Example 1.1 Suppose that Alice and Bob each have a
coin. Alice’s coin is double-headed, Bob’s coin is fair.
Charlie knows all of this. Alice and Bob give their coin
tosomethirdparty,Zoe,whochoosesoneofthecoins,and
tosses it. Charlie is not privy to Zoe’s choice, but gets to
see the outcome of the toss. Charlie is interested in two
events (which are called hypotheses in this context):
A: the coin is Alice’s coin
B: the coin is Bob’s coin.Now Charlie observes the coin land heads. What can he
sayabouttheprobabilityoftheevents AandB? IfCharlie
has no prior probability on AandB, then he can draw no
conclusions about their posterior probability; the probabil-
ity of Acould be any number in [0,1]. The same remains
true if the coin lands heads 100times in a row. /intersectionsq /unionsq
Clearly Charlie learns something from seeing 100 (or even
one)cointosslandheads. Thishastraditionallybeenmod-
eled in terms of evidence: the more times Charlie sees
heads, the more evidence he has for the coin being heads.
There have been a number of ways of modeling evidence
in the literature; see [Kyburg 1983] for an overview. All of
them make use of the likelihood function . More precisely,
they assume that for each hypothesis hof interest, there is
aprobability µh(calledalikelihoodfunction)onthespace
of possible observations. In the example above, if the coin
istossedonce,thetwopossibleobservationsare headsand
tails. Clearly µA(heads ) = 1 /2andµB(heads ) = 1. If
the coin is tossed 100 times, then there are 2100possible
observations (sequences of coin tosses). Again, µAand
µBput obvious probabilities on this space. In particular, if
100heads is the observation of seeing 100 heads in a row,
thenµA(100heads ) = 1 /2100andµB(100heads ) = 1.
Most of the approaches compute the relative evidence of a
particular observation obfor two hypotheses AandBby
comparing µA(ob)andµB(ob).
Our goal in this paper is to understand what can be done
when an hypothesis hdoes not determine a unique proba-
bility µh. To understand the issues that arise, consider the
following somewhat contrivedvariant of Example 1.1.
Example 1.2 Suppose that Alice has two coins, one that
is double-headed and one that is biased 3/4 towards heads,
and chooses which one to give Zoe. Again, Zoe chooses
eitherAlice’scoinorBob’scoinandtossesit. Charlie,who
knowsthewholesetup,seesthecoinlandheads. Whatdoes
this tell him about the likelihood that the coin tossed was
Alice’s? /intersectionsq /unionsq
The problem is that now we do not have a probability µA
on observations corresponding to the coin being Alice’s
coin,sinceCharliedoesnotknowifAlice’scoinisdouble-
headed or biased 3/4towards heads. It seems that there is
an obvious solution to this problem. We simply split the
hypothesis “the coin is Alice’s coin” into two hypotheses:
A1: the coin is Alice’s coin and it is double-headed
A2: the coin is Alice’s coin and it is the biased coin.
Now we can certainly apply standard techniques for com-
puting evidence to the three hypotheses A1,A2, and B.
The question now is what do the answers tell us about the
evidence in favor of the coin being Alice’s coin?
SituationslikethatinExample1.2arisefrequently. Forex-
ample,considerarobotequippedwithanunreliablesensor
for navigation. This sensor returns the distance to the wall
in front of the robot, with some known error. For simplic-
ity, suppose that distances are measured in integral units
0,1,2, . . ., and that if the wall is at distance m, then the
sensorwillreturnareadingof m−1withprobability 1/4,
areadingof mwithprobability 1/2,andareadingof m+1
with probability 1/4. Suppose the robot wants to stop if it
is exactly close to the wall, where “close” is interpreted
as being within 3 units of the wall, and go forward if it is
farther than 3 units. So again, we have two hypotheses of
interest. However, while for each specific distance mwe
have a probability µmon sensor readings, we do not have
a probability on sensor readings corresponding to the hy-
pothesisfar: “the robot is farther than 3 from the wall”.
Whilestandardtechniqueswillcertainlygiveustheweight
of evidence of a particular sensor reading for the hypothe-
sis “the robot is distance mfrom the wall”, it is not clear
what the weight of evidence should be for the hypothesis
far.
To examine the problem carefully, we consider one partic-
ular approach for determining the weight of evidence, due
toShafer[1982],whichisageneralizationofamethodad-
vocated by Good [1950]. Let an evidence space Econsist
of a set Hof possible hypotheses, a set Oof observations,
and a probability µhon observations for each h∈ H. We
take the weight of evidence for hypothesis hprovided by
observation obin evidence space E, denoted wE(ob, h), to
be
wE(ob, h) =µh(ob)/summationtext
h/prime∈Hµh/prime(ob).
It is easy to see that wE(ob,·)acts like a probability on
H, in that/summationtext
h∈HwE(ob, h) = 1. With this definition,
it is easy to compute the weight of evidence for Alice’s
coin when Charlie sees heads in Example 1.1 is 2/3, and
the weight of evidence when Charlie sees 100 heads is
2100/(2100+ 1). As expected, the more often Charlie sees
heads, the more evidence he has in favor of the coin being
double-headed (provided that hedoes not see tails).In Example 1.2, if we consider the three hypotheses A1,
A2,andB,thentheweightofevidencefor A1whenChar-
lie sees heads is 1/(1 + 3 /4 + 1/2) = 4 /9; similarly, the
weight of evidence for A2is1/3and the weight of ev-
idence for Bis2/9. Since weight of evidence acts like
a probability, it might then seem reasonable to take the
weight of evidence for A(the coin used was Alice’s coin)
to be 4/9 + 1 /3 = 7 /9. (Indeed, this approach was im-
plicitlysuggestedinourearlierpaper[HalpernandPucella
2003a].) But is this reasonable? A first hint that it might
not be is the observation that the weight of evidence for
Ais higher in this case than it is in the case where Alice
certainly had a double-headed coin.
To analyze this issue, we need an independent way of
understanding what evidence is telling us. As observed
by Halpern and Fagin [1992], weight of evidence can be
viewed as a function from priors to posteriors. That is,
givenaprioronhypotheses,wecancombinethepriorwith
the weight of evidence to get the posterior. In particular,
if there are two hypotheses, say H1andH2, the weight of
evidence for H1isα, and the prior probability of H1isβ,
thentheposteriorprobabilityof H1(thatis,theprobability
ofH1in light of the evidence) is
αβ
αβ+ (1−α)(1−β).
Thus, for example, by deciding to perform an action when
the weight of evidence for Ais2/3(i.e., after Charlie has
seen the coin land heads once), Charlie is assured that, if
the prior probability of Ais at least .01, then the posterior
probabilityof Aisatleast 2/11;similarly,afterCharliehas
seen 100 heads, if the prior probability of Ais at least .01,
then the posterior probability of Ais at least 2100/(2100+
99).
ButnowconsiderthesituationinExample1.2. Again,sup-
pose that the prior probability of Ais at least .01. Can
we conclude that the posterior probability of Ais at least
.01(7/9)/(.01(7/9) +.99(2/9)) = 7 /205? As we show,
we cannot. The calculation (αβ)/(αβ+ (1−α)(1−β))
is appropriate only when there are two hypotheses. If the
hypotheses A1andA2have priors α1andα2and weights
of evidence β1andβ2, then the posterior probability of A
is
α1β1+α2β2
α1β1+α2β2+ (1−α1−α2)(1−β1−β2),
which is in general quite different from
(α1+α2)(β1+β2)
(α1+α2)(β1+β2) + (1 −α1−α2)(1−β1−β2).
Moreover, it is easy to show that if β1> β2(as is the case
here), then the posterior of Ais somewhere in the interval
/bracketleftbiggα2β2
α2β2+ (1−α2)(1−β2),α1β1
α1β1+ (1−α1)(1−β1)/bracketrightbigg
.
That is, we get a lower bound on the posterior by acting as
if the only possible hypotheses are A2andB, and we get
anupperboundbyactingasiftheonlypossiblehypotheses
areA1andB.
Inthispaper,wegeneralizethisobservationbyprovidinga
general approach to dealing with weight of evidence when
the likelihood function is unknown. In the special case
when the likelihood function is known, our approach re-
duces to Shafer’s approach. Roughly speaking, the idea
is to consider all possible evidence spaces consistent with
the information. The intuition is that one of them is the
right one, but the agent trying to ascribe a weight of evi-
dence does not know which. For example, in Example 1.2,
the evidence space either involves hypotheses {A1, B}or
hypotheses {A2, B}: eitherAlice’sfirstcoinisusedorAl-
ice’s second coin is used. We can then compute the weight
ofevidenceforAlice’scoinbeingusedwithrespecttoeach
evidence space. This gives us a range of possible weights
of evidence, which can be used for decision making in a
way that seems most appropriate for the problem at hand
(by considering the max, the min, or some other function
of the range).
The advantage of this approach is that it allows us to con-
sider cases where there are correlations between the like-
lihood functions. For example, suppose that, in the robot
example, the robot’s sensor was manufactured at one of
two factories. The sensors at factory 1 are more reliable
than those of factory 2. Since the same sensor is used for
all readings, the appropriate evidence space either uses all
likelihood functions corresponding to factory 1 sensors, or
alllikelihoodfunctionscorrespondingtofactory2sensors.
Therestofthispaperisorganizedasfollows. InSection2,
we review Shafer’s approach to dealing with evidence. In
Section3,weshowhowtoextenditsoastodealwithsitu-
ation where the likelihood function is uncertain, and argue
that our approach is reasonable. In Section 4, we consider
how to combine evidence in this setting. We conclude in
Section 5. The proofs of our technical results are deferred
to the full paper.
2 Evidence: A Review
We briefly review the notion of evidence and its formal-
ization by Shafer [1982], using some terminology from
[Halpern and Pucella 2003b].
We start with a set Hof hypotheses, which we take to be
mutually exclusive and exhaustive; thus, exactly one hy-
pothesis holds at any given time. We also have a set Oof
observations , which can be understood as outcomes of ex-
periments that can be made. Finally, we assume that for
each hypothesis h∈ H, there is a probability µh(often
calledalikelihoodfunction )ontheobservationsin O. This
is formalized as an evidence space E= (H,O,µ), whereHandOare as above, and µis alikelihood mapping ,
which assigns to every hypothesis h∈ Ha probability
measure µ(h) =µh. (For simplicity, we often write µh
forµ(h), when the former is clear fromcontext.)
Foranevidencespace E,theweightofevidenceforhypoth-
esish∈ Hprovidedbyobservation ob,written wE(ob, h),
is
wE(ob, h) =µh(ob)/summationtext
h/prime∈Hµh/prime(ob). (1)
The weight of evidence wEis not defined by (1) for an ob-
servation obsuch that/summationtext
h∈Hµh(ob) = 0. Intuitively, this
means that the observation obis impossible. In the litera-
tureonevidenceitistypicallyassumedthatthiscasenever
arises. More precisely, it is assumed that all observations
are possible, so that for every observation ob, there is an
hypothesis hsuch that µh(ob)>0. For simplicity, we
make the same assumption here. (We remark that in some
application domains this assumption holds because of the
structureofthedomain,withoutneedingtobeassumedex-
plicitly; see [Halpern and Pucella 2003b] for an example.)
The measure wEalways lies between 0 and 1, with 1 in-
dicating that the observation provides full evidence for the
hypothesis. Moreover, for each fixed observation obfor
which/summationtext
h∈Hµh(ob)>0,/summationtext
h∈HwE(ob, h) = 1,andthus
theweightofevidence wElookslikeaprobabilitymeasure
for each ob. While this has some useful technical conse-
quences, one should not interpret wEas a probability mea-
sure. It is simply a way to assign a weight to hypotheses
given observations, and, as we shall soon see, can be seen
as a way to update a prior probability on the hypotheses
into a posterior probability on those hypotheses, based on
the observations made.
Example 2.1 In Example 1.1, the set Hof hypotheses is
{A, B};theset Oofobservationsissimply {heads ,tails},
the possible outcomes of a coin toss. From the discussion
following the description of the example, it follows that µ
assigns the following likelihood functions to the hypothe-
ses: since µA(heads )istheprobabilitythatthecoinlanded
heads if it is Alice’s coin (i.e., if it is double-headed), then
µA(heads ) = 1andµA(tails ) = 0. Similarly, µB(heads )
is the probability that the coin lands heads if it is fair, so
µB(heads ) = 1/2andµB(tails ) = 1/2. Thiscanbesum-
marized by the following table:
µA B
heads 1 1/2
tails 0 1/2
Let
E= ({A, B},{heads ,tails},µ).
A straightforward computation shows that
wE(heads , A) = 2 /3andwE(heads , B) = 1 /3. In-
tuitively, the coin landing heads provides more evidence
for the hypothesis Athan the hypothesis B. Similarly,
w(tails, A) = 0andw(tails, A) = 1. Thus, the coin
landing tail indicates that the coin must be fair. This
information can be represented by the following table:
wEA B
heads 2/3 1/3
tails 0 1
/intersectionsq /unionsq
It is possible to interpret the weight function was a pre-
scription for how to update a prior probability on the hy-
potheses into a posterior probability on those hypotheses,
after having considered the observations made [Halpern
and Fagin 1992]. There is a precise sense in which wEcan
beviewedasafunctionthatmapsapriorprobability µ0on
the hypotheses Hto a posterior probability µobbased on
observing ob, by applying Dempster’s Rule of Combina-
tion [Shafer 1976]. That is,
µob=µ0⊕wE(ob,·), (2)
where ⊕combines two probability distributions on Hto
get a new probability distribution on Has follows:
(µ1⊕µ2)(H) =/summationtext
h∈Hµ1(h)µ2(h)/summationtext
h∈Hµ1(h)µ2(h).(3)
(Strictly speaking, ⊕is defined for set functions, that is,
functions with domain 2H. We have defined wE(ob,·)as
a function with domain H, but is is clear from (3) that this
is all that is really necessary to compute µ0⊕wE(ob,·)in
our case.)
Bayes’ Rule is the standard way of updating a prior prob-
ability based on an observation, but it is only applicable
when we have a joint probability distribution on both the
hypotheses and the observations, something which we did
not assume we had. Dempster’s Rule of Combination es-
sentially “simulates” the effects of Bayes’s rule. The rela-
tionshipbetweenDempster’sRuleandBayes’Ruleismade
precise by the following well-known theorem.
Proposition 2.2 [Halpern and Fagin 1992] LetE=
(H,O,µ)beanevidencespace. Supposethat Pisaprob-
abilityon H×Osuchthat P(H×{ ob}|{h}×O ) =µh(ob)
for all h∈ Hand all ob∈ O. Let µ0be the probabil-
ity on Hinduced by marginalizing P; that is, µ0(h) =
P({h} × O ). For ob∈ O,letµob=µ0⊕wE(ob,·). Then
µob(h) =P({h} × O|H × { ob}).
In other words, when we do have a joint probability on
the hypotheses and observations, then Dempster’s Rule of
Combination gives us the same result as a straightforward
application of Bayes’ Rule.3 Evidence with Uncertain Likelihoods
In Example 1.1, each of the two hypotheses AandBde-
termines a likelihood function. However, in Example 1.2,
the hypothesis Adoes not determine a likelihood function.
By viewing it as the compound hypothesis {A1, A2}, as
we did in the introduction, we can construct an evidence
space with a set {A1, A2, B}of hypotheses. We then get
the following likelihood mapping µ:
µA1A2B
heads 1 3 /4 1/2
tails 0 1 /4 1/2
Taking
E= ({A1, A2, B},{heads ,tails},µ),
we can compute the following weights of evidence:
wEA1A2B
heads 4/9 1/3 2/9
tails 0 1 /3 2/3
If we are now given prior probabilities for A1,A2, and
B, we can easily use Proposition 2.2 to compute posterior
probabilitiesforeachoftheseevents,andthenaddthepos-
terior probabilities of A1andA2to get a posterior proba-
bility for A.
But what if we are given only a prior probability µ0
forAandB, and are not given probabilities for A1
andA2? As observed in the introduction, if we define
wE(heads , A) =wE(heads , A1) +wE(heads , A2) = 7/9,
and then try to compute the posterior probability of A
given that heads is observed by naively applying the equa-
tion in Proposition 2.2, that is, taking by µheads (A) =
(µ0⊕wE(heads ,·))(A), we get an inappropriate answer.
In particular, the answer is not the posterior probability in
general.
To make this concrete, suppose that µ0(A) =.01. Then,
as observed in the introduction, a naive application of this
equation suggests that the posterior probability of Ais
7/205. But suppose that in fact µ0(A1) =αfor some
α∈[0, .01]. Then applying Proposition 2.2, we see
thatµheads (A1) =α(4/9)/(α(4/9) + ( .01−α)(1/3) +
.99(2/9) = 4 α/(α+ 2.01). It is easy to check that
4α/(α+ 2.01) = 7 /205iffα= 1407 /81300. That is,
the naive application of the equation in Proposition 2.2 is
correct only if we assume a particular (not terribly reason-
able) value for the prior probability of A1.
Wenowpresentoneapproachtodealingwiththeproblem,
and argue that it is reasonable.
Define a generalized evidence space to be a tuple G=
(H,O,∆), where ∆is a set of likelihood mappings. Note
for future reference that we can associate with the gen-
eralized evidence space G= (H,O,∆)the set S(G) =
{(H,O,µ)|µ∈∆}of evidence spaces. Thus, given a
generalized evidence space G, we can define the general-
ized weight of evidence wGto be the set {wE:E ∈ S (G)}
of weights of evidence. We often treat wGas a set-valued
function, writing wG(ob, h)for{w(ob, h)|w∈wG}.
Justaswecancombineapriorwiththeweightofevidence
to get a posterior in a standard evidence spaces, given a
generalized evidence space, we can combine a prior with
a generalized weight of evidence to get a set of posteri-
ors. Given a prior probability µ0on a set Hof hypotheses
and a generalizedweight of evidence wG, letPµ0,obbe the
set of posterior probabilities on Hcorresponding to an ob-
servation oband prior µ0, computed according to Proposi-
tion 2.2:
Pµ0,ob={µ0⊕w(ob,·)|w∈wG}.(4)
Example 3.1 The generalized evidence space for Exam-
ple 1.2, where Alice’s coin is unknown, is
G= ({A, B},{heads ,tails},{µ1,µ2}),
where µ1(A) = µA1,µ2(A) = µA2, and µ1(B) =
µ2(B) =µB. Thus, the first likelihood mapping corre-
sponds to Alice’s coin being double-headed, and the sec-
ond corresponds to Alice’s coin being biased 3/4towards
heads. Then wG={w1, w2}, where w1(heads , A) = 2/3
andw2(heads , A) = 3 /5. Thus, if µ0(A) = α, then
Pµ0,heads (A) ={3α
α+2,2α
α+1}. /intersectionsq /unionsq
We have now given two approaches for capturing the sit-
uation in Example 1.2. The first involves refining the set
of hypotheses —that is, replacing the hypothesis AbyA1
andA2—and using a standard evidence space. The second
involves using a generalized evidence space. How do they
compare?
To make this precise, we need to first define what a refine-
ment is. We say that the evidence space (H/prime,O,µ/prime)re-
fines, oris a refinement of , the generalized evidence space
(H,O,∆)if there exists a surjection g:H/prime→ Hsuch
thatµ∈∆if and only if, for all h∈ H, there exists
some h/prime∈g−1(h)such that µ(h) =µ/prime(h/prime). That is,
taking Ph={µ/prime(h/prime)|h/prime∈g−1(h)}, we must have
∆ =/producttext
h∈HPh. Intuitively, the hypothesis h∈ His re-
finedtothesetofhypothesis g−1(h)⊆ H/prime;moreover,each
likelihoodfunction µ(h)inalikelihoodmapping µ∈∆is
thelikelihoodfunction µ/prime(h/prime)forsomehypothesis h/primerefin-
ingh. For example, the evidence space Eat the beginning
of this section (corresponding to Example 1.2) is a refine-
ment of the generalized evidence space Gin Example 3.1;
the required surjection g:{A1, A2, B} → { A, B}maps
A1andA2toA, andBtoB. A prior µ/prime
0onH/primeextendsa
priorµ0onHif for all h,
µ/prime
0(g−1(h)) =µ0(h).LetExt(µ0)consistofallpriorson H/primethatextend µ0. Re-
call that given a set Pof probability measures, the lower
probability P∗(U)of a set Uisinf{µ(U)|µ∈ P}and its
upper probability P∗(U)issup{µ(U)|µ∈ P}[Halpern
2003].
Proposition 3.2 LetG= (H,O,∆)be a generalized evi-
dence space and let E= (H/prime,O,µ)be a refinement of G.
For all ob∈ Oand all h∈ H, we have
(Pµ0,ob)∗(h) =
{µ/prime
0⊕wE(ob,·)|µ/prime
0∈Ext(µ0)}∗(g−1(h))
and
(Pµ0,ob)∗(h) =
{µ/prime
0⊕wE(ob,·)|µ/prime
0∈Ext(µ0)}∗(g−1(h)).
In other words, if we consider the sets of posteriors ob-
tained by either (1) updating a prior probability µ0by the
generalized weight of evidence of an observation in Gor
(2)updatingthesetofpriorsextending µ0bytheweightof
evidenceofthesameobservationin E,theboundsonthose
two sets are the same. Therefore, this proposition shows
that, given a generalized evidence space G, if there an ev-
idence space Ethat refines it, then the weight of evidence
wGgives us essentially the same information as wE. But is
therealwaysanevidencespace Ethatrefinesageneralized
evidence space? That is, can we always understand a gen-
eralized weight of evidence in terms of a refinement? As
we now show, we cannot always do this.
LetGbe a generalized evidence space (H,O,∆). Note
that if Erefines Gthen, roughly speaking, the likelihood
mappings in ∆consist of all possible ways of combining
thelikelihoodfunctionscorrespondingtothehypothesesin
H. We now formalize this property. A set ∆of likelihood
mappings is uncorrelated if there exist sets of probability
measures Phfor each h∈ Hsuch that
∆ =/productdisplay
h∈HPh={µ|µ(h)∈ Phfor all h∈ H}.
(We say ∆iscorrelated if it is not uncorrelated.) A gen-
eralized evidence space (H,O,∆)is uncorrelated if ∆is
uncorrelated.
Observe that if (H/prime,O,µ/prime)refines (H,O,∆), then
(H,O,∆)is uncorrelated since, as observed above, ∆ =/producttext
h∈HPh, where Ph={µ/prime(h/prime)|h/prime∈g−1(h)}. Not only
iseveryrefinementuncorrelated,buteveryuncorrelatedev-
idence space can be viewed asa refinement.
Proposition 3.3 LetGbe a generalized evidence space.
There exists an evidence space Ethat refines Gif and only
ifGis uncorrelated.
Thus, if a situation can be modeled using an uncorrelated
generalizedevidencespace,thenitcanalsobemodeledby
refining the set of hypotheses and using a simple evidence
space. The uncorrelated case has a further advantage. It
leads to simple formula for calculating the posterior in the
special case that there are only two hypotheses (which is
the case that has been considered most often in the litera-
ture, often to the exclusion of other cases).
Givenageneralizedevidencespace G= (H,O,∆)andthe
corresponding generalized weight of evidence wG, we can
defineupperandlowerweightsofevidence,determinedby
themaximumandminimumvaluesintherange,somewhat
analogous to the notions of upper and lower probability.
Define the upper weight of evidence function wGby taking
wG(ob, h) = sup {w(ob, h)|w∈wG}.
Similarly, define the lower weight of evidence function wG
by taking
wG(ob, h) = inf {w(ob, h)|w∈wG}.
These upper and lower weights of evidence can be used to
computetheboundsontheposteriorsobtainedbyupdating
a prior probability via the generalized weight of evidence
of an observation, in the case where Gis uncorrelated, and
when there are two hypotheses.
Proposition 3.4 LetG= (H,O,∆)be an uncorrelated
generalized evidence space.
(a) The following inequalities hold:
(Pµ0,ob)∗(h)≤
wG(ob, h)µ0(h)
wG(ob, h)µ0(h) +/summationtext
h/prime/negationslash=hwG(ob, h/prime)µ0(h/prime);(5)
(Pµ0,ob)∗(h)≥
wG(ob, h)µ0(h)
wG(ob, h)µ0(h) +/summationtext
h/prime/negationslash=hwG(ob, h/prime)µ0(h/prime).(6)
If|H|= 2, we get equalities in (5)and(6).
(b) The following equalities hold:
wG(ob, h) =(Ph)∗(ob)
(Ph)∗(ob) +/summationtext
h/prime/negationslash=h(Ph/prime)∗(ob);
wG(ob, h) =(Ph)∗(ob)
(Ph)∗(ob) +/summationtext
h/prime/negationslash=h(Ph/prime)∗(ob),
where Ph={µ(h)|µ∈∆}, for all h∈ H.Thus, if have an uncorrelated generalized evidence space
with two hypotheses, we can compute the bounds on the
posteriors Pµ0,obin terms of upper and lower weights of
evidenceusingProposition3.4(a),whichconsistsofequal-
ities in that case. Moreover, we can compute the upper
andlowerweightsofevidenceusingProposition3.4(b). As
we now show, the inequalities in Proposition 3.4(a) can be
strict if there are more thantwo hypotheses.
Example 3.5 LetH={D, E, F }andO={X, Y}, and
consider the two probability measures µ1andµ2, where
µ1(X) = 1 /3andµ2(X) = 2 /3. Let G= (H,O,∆),
where ∆ ={µ|µ(h)∈ {µ1, µ2}}. Clearly, ∆isuncorre-
lated. Let µ0be the uniform prior on H, so that µ0(D) =
µ0(E) =µ0(F) = 1/3. Using Proposition 3.4(b), we can
compute that the upper and lower weights of evidence are
as given in the following tables:
wED E F
X 1/2 1/2 1/2
Y 1/2 1/2 1/2wED E F
X 1/5 1/5 1/5
Y 1/5 1/5 1/5
The uniform measure is the identity for ⊕, and there-
foreµ0⊕w(ob,·) =w(ob,·). It follows that Pµ0,X=
{w(X,·)|w∈wG}. Hence, (Pµ0,X)∗(D) = 1 /2and
(Pµ0,X)∗(D) = 1 /5. But the right-hand sides of (5) and
(6) are 5/9and 1/6, respectively, and similarly for hy-
potheses EandF. Thus, in this case, the inequalities in
Proposition 3.4(a) are strict. /intersectionsq /unionsq
While uncorrelated generalized evidence spaces are cer-
tainlyofinterest,correlatedspacesariseinnaturalsettings.
Toseethis,firstconsiderthefollowingsomewhatcontrived
example.
Example 3.6 Consider the following variant of Exam-
ple1.2. Alicehastwocoins,onethatisdouble-headedand
one that is biased 3/4 towards heads, and chooses which
one to give Zoe. Bob also has two coins, one that is fair
and onethat is biased 2/3towards tails, andchooses which
one to give Zoe. Zoe chooses one of the two coins she was
givenandtossesit. Thehypothesesare {A, B}andtheob-
servations are {heads ,tails}, as in Example 1.2. The like-
lihood function µ1for Alice’s double-headed coin is given
byµ1(heads ) = 1,whilethelikelihoodfunction µ2forAl-
ice’s biased coin is given by µ2(heads ) = 3 /4. Similarly,
the likelihood function µ3for Bob’s fair coin is given by
µ3(heads ) = 1/2,andthelikelihoodfunction µ4forBob’s
biased coin is given by µ4(heads ) = 1/3.
If Alice and Bob each make their choice of which coin to
give Zoe independently, we can use the following general-
ized evidence space to modelthe situation:
G1= ({A, B},{heads ,tails},∆1),
where
∆1={(µ1, µ3),(µ1, µ4),(µ2, µ3),(µ2, µ4)}.
Clearly, ∆1is uncorrelated, since it is equal to {µ1, µ2} ×
{µ3, µ4}.
On the other hand, suppose that Alice and Bob agree be-
forehand that either Alice gives Zoe her double-headed
coin and Bob gives Zoe his fair coin, or Alice gives Zoe
her biased coin and Bob gives Zoe his biased coin. This
situation can be modeled using the following generalized
evidence space:
G2= ({A, B},{heads ,tails},∆2),
where
∆2={(µ1, µ3),(µ2, µ4)}.
Here, note that ∆2is a correlated set of likelihood map-
pings. /intersectionsq /unionsq
While this example is artificial, the example in the intro-
duction, where the robot’s sensors could have come from
eitherfactory1orfactory2,isaperhapsmorerealisticcase
wherecorrelatedevidencespacesarise. Thekeypointhere
isthattheseexamplesshowthatweneedtogobeyondjust
refining hypotheses to capturea situation.
4 Combining Evidence
AnimportantpropertyofShafer’s[1982]representationof
evidence is that it is possible to combine the weight of ev-
idence of independent observations to obtain the weight of
evidence of a sequence of observations. The purpose of
this section is to show that our framework enjoys a simi-
lar property, but, rather unsurprisingly, new subtleties arise
due to the presence of uncertainty. For simplicity, in this
section we concentrate exclusively on combining the evi-
dence of a sequence of two observations; the general case
follows in a straightforward way.
Recall how combining evidence is handled in Shafer’s ap-
proach. Let E= (H,O,µ)be an evidence space. We de-
finethelikelihoodfunctions µhonpairsofobservations,by
taking µh(/angbracketleftob1,ob2/angbracketright) =µh(ob1)µh(ob2). Inotherwords,
theprobabilityofobservingaparticularsequenceofobser-
vations given his the product of the probability of making
each observation in the sequence. Thus, we are implicitly
assuming that the observations are independent. It is well
known (see, for example, [Halpern and Fagin 1992, Theo-
rem4.3])thatDempster’sRuleofCombinationcanbeused
to combine evidence; that is,
wE(/angbracketleftob1,ob2/angbracketright,·) =wE(ob1,·)⊕wE(ob2,·).
If we let µ0be a prior probability on the hypotheses, and
µ/angbracketleftob1,ob2/angbracketrightbe the probability on the hypotheses after ob-
serving ob1andob2, we can verify that
µ/angbracketleftob1,ob2/angbracketright=µ0⊕wE(/angbracketleftob1,ob2/angbracketright,·).Here we are assuming that exactly one hypothesis holds,
and it holds each time we make an observation. That is, if
Zoepicksthedouble-headedcoin,sheusesitforbothcoin
tosses.
Example 4.1 Recall Example 2.1, where Alice just has a
double-headed coin and Bob just has a fair coin. Suppose
that Zoe, after being given the coins and choosing one of
them, tosses it twice, and it lands heads both times. It is
straightforward to compute that
wEA B
/angbracketleftheads ,heads /angbracketright4/5 1/5
/angbracketleftheads ,tails/angbracketright 0 1
/angbracketlefttails,heads /angbracketright 0 1
/angbracketlefttails,tails/angbracketright 0 1
Not surprisingly, if either of the observations is tails, the
coin cannot be Alice’s. In the case where the observations
are/angbracketleftheads ,heads /angbracketright, the evidence for the coin being Alice’s
(that is, double-headed) is greater than if a single heads is
observed, since from Example 2.1, wE(heads , A) = 2 /3.
This agrees with our intuition that seeing two heads in a
rowprovidesmoreevidenceforacointobedouble-headed
than if a single heads is observed. /intersectionsq /unionsq
How should we combine evidence for a sequence of ob-
servations when we have a generalized evidence space?
That depends on how we interpret the assumption that the
“same” hypothesis holds for each observation. In a gener-
alized evidence space, we have possibly many likelihood
functions for each hypothesis. The real issue is whether
we use the same likelihood function each time we evaluate
anobservation,orwhetherwecanuseadifferentlikelihood
functionassociatedwiththathypothesis. Thefollowingex-
amples show that this distinction can be critical.
Example 4.2 Consider Example 1.2 again, where Alice
has two coins (one double-headed, one biased toward
heads), and Bob has a fair coin. Alice chooses a coin and
givesittoZoe;BobgiveshiscointoZoe. Asweobserved,
there are two likelihood functions in this case, which we
called w1andw2;w1corresponds to Alice’s coin being
double-headed, and w2corresponds to the coin being bi-
ased 3/4towards heads. Suppose that Zoe tosses the coin
twice. Since she is tossing the same coin, it seems most
appropriate to consider the generalized weight of evidence
{w/prime|w/prime(/angbracketleftob1,ob2/angbracketright,·) =wi(ob1,·)⊕wi(ob2,·),
i∈ {1,2}}.
On the other hand, suppose Zoe first chooses whether she
will always use Alice’s or Bob’s coin. If she chooses Bob,
then she obviously uses his coin for both tosses. If she
chooses Alice, before each toss, she asks Alice for a coin
and tosses it; however, she does not have to use the same
coin of Alice’s for each toss. Now the likelihood function
associated with each observation can change. Thus, the
appropriate generalized weight of evidence is
{w/prime|w/prime(/angbracketleftob1,ob2/angbracketright,·) =wi(ob1,·)⊕wj(ob2,·),
i, j∈ {1,2}}.
/intersectionsq /unionsq
Fundamentally, combining evidence in generalized evi-
dencespacesreliesonDempster’sruleofcombination,just
likeinShafer’sapproach. However,asExample4.2shows,
the exact details depends on our understanding of the ex-
periment. While the first approach used in Example 4.2
seems more appropriate in most cases that we could think
of,wesuspectthattherewillbecaseswheresomethinglike
the second approach may be appropriate.
5 Conclusion
In the literature on evidence, it is generally assumed that
there is a single likelihood function associated with each
hypothesis. There are natural examples, however, which
violate this assumption. While it may appear that a sim-
ple step of refining the set of hypotheses allows us to use
standardtechniques,wehaveshownthatthisapproachcan
lead to counterintuitive results when evidence is used as
a basis for making decisions. To solve this problem, we
proposed a generalization of a popular approach to rep-
resenting evidence. This generalization behaves correctly
underupdating,andgivesthesameboundsontheposterior
probability as that obtained by refining the set of hypothe-
ses when there is no correlation between the various likeli-
hood functions for the hypotheses. As we show, this is the
onesituationwherewecanidentifyageneralizedevidence
space with the space obtained by refining the hypotheses.
One advantage of our approach is that we can also reason
about situations where the likelihood functions are corre-
lated, something that cannot be done by refining the set of
hypotheses.
We have also looked at how to combine evidence in a gen-
eralized evidence space. While the basic ideas from stan-
dard evidence spaces carry over, that is, the combination
is essentially obtained using Dempster’s rule of combina-
tion, the exact details of how this combination should be
performed depend on the specifics of how the likelihood
functionschangeforeachobservation. Amoredetaileddy-
namicmodelwouldbehelpfulinunderstandingthecombi-
nation of evidence in a generalized evidence space setting;
we leave this exploration for future work.
Acknowledgments
WorksupportedinpartbyNSFundergrantsCTC-0208535
and ITR-0325453, by ONR under grant N00014-02-1-
0455, by the DoD Multidisciplinary University ResearchInitiative(MURI)programadministeredbytheONRunder
grants N00014-01-1-0795 and N00014-04-1-0725, and by
AFOSRundergrantF49620-02-1-0101. Thesecondauthor
wasalsosupportedinpartbyAFOSRgrantsF49620-00-1-
0198andF49620-03-1-0156,NationalScienceFoundation
Grants9703470and0430161,andONRGrantN00014-01-
1-0968.
References
Good, I. J. (1950). Probability and the Weighing of Evi-
dence. Charles Griffin & Co. Ltd.
Halpern, J. Y. (2003). Reasoning About Uncertainty .
MIT Press.
Halpern, J. Y. and R. Fagin (1992). Two views of be-
lief: belief as generalized probability and belief as
evidence. Artificial Intelligence 54 , 275–317.
Halpern, J. Y. and R. Pucella (2003a). A logic for rea-
soning about evidence. In Proc. 19th Conference on
Uncertainty in Artificial Intelligence (UAI’03) , pp.
297–304.
Halpern, J. Y. and R. Pucella (2003b). Probabilistic al-
gorithmic knowledge. In Proc. 9th Conference on
Theoretical Aspects of Rationality and Knowledge
(TARK’03) , pp. 118–130.
Kyburg, Jr., H. E. (1983). Recent work in inductive
logic. In T. Machan and K. Lucey (Eds.), Recent
Work in Philosophy , pp. 87–150. Rowman & Allan-
held.
Shafer, G. (1976). A Mathematical Theory of Evidence .
Princeton University Press.
Shafer,G.(1982).Belieffunctionsandparametricmod-
els (with commentary). Journal of the Royal Statis-
tical Society, Series B 44 , 322–352. |
09f60003-aa1c-4e9c-89ed-8e1028ccdb70 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | How to pursue a career in technical AI alignment
**This guide is written for people who are considering direct work on technical AI alignment.** I expect it to be most useful for people who are not yet working on alignment, and for people who are already familiar with the arguments for working on AI alignment. If you aren’t familiar with the arguments for the importance of AI alignment,you can get an overview of them by reading [Why AI alignment could be hard with modern deep learning](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/) (Cotra, 2021) and one of [The Most Important Century Series](https://www.cold-takes.com/most-important-century/) (Karnofsky, 2021) and [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) (Ngo, 2019).
**It might not be best for you to work on technical AI alignment.** You can have a large impact on reducing existential risk from AI by working on AI strategy, governance, policy, security, forecasting, support roles, field-building, grant-making, and governance of hardware. That’s not counting other areas, such as bio-risk. It is probably better to do great work in one of those areas than mediocre technical alignment work, because impact is heavy-tailed. One good exercise is to go through Holden Karnofsky’s aptitudes [podcast](https://80000hours.org/podcast/episodes/holden-karnofsky-building-aptitudes-kicking-ass/)/[post](https://forum.effectivealtruism.org/posts/bud2ssJLQ33pSemKH/my-current-impressions-on-career-choice-for-longtermists), and think about which of the aptitudes you might be able to become great at. Then ask yourself or others how you could use those aptitudes to solve the problems you care about. I also recommend applying to [speak with 80,000 Hours](https://80000hours.org/speak-with-us/).
**I’ll probably be wrong but I might be helpful.** Feedback was broadly positive, but I wouldn’t be surprised if some people think that this guide is net-negative. For example, because it pushes people toward/away from theoretical research, or empirical research, or ML engineering, or getting a PhD. I have tried to communicate my all-things-considered view here, after integrating feedback. But I can only suggest that you try to [form your own view](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Forming_your_own_views_on_alignment_is_important_when_you_have_control_over_the_direction_of_your_work) on what’s best for you to do, and take this guide as one input to that process.
**I had lots of help.** Neel Nanda helped me start this project. I straight-up copied stuff from Rohin Shah, Adam Gleave, Neel Nanda, Dan Hendrycks, Catherine Olsson, Buck Shlegeris, and Oliver Zhang. I got great feedback from Adam Gleave, Arden Koehler, Rohin Shah, Dan Hendrycks, Neel Nanda, Noa Nabeshima, Alex Lawson, Jamie Bernardi, Richard Ngo, Mark Xu, Andy Jones, and Emma Abele. I wrote most of this at Wytham Abbey, courtesy of Elizabeth Garrett.
Types of alignment work
=======================
(The following is almost all copied from Rohin Shah’s [Career FAQ](https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/).)
For direct technical alignment research aimed at solving the problem (i.e. ignoring meta work, field building, AI governance, etc), these are the rough paths:
1. **Research Lead (theoretical):** These roles come in a variety of types (industry, nonprofit, academic, or even independent). You are expected to propose and lead research projects; typically ones that can be answered with a lot of thinking and writing in Google Docs/LaTeX, and maybe a little bit of programming. Theoretical alignment work can be more conceptual or more mathematical—the output of math work tends to be a proof of a theorem or a new mathematic framework, whereas in conceptual work math is used as one (very good) tool to tell if a problem has been solved. Conceptual work is more philosophical. A PhD is not required but is helpful. Relevant skills: *extremely* strong epistemics and research taste, strong knowledge of AI alignment; this is particularly important due to the lack of feedback loops from reality.
2. **Research Contributor (theoretical):** These roles are pretty rare; as far as I know they are only available at [ARC](https://alignment.org/) [as of May 2022]. You should probably just read their [hiring post](https://www.alignmentforum.org/posts/dLoK6KGcHAoudtwdo/arc-is-hiring).
3. **Research Lead (empirical):** Besides academia, these roles are usually available in industry orgs and similar nonprofits, such as DeepMind, OpenAI, Anthropic, and Redwood Research. You are expected to propose and lead research projects; typically ones that involve achieving or understanding something new with current ML systems. A PhD is not strictly required but in practice most Research Leads have one. Relevant skills: strong research taste, strong knowledge of AI alignment and ML, moderate skill at programming and ML engineering.
4. **Research Contributor (empirical):** These roles are usually available at industry orgs or similar nonprofits, such as DeepMind, OpenAI, Anthropic, and Redwood Research. You are expected to work on a team to execute on research projects proposed by others. A PhD is *not* required. Relevant skills: strong skill at programming, moderate research taste, moderate knowledge of AI alignment, jobs vary in how much they require skill at ML engineering (but most require strong skill).
5. **Professor:** This is a specific route for either of the “Research Lead” career paths, but with additional requirements: as an academic, you are not only expected to propose and lead a research agenda, but also to take on and mentor grad students in pursuit of that research agenda, to teach classes, etc. A PhD is required; that’s the clear first step on this career path. Relevant skills: strong research taste, strong AI knowledge, moderate technical communication. Programming ability and ML ability is typically *not* tested or required, though they are usually needed to be successful during the PhD.
6. **Software Engineer:** Many organizations can also [benefit from strong software engineers](https://www.lesswrong.com/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers) — for example, by creating frameworks for working with large neural nets that don’t fit on a GPU, or by reorganizing codebases to make them cleaner and more modular to enable faster experimentation. However, I expect you should only aim for this if you already have these skills (or can gain them quickly), or if for some reason you think you could become a world-class expert in these areas but not in any of the other paths.
The main difference between research leads and research contributors is that the research leads are expected to add value primarily by choosing and leading good research projects, while the research contributors are expected to add value primarily by executing projects quickly. However, it isn’t feasible to fully separate these two activities, and so [research] leads still need to have some skill in executing projects, and contributors still need to have some skill in choosing how to move forward on a project. Some orgs like DeepMind make the difference explicit (“Research Scientist” and “Research Engineer” titles), while others like OpenAI [Anthropic] do not (“Member of Technical Staff” title).
The main reason I carve up roles as “lead” vs “contributor” is that as far as I can tell, “lead” roles tend to be filled by people with PhDs. DeepMind explicitly requires PhDs for the Research Scientist role, but *not* for the Research Engineer role. (Both roles are allowed to lead projects, if they can convince their manager and collaborators that it is worth pursuing, but it’s only an explicit expectation for Research Scientists.) Other orgs don’t have a PhD as an explicit requirement, but nonetheless it seems like most people who end up choosing and leading research projects have PhDs anyway. I think this is because PhDs are teaching research skills that are hard to learn by other routes.
I don’t want to emphasize this too much — it is still possible to lead projects without a PhD. In April 2022, I could name 10 people without PhDs whose work was best categorized as “Research Lead”, who seemed clearly worth funding. (Note that “clearly worth funding without a PhD” doesn’t necessarily mean the PhD is a bad choice: for several of these people, it’s plausible to me that they would do much better work in 5 years time if they got a PhD instead of doing the things they are currently doing.)
What type of alignment work should you do?
==========================================
**I don’t have a strong view on what type of alignment work is most valuable, so I’ll mostly focus on personal fit.** There is widespread disagreement in the community about the relative value of different work. However, the main decision you’ll have to make early on is whether, if at all, to pursue empirical or theoretical alignment work. And I think most people believe there’s good work to be done in both camps. If that’s true, it means you can probably just focus on becoming excellent at either theoretical or empirical work based on your personal fit, while you [form your own view](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Forming_your_own_views_on_alignment_is_important_when_you_have_control_over_the_direction_of_your_work) about what specific theoretical/empirical alignment work is worth doing.
However, I think most people agree that if you can become a research lead who can set good, novel research agendas, then you should do that. You’ll need to have strong research taste and [end-to-end thinking on AI alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Forming_your_own_views_on_alignment_is_important_when_you_have_control_over_the_direction_of_your_work), which is a high bar. [Paul Christiano](https://axrp.net/episode/2021/12/02/episode-12-ai-xrisk-paul-christiano.html) and [Chris Olah](https://80000hours.org/podcast/episodes/chris-olah-interpretability-research/) are examples of people who did this.
### High-level heuristics for choosing which work to do
**If you’re already a strong software engineer, consider applying to non-ML roles immediately, or** [**retraining as an ML engineer**](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#How_to_pursue_research_contributor__ML_engineering__roles)**.** Some engineering work on alignment teams doesn’t require ML knowledge. For example, creating frameworks for working with large neural nets that don’t fit on a GPU, or reorganizing codebases to make them cleaner and more modular to enable faster experimentation. Some ML engineering roles might not even require experience with ML if you’re a sufficiently strong software engineer. That is at least the case at Anthropic: “Lots of history writing code and learning from writing code is the hard part. ML is the easy bit, we can teach that.” I suggest reading [AI Safety Needs Great Engineers](https://www.alignmentforum.org/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers), [DeepMind is hiring for the scalable alignment and alignment teams](https://www.lesswrong.com/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment), and 80,000 Hours’ [Software Engineering career review](https://80000hours.org/career-reviews/software-engineering/).
**To the extent that you think you might enjoy machine learning and coding, consider looking into** [**How to pursue empirical alignment work**](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#How_to_pursue_empirical_alignment_work)**.** You can test whether you like ML and coding by learning [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning). The early steps for research leads and research contributors are similar, so you can pursue those steps while figuring out which is better for you.
**To the extent that you love theory, have or could get a very strong math/theoretical CS background, and think you might enjoy building end-to-end models of AI alignment, consider looking into** [**How to pursue theoretical alignment work**](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#How_to_pursue_theoretical_alignment_work).
### Some things to keep in mind when exploring different paths
**Pay attention to whether you're enjoying yourself and growing and flourishing and kicking ass. But don’t give up immediately if you’re not.** Enjoying yourself is [really important](https://lynettebye.com/blog/2022/5/25/do-you-need-to-love-your-work), especially for research. But often people enjoy things more as they gain more mastery, or think they should already be good and suffer until they get there. Often people have bad luck. If you're enjoying yourself and kicking ass then that's a great sign. If you're not enjoying yourself and kicking ass after a while then consider switching to something else.
**Sometimes very capable people are** [**insecure**](https://80000hours.org/2022/04/imposter-syndrome/) **about how good they are, and miscalibrated about how good they could become.** Here are some more objective indications you can use to assess your fit:
* Empirical research leads: To have a decent shot at getting into a top-20 ML PhD programme, you’ll need (very roughly) a first author workshop paper and a ~3rd author conference paper at NeurIPS, ICML, or similar.
* Empirical research contributor: “As a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you ([DeepMind](https://www.lesswrong.com/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment))”. Grades matter less than people think.
* Theoretical research: If you think you could get into a top PhD programme in math or CS theory if you (had) optimized your undergrad for that purpose, that’s a good sign.
**Talk to people and ask them to honestly evaluate whether you're on track to do good technical work.** This is a good way to address the point above. Make it easy for them to tell you that you're not on track in worlds where you're not—for example, by emphasising to them how helpful it would be for you to switch to something you’re better at sooner. You could do this at [Effective Altruism Global](https://www.eaglobal.org/), or by [talking to 80,000 Hours](https://80000hours.org/speak-with-us/).
**Recommended resources**:
* 80,000 Hours article on [personal fit](https://80000hours.org/articles/personal-fit/).
* Holden Karnofsky’s aptitudes [podcast](https://80000hours.org/podcast/episodes/holden-karnofsky-building-aptitudes-kicking-ass/) and [post](https://forum.effectivealtruism.org/posts/bud2ssJLQ33pSemKH/my-current-impressions-on-career-choice-for-longtermists).
How to pursue alignment work
============================
This is a high-level section that gives context and high-level heuristics for pursuing different types of alignment work, with pointers to other places in the doc that go into more depth.
How to pursue empirical alignment work
--------------------------------------
The early steps for research leads and research contributors are similar, so you can pursue those steps while figuring out which is better for you. Whether you want to pursue research lead or research contributor roles will mostly depend on how much you like and are good at research, end-to-end thinking on alignment, and machine learning, relative to how much you like and are good at ML engineering. Also whether you want and are able to get into a top PhD programme. If you’re uncertain, I recommend learning [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning), doing some ML implementation, and trying to get some research experience (see the next section). Then assessing personal fit from there, which might include talking to people about your fit.
### Activities that are useful for both empirical research leads and contributors
**Everyone should learn** [**Basic deep learning**](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning)**:** You’ll need to learn basic Python coding, basic math (linear algebra, calculus, and probability), and get a basic understanding of deep learning (DL) models and how to implement them. DL is by far the dominant paradigm within machine learning, which in turn is the dominant paradigm within AI safety.I’ve included the best resources I know of in [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning).
**You’ll need to become a decent ML engineer, even if you want to become a research lead.** To become good at ML engineering, you’ll need to get experience implementing DL models.
* It helps if you’re already a good software engineer (SWE), and a SWE internship in your first year of undergrad might be a good investment, especially if there’s a good chance you’ll want to do ML engineering.
* A good way to get experience implementing DL models, after learning [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning), is to replicate a few foundational papers in a sub-field you might want to work in. See [How to pursue research contributor (ML engineering) roles](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#How_to_pursue_research_contributor__ML_engineering__roles) for details on how to do that. Paper replications are essential for contributor roles, and useful for lead roles. [Look for ways to demonstrate your competence](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Look_for_ways_to_demonstrate_your_competence), by open-sourcing your code and maybe writing a blog post on your work. You can apply for [funding](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Funding) to do paper replications.
* You can also get better at ML engineering by doing practical homeworks as part of a course, or through a [research internship](https://docs.google.com/document/d/1RFo7_9JVmt0z8RPwUjB-mUMgCMoUQmsaj2CM5aHvxCw/edit#heading=h.5jb2kteaxr1f). But how much you learn will depend strongly on the mentorship and the project: academia is a generally poor place to learn ML engineering: your peers likely won’t be excellent engineers, and your advisor probably won’t invest much in helping you.
* I also recommend applying to Redwood Research’s (competitive) [Machine Learning for Alignment Bootcamp](https://forum.effectivealtruism.org/posts/vvocfhQ7bcBR4FLBx/apply-to-the-second-iteration-of-the-ml-for-alignment) (MLAB). The deadline for the application has passed, but there might be future cohorts. Practising [leetcode](https://leetcode.com/) problems is probably useful for getting accepted.
**Research experience is essential for research leads, and useful for research contributors.**
* ML PhDs are very competitive nowadays, and publications and reference letters are the main determinants of whether you get in. To have a decent shot at getting into a top-20 PhD programme, you’ll need (very roughly) a first author workshop paper and a ~3rd author conference paper at NeurIPS, ICML, or similar. *Publications will also look good for ML engineering roles*. If you want to get a PhD, you should try to get as much research experience as you can before applying to PhD programmes, after learning [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning). For example, in the summer of your second and third years for four-year degrees, because publications from after you submit your application won’t count.
* Adam Gleave writes about how to get research experience [here](https://docs.google.com/document/d/1RFo7_9JVmt0z8RPwUjB-mUMgCMoUQmsaj2CM5aHvxCw/edit#heading=h.hfj52k67ycog). When reaching out to potential advisors for research internships, make sure to mention the stuff you’ve done (relevant courses you’ve taken, definitely any paper replications you’ve done, your GitHub if it shows you can code), your grades if they’re good. Aim to clearly distinguish yourself from people who spam professors with requests to do research with them. One way to do this is to read some of their papers and the main papers in their field, and mention that too. If your university can’t fund you to do research experience with them, you can apply for [funding](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Funding).
* Note that it is somewhat difficult to get two publications before your 4th year, and it will involve luck! If you don’t get into a strong programme right away, don’t get down on yourself. It might be worth taking a year or more after university to do research assistant work in order to get publications. You might be able to get [funding](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Funding) to do that.
* For research projects, find someone who supervises well and who has time for you (this tends not to be the most famous/cool professor). ML is a very new field, so some professors who ostensibly work on ML don’t do stuff that’s relevant to DL. Make sure they’re regularly publishing at top conferences. And work on a topic that your supervisor finds interesting to get lots of feedback: getting good mentorship is key, and pursuing your own ideas at this point is risky and usually means that you won’t get much mentorship. Don’t worry about working on something that helps with alignment. Take project graduate courses where you can—they will probably be graded leniently.
* I recommend [Research as a stochastic decision process (Steinhardt, 2019)](https://cs.stanford.edu/~jsteinhardt/ResearchasaStochasticDecisionProcess.html) for getting better at research execution.
**Learning** [**Machine learning**](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Machine_learning)**: how, and how much?** It’s easiest to learn by being immersed in a research environment, so it’s sensible to focus on learning enough ML to get to that point. That means having enough breadth to talk about the main areas of DL sensibly and know about the recent advances, and having depth in the area you want to go into. *You don’t need to learn all of ML to become part of a research environment*. Though research leads should probably eventually know a bunch of ML. You can get breadth by taking courses in the most important subfields of ML (see [Machine learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Machine_learning)), and using resources that curate and summarise/explain recent advances (see [Machine learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Machine_learning)). You can get depth by reading a bunch of a sub-field’s main papers (~10+, or until you get diminishing returns) and doing your own research, or practical homeworks, or [paper replications](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#How_to_pursue_research_contributor__ML_engineering__roles). You can see what areas people are interested in by looking at blogs of the labs you’re interested in working at, or by checking the [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/). If you can take ML courses for credits, that is probably a great idea. See [Machine learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Machine_learning) for more details.
**Learning** [**AI alignment**](AI alignment)**: how, and how much?** I recommend [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) (Ngo, 2020) and [My Overview of the AI Alignment Landscape](https://docs.google.com/document/d/1X3XyS6CtZShwaJHMxQBvgKPUs7qlt74WxhmNnSDesXE/edit?usp=sharing) (Nanda, 2022) to get started, then the [AGI safety fundamentals seminar programme](https://www.eacambridge.org/agi-safety-fundamentals) or similar alignment reading sometime after learning [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning). Learning AI alignment is a lot more important for research leads than research contributors—doing the stuff above is not sufficient for research leads and is not necessary for some research contributor roles, but it will likely be pretty useful for both. There’s much more detailed advice in [AI alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#AI_alignment).
### Whether and how to do a PhD
**If you want to be a research lead, the default path is to get a PhD.** However, it is also possible to start working as a research engineer and gradually transition toward a research lead role, though as a research engineer you’ll have less time for research activities than you would in a PhD programme. It is also possible to become a research lead without a PhD, if you do a residency program. It’s worth noting that the research-engineer boundary is dissolving at places like Anthropic and OpenAI. This is partially because they care less about the signalling of PhDs, and partially because their research leans relatively heavier on engineering (scaling) than on coming up with novel research directions. *The most important thing for becoming a good research lead is getting mentorship from a great researcher and being able to practice research in a good environment*. That’s most often achieved in a PhD but is sometimes possible in industry.
There is pretty widespread disagreement about how good PhDs are. My impression is that the bulk of the disagreement comes down to how effectively PhDs train research taste and skills that are useful for alignment research, and secondarily, how quickly people expect AGI will be developed—if 5 years then PhDs don’t look good—because they likely won’t do any useful work—if 15 years then it’s less of an issue. My understanding of the main benefit of a PhD is that it develops your research taste and skills so that when you graduate, ideally, you’re able to set and execute your own (good) alignment research agenda in industry (at an existing or new org) or in academia. Failing that, the idea is that you’d come away from a PhD with great research skills that help with alignment research. A PhD also opens some doors that ML engineering wouldn’t be able to, for example, research scientist roles at DeepMind or Google Brain.
**Here are some simplifying questions you can ask yourself to make the decision easier:**
* Do you want to mainly do research rather than ML engineering or software engineering? (If no, then don’t do a PhD.)
* Do you think you could publish a first author workshop paper and a ~3rd author conference paper at NeurIPS, ICML, or similar, while in undergrad, or shortly after undergrad (or after 6-12 months of full-time ML research)? (This is the bar for having a decent chance at getting into a top-20 programme; it’s higher for top-10 programmes) (If no, then maybe don’t shoot for a PhD, but I suggest actually trying out research first.)
* Would you feel a lot better making a lot more money? As an engineer you would probably make upwards of $100,000. You can [apply](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Funding) to get a top-up on a PhD salary, but I think it would be somewhat unusual to get a top-up to $100,000.
* Comparing specific options:
+ With some work, do you think you could get an ML engineering/research role where you could grow/flourish/get better mentorship than you expect to in a PhD? (If yes, then probably don’t do a PhD.) (You can ask people to help compare your options.) (You can speak to grad students of the professor and attend open days.)
+ Are you able to do work in the PhD that is directly or instrumental useful for alignment work; that has a path to impact? (If yes, then a PhD looks much better. If not, it looks worse.)
+ Are you going to be tied down to a PhD topic that you’re not confident in (few other desirable topics you could switch to at the university)? (If yes, then a PhD looks worse, because it’s quite common to want to switch.)
* PhDs can sometimes make people miserable. People enter into them because they’re the default path for people who did well in undergrad, they don’t really know what they’re getting into, and then they [burn out](https://www.benkuhn.net/grad/). *It’s useful to think in advance about whether that is more or less likely to happen to you.*
+ Do you expect to thrive in an unstructured environment with weak incentives (it’s hard to get fired), and potentially poor feedback loops and support, but lots of freedom? (If no, then probably don’t do a PhD.) ([Conscientiousness](https://www.outofservice.com/bigfive/) is probably a factor here.)
+ Here are some other factors that interact poorly with PhDs: high neuroticism, depression, anxiety, and ADHD. You can do some quick diagnostic tests ([neuroticism](https://www.outofservice.com/bigfive/), [depression](https://patient.info/doctor/patient-health-questionnaire-phq-9), [anxiety](https://patient.info/doctor/generalised-anxiety-disorder-assessment-gad-7), [burnout](https://www.mindtools.com/pages/article/newTCS_08.htm)) to see whether any of those things might be a concern for you. Probably don’t take the tests too seriously, but I know many smart people who took a year to realise they had depression, so it might be worth investigating and following up (e.g. with [therapy](https://docs.google.com/document/d/1q0NUPXpTOz6xygf4UMT-CsNMC187AHdwAWv55HyBodQ/edit?usp=sharing)) if any of those are concerning. Having these factors doesn’t rule out a PhD, but research will be much harder, and you’ll want to be very careful to choose a supportive advisor, which limits options.
**If you’re uncertain about which path to pursue, it might be worth optimising for doing research in the short term** while you get a better sense of whether a PhD makes sense for you (or whether you get offers from a top programme), and decide later, or apply to both PhDs and ML research engineering roles and compare options. Doing research will look pretty good for engineering roles as long as you stay away from theory-heavy research topics and eventually do enough ML engineering. And it’s a good test of fit. But optimising for ML engineering won’t help as much for PhDs, because publications and reference letters are key. You can however apply for a PhD after doing ML research engineering in industry.
**How to do a PhD:** If you are considering doing a PhD, I strongly recommend reading [Careers in Beneficial AI Research](https://docs.google.com/document/d/1RFo7_9JVmt0z8RPwUjB-mUMgCMoUQmsaj2CM5aHvxCw/edit?usp=sharing) (Gleave, 2020), Rohin Shah’s [Career FAQ](https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/), [Andrej Karpathy’s survival guide for PhDs](http://web.archive.org/web/20170612030342/http://karpathy.github.io/2016/09/07/phd/), and [Machine Learning PhD Applications — Everything You Need to Know](https://timdettmers.com/2018/11/26/phd-applications/).
### How to pursue research contributor (ML engineering) roles
**Read** [**Activities that are useful for both empirical research leads and contributors**](https://docs.google.com/document/d/15e8vBVwrJP5m1Y8VytQY3wSa1Qg18OM43YqP2-Wws1c/edit#heading=h.5fyesdnk3j4e). That section talks about how to learn [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning), ML, and AI alignment, and how to get research experience. If you’re sure you want to shoot for research contributor/ML engineering work, getting research experience is less important than for research lead roles, but might still be a useful source of mentorship and skill-building. Strong knowledge of AI alignment is also less important for getting research contributor roles, but how much you want to invest will depend on how much you want to eventually direct your own research, and investing where possible seems valuable. See [AI alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#AI_alignment) for more details.
**Being a good software engineer will make you a better ML engineer.** If you can get a software engineering (SWE) internship at a top company early on, that will likely prove valuable. More broadly, getting mentored by someone much better than you at SWE will likely be valuable, as will reading and writing lots of code. In addition to internships and jobs and your own projects, you might be able to get mentorship by contributing to open-source projects and asking some senior person on that project whether they might mentor you. Perhaps check out 80,000 Hours’ [Software Engineering career review](https://80000hours.org/career-reviews/software-engineering/).
**Do some paper replications.** To become good at ML engineering, you’ll need to get experience implementing ML models. A good way to do that is to replicate a few foundational papers in a sub-field you might want to work in. This is similar to the task of implementing novel algorithms, but with training wheels: you know that the algorithm works and what good performance looks like. It will also give you a great understanding of the methods you implement. [Look for ways to demonstrate your competence](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Look_for_ways_to_demonstrate_your_competence), by open-sourcing your code and maybe writing a blog post on your work. You can apply for [funding](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Funding) to do paper replications. See “Paper replication resources” below for more advice.
Below are some paper replication ideas. These are pretty off-the-cuff. If you’re serious about spending a couple of hundred hours on paper replications, it might be a good idea to reach out to a lab you want to work at with a specific plan so that they can give feedback on it. Ideally, see if you can get someone to mentor you. It will be useful to have an open-source codebase on hand, so try to find one before you set out. Check out [Machine learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Machine_learning) for the relevant background.
* Language models:
+ Easier: Train a small [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model from scratch using existing implementations of the architecture (e.g. from Hugging Face). Maybe try [grokking](https://arxiv.org/pdf/2201.02177.pdf).
+ Harder: Implement the transformer yourself. You could also fine-tune with human preferences, as in [Ziegler et al.](https://arxiv.org/pdf/1909.08593.pdf)) (2019) or [Stiennon et al.](https://proceedings.neurips.cc/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf) (2020).
* Reinforcement learning (I probably wouldn’t start with these):
+ Easier: Try to replicate one of the common RL algorithms, like [DQN](https://arxiv.org/abs/1312.5602v1)/[PPO](https://arxiv.org/abs/1707.06347)/[A3C](https://arxiv.org/abs/1602.01783). There are [papers](https://openreview.net/forum?id=r1etN1rtPB) that talk about ways these methods don’t replicate. See [Spinning up in deep RL](https://spinningup.openai.com/en/latest/user/introduction.html) as a way to ease into these exercises.
+ Harder: [Deep RL from human preferences](https://arxiv.org/abs/1706.03741) (see [this](http://amid.fish/reproducing-deep-rl) also) or [GAIL](https://arxiv.org/abs/1606.03476).
* Computer vision:
+ Very easy: train an MLP on MNIST.
+ Easy: train a [ResNet](https://arxiv.org/abs/1512.03385) or another close-to-state-of-the-art model on ImageNet.
+ Medium: do some basic adversarial attacks and defences. You might want to play with [this](https://adversarial-ml-tutorial.org/) first. You could try out some attacks and defences from [this](https://course.mlsafety.org/readings/#adversarial-robustness) list of papers.
**Apply to MLAB:** Redwood Research is running another fully funded (competitive) [coding bootcamp](https://forum.effectivealtruism.org/posts/vvocfhQ7bcBR4FLBx/apply-to-the-second-iteration-of-the-ml-for-alignment) in summer 2022. The deadline for the application has passed, but there might be future cohorts. Practising [leetcode](https://leetcode.com/) problems is probably useful for getting accepted.
**What does it take to get a job**?
* [DeepMind](https://www.lesswrong.com/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment): “As a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you.” You can read about their available roles [here](https://www.lesswrong.com/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment).
* [Andy Jones (Anthropic):](https://www.alignmentforum.org/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers) “It's hard to judge sight-unseen whether a specific person would suit AI safety engineering, but here’s a good litmus test: With a few weeks' work, could you - hypothetically! - write a new feature or fix a serious bug in a major ML library?” (Important caveat: this is who Anthropic would hire immediately and expect to be contributing in week one. It is *not* a test for whether you could become such a person.) Anthropic talks about their hiring process [here](https://www.anthropic.com/#careers) and what they’re looking for [here](https://www.alignmentforum.org/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers).
* From a Redwood Research staff member:“If you can implement a transformer in pytorch, you should probably already be speaking to Redwood”. You can read about their roles [here](https://www.redwoodresearch.org/jobs).
* For other places, like Google Brain/Facebook, you’ll also need to be able to [leetcode](https://leetcode.com/) and have a high GPA. Having ML [research experience](https://docs.google.com/document/d/1RFo7_9JVmt0z8RPwUjB-mUMgCMoUQmsaj2CM5aHvxCw/edit#heading=h.hfj52k67ycog) helps, as does a master’s degree.
**Where should you work?** Adam Gleave: “The best way to learn research engineering is to work somewhere there is both high-quality engineering and cutting-edge research. Apply to [very competitive] residency programs at industrial labs. The top-4 labs are DeepMind, OpenAI, Google Brain and Facebook AI Research (FAIR); there are also smaller (but good) safety-focused labs like Anthropic and Redwood Research. There are also many smaller players like Amazon AI, NVidia, Vicarious, etc. These are generally less desirable, but still good options.” Since Adam wrote that, some new organisations focused on language models have formed that could be good places to build skills. Those are conjecture (safety-focused), cohere.ai (some near-term safety and lots of EAs working there; I wouldn’t bet on it being good to end up there though), and Hugging Face (no existential safety).
For the first couple of years, it might be worth going where you’ll grow the most. After that, you’ll want to go wherever you can do the best alignment research. However, I am personally worried about people skill-building for a couple of years and then not switching to doing the most valuable alignment work they can, because it can be easy to justify that your work is helping when it isn’t. This can happen even at labs that claim to have a safety focus! Working at any of [Anthropic](https://www.alignmentforum.org/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers), [DeepMind](https://www.lesswrong.com/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment), [Redwood Research](https://www.redwoodresearch.org/jobs), or [OpenAI](https://openai.com/alignment/) seems like a safe bet though. If you can’t work at one of those places, whether skill-building outside of safety teams (e.g. at Google Brain or FAIR) is good will depend pretty strongly on whether you expect to be able to later shift to more impactful work (requires continuing to [form your own view](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Forming_your_own_views_on_alignment_is_important_when_you_have_control_over_the_direction_of_your_work) on alignment, and agency), whether you’ll be motivated doing work that doesn’t help with alignment, and how useful it is to be surrounded by people who work on alignment relative to people who are great ML engineers—the former is more important the more you want to direct your own research, the latter is more important the more you expect ML engineering to be your main contribution.
**Paper replication resources**:
* [Advice on paper replication](https://forum.effectivealtruism.org/posts/fRjj6nm9xbW4kFcTZ/advice-on-pursuing-technical-ai-safety-research#2_1__Advice_on_paper_replication) (Ngo, 2022).
* [ML engineering for AI safety and robustness](https://80000hours.org/articles/ml-engineering-career-transition-guide) (Olsson, 2018)
* [Lessons Learned Reproducing a Deep Reinforcement Learning Paper](http://amid.fish/reproducing-deep-rl) (Rahtz, 2018) as an example, and as evidence that implementing papers can be very educational, but hard, especially in deep RL.
* [A recipe for training neural networks (Karpathy, 2019)](http://karpathy.github.io/2019/04/25/recipe/).
* [Look for ways to demonstrate your competence](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Look_for_ways_to_demonstrate_your_competence).
**Career resources:** If you intend to pursue ML engineering, I recommend reading these articles:
* [ML engineering for AI safety and robustness](https://80000hours.org/articles/ml-engineering-career-transition-guide) (Olsson, 2018)
* [AI safety needs great engineers](https://www.alignmentforum.org/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers) (Jones, 2021); what skills are sought-after by large engineering companies working on safety?
* [DeepMind is hiring for the scalable alignment and alignment teams](https://www.lesswrong.com/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment).
How to pursue theoretical alignment work
----------------------------------------
I don’t know that much about theoretical work, sorry. If you are a theoretical researcher and have thoughts on how to improve this section, please let me know! The paths to doing theoretical work are also a lot less well-scoped than the path to empirical work, so it’s not *all* my fault. Anyway, here’s what I’ve got:
**Theoretical alignment work can be more conceptual or more mathematical**.
**What does conceptual work look like?** Conceptual alignment work often involves reasoning about hypothetical behaviour. For example, Mark Xu (of the [Alignment Research Center](https://alignment.org/)) describes most of his work as “coming up with good properties for algorithms to have, checking if algorithms have those properties, and trying to find algorithms that have those properties.” This is pretty similar to a skill-set you’d expect a theoretical computer scientist to have. The work tends to involve a lot of mathematical and philosophical reasoning. Conceptual researchers also need strong research taste, and strong knowledge of AI alignment. This is so that they don’t get lost in theoretical research that doesn’t help with alignment, which is easy to do since theory work has poor feedback loops. Examples of conceptual research include Paul Christiano’s [Eliciting Latent Knowledge](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge) (ELK), Evan Hubinger’s [Risks from Learned Optimization](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB), John Wentworth’s [Natural Abstractions](https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro), and MIRI’s [agent foundations](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh) work.
**What does mathematical work look like?** I think the main difference is that in math work, the output is a proof of a theorem or a counterexample or a new mathematic framework, whereas in conceptual work math is used as one (very good) tool to tell if a problem has been solved. Conceptual work is more philosophical: the arguments are rarely watertight, and a lot more judgement is required. Examples of mathematical work include Michael Cohen’s [Pessimism About Unknown Unknowns Inspires Conservatism](https://mkcohen-hosted-files.s3-us-west-1.amazonaws.com/Pessimism_alignmentforum.pdf), Vanessa Kosoy’s [Infrabayesianism](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa), Scott Garabrant’s work on [Logical induction](https://arxiv.org/abs/1609.03543), [Cartesian frames](https://arxiv.org/abs/2109.10996), and [Finite factored sets](https://arxiv.org/abs/2109.11513), [Cooperative Inverse Reinforcement Learning](https://arxiv.org/pdf/1606.03137.pdf), and Tom Everett’s work ([thesis](https://www.tomeveritt.se/papers/2018-thesis.pdf), [current work](https://causalincentives.com/)). You can see more topics [here](https://forum.effectivealtruism.org/posts/d7fJLQz2QaDNbbWxJ/what-are-the-coolest-topics-in-ai-safety-to-a-hopelessly). This is in contrast to semi-formal, conceptual work, of which Evan Hubinger’s [Risks from Learned Optimization](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB) is a central example.
**Where does this work happen?** The space is pretty weird. There aren’t established orgs doing shovel-ready work. It’s more like a mixed bag of people in academia (mostly math stuff, e.g. [CIRL](https://arxiv.org/pdf/1606.03137.pdf) and Michael Cohen’s stuff), [independent people on grants](https://www.alignmentforum.org/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency) (such as John Wentworth), the Machine Intelligence Research Institute (MIRI) (houses Evan Hubinger, Scott Garabrand, and Vanessa Kosoy among others), the Alignment Research Center (ARC) (which Paul Christiano directs), a few people at DeepMind (e.g. [Ramana Kumar](https://scholar.google.co.uk/citations?user=OyX1-qYAAAAJ&hl=en), and now some stuff at [conjecture.dev](https://www.conjecture.dev/) too.
**I don’t have a great sense of whether math or conceptual research is better to work on.** Fortunately, the skill-sets are pretty similar, so you can probably just try each a bit while you develop your own views about work work is most valuable, and then decide based on where you think you’ll do the best work.
**How to test fit for conceptual research:** (I don’t really know, sorry.)
* You will probably have a sense of how much you love and are good at theory, which is an important part of it. If you think you could get into a top PhD programme in math or CS theory if you (had) optimized your undergrad for that purpose, that’s a good sign.
* The other component is being able to [form your own views](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Forming_your_own_views_on_alignment_is_important_when_you_have_control_over_the_direction_of_your_work) on AI alignment and having interesting things to say there.
+ The first step for testing that is to learn about [AI alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#AI_alignment), perhaps to around the level of the [AGI safety fundamentals seminar programme](https://www.eacambridge.org/agi-safety-fundamentals). It might be useful to learn [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning) before doing that.
+ Then, if you enjoyed digging into the alignment reading, you could try to absorb a conceptual researcher’s worldview, perhaps by reading and [distilling](https://www.alignmentforum.org/posts/zo9zKcz47JxDErFzQ/call-for-distillers) (summarising) some of their research. [Here](https://docs.google.com/document/d/1tuQb8yc7lRRARKCx3tCZTbGPdBIG0fVLs7vmBsYN2Pk/edit?usp=sharing) is some research you could try distilling. The [ML Alignment Theory Scholars programme](https://www.serimats.org/) (fully funded) is a structured way to absorb someone’s worldview—you’ll be paired with a mentor whose worldview you can absorb. The applications are closed (as of June 2022) but there will likely be future cohorts. Another way to absorb a worldview is to read through the [Reading List for Evan Hubinger’s AI Safety Worldview](https://docs.google.com/document/d/1qI7mXryWSKKMxr40y_JgOwM8QpYX9OwQdRz9B_D95gg/edit?usp=sharing), and write up and maybe email him your notes, summaries, comments, disagreements, takes, opinions, etc..
+ A different option is to spend ~50 hours reading up on a specific topic in AI alignment that interests you, then spend ~50 hours trying to say something new and interesting about that topic. (For example, try producing a proposal for [ELK](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge).) Don’t update too much if you don’t have much to say; the point is more to see whether or not you enjoy the work and whether it feels productive.
+ You can apply for [funding](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Funding) to do any of these exercises.
**How to test fit for mathematical research:** (I don’t really know, sorry).
* You’ll probably have a sense of how much you love and are good at theory, which is the main thing. If you think you could get into a top PhD programme in math or CS theory if you (had) optimized your undergrad for that purpose, that’s a good sign.
* One test is to go through Michael Cohen’s [lecture](https://drive.google.com/file/d/1VQiy9Nl2VqdtzsJYUWhSuKSATwmXE_T3/view?usp=sharing) and have a go at the assignment, then can ask Michael to look at what you wrote (his email is [firstname.lastname@eng.ox.ac.uk](mailto:firstname.lastname@eng.ox.ac.uk), and yes, he’s happy to look at assignments).
* You could also spend ~50 hours reading up on a specific topic in mathematical AI alignment that interests you (you can see some of the literature [here](https://forum.effectivealtruism.org/posts/d7fJLQz2QaDNbbWxJ/what-are-the-coolest-topics-in-ai-safety-to-a-hopelessly)), then spend ~50 hours trying to say something new and interesting about that topic (or [distilling](https://www.alignmentforum.org/posts/zo9zKcz47JxDErFzQ/call-for-distillers), if saying something new is too hard).
* Finally, you could do a research internship with a mathematical researcher in an area related to mathematical alignment (or do research directly on alignment, but only if your supervisor already works on that). Ctrl+F for “Research experience is essential for research leads, and useful for research contributors” for advice on getting research experience. You might be able to reach out to mathematical alignment researchers for an internship. Perhaps after writing something, as above.
It’s worth bearing in mind that pursuing theoretical alignment work is much riskier than ML-focused work, because you’ll build fewer transferable skills than ML work, you’ll have less credibility outside the alignment community, and the infrastructure for this work is just starting to be built. That said, if you think you could have a good fit, it might be worth testing it out!
**How to pursue conceptual alignment research:** Again, I don’t really know. For that reason, getting mentorship seems pretty important. If you can produce something, perhaps from one of the exercises above, I think Mark Xu or Evan Hubinger would consider chatting with you and giving you career advice. Here are some short-to-medium-term options: work independently on a grant (or at an existing organisation, though you’d probably need a PhD for that), work at ARC or MIRI (not sure whether MIRI is hiring as of June 2022), apprentice under a conceptual researcher, or do a PhD (in math/CS theory, with a smart and open professor who’s regularly publishing in COLT or FOCS or similar. You probably won’t be able to publish conceptual alignment work during a PhD, but you might build useful skills). My guess is that mentorship should be the main consideration early on in your career: if you can work with and get mentored by a strong conceptual alignment researcher, that is probably better than a PhD (unless you have the opportunity to work closely with a really strong or value-aligned advisor), and a good PhD probably looks better than independent work. If you want to try to apprentice under a conceptual researcher, or work at ARC/MIRI, some of the exercises in the previous section will be useful: reading and distilling and absorbing someone’s worldview, posting on the [AI Alignment Forum](https://www.alignmentforum.org/), and trying to get more mentorship from there. More broadly, I recommend spending time learning about [AI alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#AI_alignment) and [forming your own view](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Forming_your_own_views_on_alignment_is_important_when_you_have_control_over_the_direction_of_your_work). It’s worth noting that conceptual research is particularly mentorship constrained at the moment, so it might be hard to work closely with a strong conceptual researcher. It’s probably still worth trying though, and in particular everyone should probably [apply](https://www.alignmentforum.org/posts/dLoK6KGcHAoudtwdo/arc-is-hiring) to ARC.
**How to pursue mathematical alignment research:** (I don’t really know, sorry.) Probably read a bunch of the mathematical alignment literature (you can see some of the literature [here](https://forum.effectivealtruism.org/posts/d7fJLQz2QaDNbbWxJ/what-are-the-coolest-topics-in-ai-safety-to-a-hopelessly)). More broadly, I recommend spending time learning about [AI alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#AI_alignment) and [forming your own view](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Forming_your_own_views_on_alignment_is_important_when_you_have_control_over_the_direction_of_your_work). If you can get a theory PhD at the Center for Human-compatible AI (CHAI), that seems like a great bet. If you can do a theory PhD on something related to alignment, that is probably good too. It should be doable even if the professor doesn’t work on alignment, as long as they’re really smart and you can convince them that the topic is publishable. You could also work on something that’s useful skill-building for alignment, such as probability theory as applied to AI, or some part of theoretical CS (look for profs who publish in COLT of FOCS or similar). You might get better supervision that way. Ctrl+F “How to do a PhD” for resources on how to get an ML PhD; a lot of it should transfer to theory PhDs. Please try to speak to someone more knowledgeable than me before jumping into a PhD though!
Learning
========
Basic deep learning
-------------------
**This is just the basics:** I’ve included stuff that’s sufficient to get you a *basic* understanding of deep learning models and how to implement them. This isn’t all you need to become a great empirical research lead or contributor. In particular, investing in coding and math beyond what is indicated here will prove worthwhile. Please skip my suggestions if you already have the knowledge/skill.
**When to do what:** The coding and math can be done in parallel. The deep learning (DL) courses require basic coding and math. Strictly speaking, you can understand DL with a very basic understanding of linear algebra and calculus. But sooner or later your lack of foundation will cause problems. That said, you can probably comfortably start studying DL after a semester of math classes, alongside building stronger mathematical foundations.
**Coding:** You’ll need to know how to read and write code in python. [www.learnpython.org/](https://www.learnpython.org/)is good for that. There’s also the skill of being able to do stuff in the python ecosystem, which people often end up picking up slowly because it’s not taught. For that, I recommend [The Hitchhiker’s Guide to Python](https://docs.python-guide.org/), and [The Great Research Code Handbook](https://goodresearch.dev/). You might be able to get [funding](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Funding) for a tutor. Here are some extra resources you might find helpful: [Things I Wish Someone Had Told Me When I Was Learning How to Code](https://medium.com/@cecilycarver/things-i-wish-someone-had-told-me-when-i-was-learning-how-to-code-565fc9dcb329), [learntocodewith.me/resources/coding-tools/](https://learntocodewith.me/resources/coding-tools/)**.**
**Math:** Here are the areas of math required to learn basic DL. Other areas of math—like statistics—can be directly useful, and mathematical maturity beyond what is written here is certainly useful.
* **Linear algebra:** This [3Blue1Brown](https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) video series is good for intuition, as a supplement. For a stronger foundation, you’ll want to take your university’s intro class (or [MIT’s](https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010/pages/syllabus/)—I haven’t taken it but it’s probably good) and probably a more theoretical class. If you’d like a textbook for after an intro class, I recommend [Linear Algebra Done Right](http://ce.sharif.edu/courses/97-98/1/ce425-1/resources/root/Books/Linear%20Algebra%20Done%20Right.pdf).
* **Calculus:** This [3Blue1Brown](https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr) video series covers basic calculus. Take your university’s multivariable calculus course for a stronger foundation (or [MIT’s](https://ocw.mit.edu/courses/18-02sc-multivariable-calculus-fall-2010/)—I haven’t taken it but it’s probably good).
* **Probability:** One course is [Introduction to Probability](https://www.edx.org/course/introduction-probability-science-mitx-6-041x-1) (MITx), but your university might have a course that covers similar content. Taking courses in statistics where possible will probably help too.
**Deep learning:** (DL) is by far the dominant paradigm within machine learning, which in turn is the dominant paradigm within AI. *Getting a good understanding of DL is essential for all empirical alignment work.* I recommend that you get practical experience by doing something like (1), and do one of (2) and (3). Participating in the [ML Safety Scholars Programme](https://forum.effectivealtruism.org/posts/9RYvJu2iNJMXgWCBn/introducing-the-ml-safety-scholars-program) (fully funded, applications close May 31st 2022) over the summer seems like a great, structured way to learn DL.
1. [fast.ai](https://course.fast.ai/) is a practical course in deep learning (DL) that approaches DL from a coding (not math/statistics) perspective. If you already have some knowledge of how DL works, it is probably better to learn from the [PyTorch tutorials](https://pytorch.org/tutorials/). Or learn from those tutorials after doing fast.ai. PyTorch is a good framework to start with, but if you’re already good with TensorFlow or JAX you probably don’t need to pick PyTorch up until a project/job requires it.
2. [Deep Learning Specialization](https://www.coursera.org/specializations/deep-learning) (Ng), your standard DL class (CS 230 at Stanford).
3. [Deep Learning by NYU](https://atcold.github.io/pytorch-Deep-Learning/) (LeCun).
Machine learning
----------------
**Summary:** It’s easiest to learn by being immersed in a research environment, so it’s sensible to focus on doing enough to get to that point. That means having enough breadth to talk about the main areas of DL sensibly and know about the recent advances, and having depth in the area you want to go into. *You don’t need to learn all of ML to become part of a research environment*. Though ML researchers should eventually know a lot of ML, and taking university courses in ML where you can is probably a good idea. You can get breadth by taking courses in the most important subfields of DL (see Learning about DL sub-fields), and using resources that curate and summarise/explain recent advances (see Resources). You can get depth by reading a bunch of a sub-field’s main papers (~10+, or until you get diminishing returns) and doing your own research, or practical homeworks, or [paper replications](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#How_to_pursue_research_contributor__ML_engineering__roles) (though this takes a while, and might not be worth it for researchers). You can see what areas people are interested in by looking at blogs of the labs you’re interested in working at, or by checking the [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/) (see Resources).
**Learning about DL sub-fields:** Once you finish [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning), you should have the background to go into any of these areas. I wouldn’t worry too much about nailing all of these areas straight away, especially if it trades off against research or engineering.
* Natural language processing:
+ Modern transformers: [lecture](https://www.youtube.com/watch?v=sNfkZFVm_xs&ab_channel=InitiativefortheTheoreticalSciences), [OG transformer paper](https://arxiv.org/abs/1706.03762), [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) or [GPT-3](https://arxiv.org/abs/2005.14165), [fine-tuning with human preferences](https://arxiv.org/pdf/1909.08593.pdf)).
+ Old school NLP techniques: Stanford CS 224n: ([syllabus with link to the notes](http://web.stanford.edu/class/cs224n/), [youtube lectures](https://www.youtube.com/playlist?list=PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z))
- (Unclear how useful this is, but it might be worth it if you want to do NLP research.)
* Reinforcement Learning:
+ [Deep Reinforcement Learning: Pong from Pixels](https://karpathy.github.io/2016/05/31/rl/) (short blog-post)
+ OpenAI Spinning Up in Deep RL: ([link](https://spinningup.openai.com/en/latest/index.html)) (summary of Deep RL theory, and lots of implementation exercises)
+ Berkeley Deep RL ([youtube lectures](https://www.youtube.com/watch?v=JHrlF10v2Og&list=PL_iWQOsE6TfXxKgI1GgyV1B_Xa0DxE5eH)). Requires some probability theory as a background, especially for the later lectures on inference
+ DeepMind RL Lectures ([youtube lectures](https://www.youtube.com/playlist?list=PLqYmG7hTraZDVH599EItlEWsUOsJbAodm)).
* Computer vision:
+ Module 2 from Stanford’s CS 231n: ([course notes](https://cs231n.github.io/), [youtube lectures](https://www.youtube.com/watch?v=vT1JzLTH4G4&list=PLC1qU-LWwrF64f4QKQT-Vg5Wr4qEE1Zxk)) (pretty short, ~20 pages worth.)
**Resources:** (You don’t have to keep up-to-date with all of these things! See which sources you like and benefit from.)
* The [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/)highlights and summarizes important papers across many ML sub-areas relevant to safety. You can check those papers and areas out [here](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid=0). Keeping up with the AN is good for breadth, and the [database](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid=0) is good for going deep into a sub-field.
* Looking at NeurIPS and ICML paper presentations from a sub-field is a decent way to get an understanding of that sub-field: what they’re working on and what they care about. Can do in an afternoon.
* Jack Clark’s [Import AI](https://jack-clark.net/) newsletter.
* Dan Hendrycks’ [ML Safety Newsletter](https://newsletter.mlsafety.org/p/ml-safety-newsletter-1); r/mlsafety.
* [Two Minute Papers](https://www.youtube.com/c/K%C3%A1rolyZsolnai/featured).
* [Yannic Kilcher](https://www.youtube.com/c/YannicKilcher): explains a wide variety of machine learning papers in 30-60 minute videos.
* [ML Street Talk](https://www.youtube.com/c/MachineLearningStreetTalk): podcast discussing a whole variety of ML related topics. Very good for seeing what leading experts in the field are thinking about.
* [arXiv sanity preserver](https://arxiv-sanity-lite.com/); [connected papers](https://www.connectedpapers.com/).
* Follow the blogs or the Twitter accounts of the big AI players/researchers (I mostly get my information from Twitter).
* [AGI Safety Core](https://twitter.com/i/lists/1185207859728076800) Twitter list.
* Lilian Weng’s [blog](https://lilianweng.github.io/archives/).
**How to read papers:** At some point you’ll need to be able to read papers well. Here are some resources for learning how to do that. Most of the time, you’ll want to be in “skim mode” or “understand deeply” mode, not somewhere in between.
* [Andrew Ng’s Guide Summarized](https://forums.fast.ai/t/how-to-read-research-papers-andrew-ng/66892) ([Original Link](https://www.youtube.com/watch?v=733m6qBH-jI))
* [Yannic’s Guide](https://www.youtube.com/watch?v=Uumd2zOOz60)
* Ctrl+F [How should I read things efficiently?](https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/) (Rohin)
* [Anki’s](http://augmentingcognition.com/ltm.html) spaced repetition software is really useful for some people. Skip to "Using Anki to thoroughly read a research paper in an unfamiliar field".
AI alignment
------------
Compared to other research fields—like math or theoretical physics—the EA-focused alignment space doesn’t have that much content. It still takes months of full-time study to get fully up to date, but you can [80/20](https://en.wikipedia.org/wiki/Pareto_principle) much faster than that, and not everyone has to be an expert.
Buck: “I think it’s quite normal for undergraduates to have a pretty good understanding of whatever areas of [alignment] they’ve looked into.”
Buck: “Try to spend a couple of hours a week reading whatever AI safety content and EA content interests you. Your goal should be something like “over the years I’m in college, I should eventually think about most of these things pretty carefully” rather than “I need to understand all of these things right now”.”
### One path for learning about alignment
**Getting started:** I recommend [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) (Ngo, 2019) and [My Overview of the AI Alignment Landscape](https://docs.google.com/document/d/1X3XyS6CtZShwaJHMxQBvgKPUs7qlt74WxhmNnSDesXE/edit?usp=sharing) (Nanda, 2022). If you would like to learn more about the motivation for AI risk, I recommend [Why AI alignment could be hard with modern deep learning](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/) (Cotra, 2021) and [The Most Important Century Series](https://www.cold-takes.com/most-important-century/) (Karnofsky, 2021), which are also available in podcast format.
[**AGI safety fundamentals seminar programme**](https://www.eacambridge.org/agi-safety-fundamentals)**:** I recommend applying to participate in the alignment track. If you have time, the governance track might also be valuable. Each track takes around 5h per week, for 8 weeks. To get the most out of the programme I would do it after [Basic deep learning](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Basic_deep_learning).
**The** [**Alignment Newsletter**](https://rohinshah.com/alignment-newsletter/)is really good. It summarises recent work in AI alignment and ML. One exercise (among many) that will help orient you on what is happening is reading the highlight sections from the 20-50 most recent [Alignment Newsletters](https://rohinshah.com/alignment-newsletter/) (takes around 10h). The AN requires some background in machine learning, so you might need to get that before reading, or alongside. Some tips:
* When you’re in the wild and considering reading something, check the [AN database](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit) first to see if there’s a summary. The database houses papers and their summaries, and you can filter by area (e.g. interpretability) and by importance. It’s particularly useful if you want to get clued up on an area fast. You might consider clicking on [this](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit) right now to see how cool the database is :)
* Follow your sense of excitement, curiosity, and confusion. Dig into papers in depth if it feels exciting.
* If you don’t understand something, you might need to jump back to an older linked newsletter.
* Consider motivating your reading with a question or uncertainty, such as “why do people think interpretability is important?”, “what is going on with scaling laws and why are they important?”, or whatever your most important uncertainties are.
**Keep up to date:** with the [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/), [LessWrong](https://www.lesswrong.com/), the [EA Forum](https://forum.effectivealtruism.org/), the [AI Alignment Forum](https://www.alignmentforum.org/) (AF), the [ML Safety Newsletter](https://newsletter.mlsafety.org/p/ml-safety-newsletter-1); reading posts that excite you. Blogs/[Twitter](https://twitter.com/i/lists/1185207859728076800) from the alignment labs. There is also the [80,000 Hours podcast](https://80000hours.org/topic/causes/catastrophic-risks/artificial-intelligence/?content-type=podcast), the [AXRP podcast](https://axrp.net/) (Richard and Paul’s episodes are great starting points; Beth’s and Evan’s are great too), and the [FLI podcast](https://futureoflife.org/the-future-of-life-podcast/). And Rob Miles’ [Youtube channel](https://www.youtube.com/c/RobertMilesAI). There is a bunch of content so you’ll need to filter! One way to filter is by looking through the [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/). If you want to read old stuff, on the AF you can [sort by upvotes](https://www.alignmentforum.org/allPosts?sortedBy=topAdjusted&timeframe=allTime).
**Some people think that reading a lot is good, especially for conceptual work.** The advice is “read everything”. This won’t be possible or good for most people! But if you can find a way to enjoyably sink 500h into active reading of alignment content, that will probably be really good for forming your own views. You might want to try out several resources, because some will be way more fun for you to read. The [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/) is one source. Others include Paul Christiano’s [blog](https://ai-alignment.com/) (difficult to read but you might love it), the [MIRI dialogues](https://www.alignmentforum.org/s/n945eovrA3oDueqtq) (also hard to read but juicy), and [Rationality: From AI to Zombies](https://intelligence.org/rationality-ai-zombies/) (some people love this and others are put off). Reading lots is less good advice if you’re trying to do very competitive stuff, such as an ML PhD, because you’ll need to spend a lot of time getting research experience.
### Forming your own views on alignment is important when you have control over the direction of your work
**I recommend reading** Rohin Shah’s [Career FAQ](https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/) (ctrl+F for “How can I do good AI alignment research?”), [How I Formed My Own Views About AI Safety](https://www.neelnanda.io/blog/47-inside-views) (Nanda, 2022), and [Want to be an expert? Build deep models](https://forum.effectivealtruism.org/posts/ckj6Moau9qpYArHWc/want-to-be-an-expert-build-deep-models) (Bye, 2021). I’ll copy from these and add my own spin, but I think it’s probably worth reading them directly.
Rohin Shah: “We want to think, figure out some things to do, and then, if we do those things, the world will be better. An important part of that, obviously, is making sure that the things you think about, matter for the outcomes you want to cause to happen.
In practice, it seems to me that what happens is people get into an area, look around, look at what other people are doing. They spend a few minutes, possibly hours thinking about, “Okay, why would they be doing this?” This seems great as a way to get started in a field. It's what I did.
But then they just continue and stay on this path, basically, for years as far as I can tell, and they don't really update their models of "Okay, and this is how the work that I'm doing actually leads to the outcome." They don't try to look for flaws in that argument or see whether they're missing something else.
Most of the time when I look at what a person is doing, I don't really see that. I just expect this is going to make a lot of their work orders of magnitude less useful than it could be.”
**What does it mean to “form your own views”?** I mean something like forming a detailed model, starting from some basic and reasonable beliefs about the world, that gets you to a conclusion like ‘working on AI alignment is important’, or ‘this research direction seems like it might shift the needle on AI-induced x-risk’, or ‘[Power-seeking AI poses a decent chance of extinction](https://www.lesswrong.com/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai)’, without having to defer to other people. Ideally that model has depth, so that if you double-click on any part of the argument chain, there’s likely to be substance there. Good examples of this kind of reasoning include Buck Shlegeris’ [My Personal Cruxes for Working on AI Safety](https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety), Richard Ngo’s [AGI Safety from First Principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ), and Joseph Carlsmith’s report on [Existent ial Risk from Power-Seeking AI](https://www.lesswrong.com/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai).”
**Why form your own views**?
* **You’ll do much better research.** When work is open-ended, you need your own conceptual framework to work within, and you need to consult that framework to decide what to do. Even within areas like robustness or interpretability, there are directions that are orders of magnitude more valuable than others. We mostly don’t have shovel-ready projects at the moment: If we were sure that solving the Alignment Theorem™ or implementing the Alignment Technique™ were sufficient to prevent AI-induced extinction, there would be no need to form your own views. Unfortunately, that isn’t the world we live in.
* **You might be able to improve the frontier of knowledge:** There is widespread disagreement about what work might help—most researchers think that most alignment research that isn’t theirs is ~useless-to-harmful. That is really a weird place to be. Very few people have thought through the whole problem in detail. And a lot of the best work from the past few years has come from (junior) people thinking big-picture.
* **It’s motivating to do work you believe in, especially if you’re doing research.**
**You don’t need your own views straight away, and maybe not at all**:
* The more control you have over the direction of your work, the more you’ll want to invest in forming your own views; research leads need this a lot more than research contributors: DeepMind is happy to talk to research contributors who have very basic alignment knowledge (e.g. “can explain instrumental convergent subgoals”), which means you could usefully contribute without ever forming detailed views about alignment. Though in this case, you’d have to trust DeepMind to be doing valuable research, and you wouldn’t be able to lead research.
* You can form your own views gradually. Certainly while learning ML/engineering/research you won’t need strong views on alignment. But you’ll want to have thought about alignment pretty carefully by the time you’re choosing what to study at PhD level, for example.
* Being told that forming from your own views is important can be scary and paralyzing. If you are scared or paralyzed, I suggest reading [How I Formed My Own Views About AI Safety](https://www.neelnanda.io/blog/47-inside-views).
**How do you form your own views?** Here are some ideas:
* **Model other people:** talk to people and try to understand their views (actively reading someone’s work also works, maybe in advance of talking to them). Build a model of what they believe and why, and then try to integrate that into your own model and your models of other people. You can ask them, for example, “Do you have a theory of change for how your research reduces x-risk, and could you try explaining it to me?”. Then paraphrase until you’d be able to describe their view to someone else—you won’t understand by default; paraphrasing makes it easy for them to correct you—and follow up with any confusions you might have.
* **Conduct a** [**minimal trust investigation**](https://www.cold-takes.com/minimal-trust-investigations/) **(Karnofsky, 2021).**
* **Go through some** [**Alignment research exercises**](https://www.alignmentforum.org/posts/kj37Hzb2MsALwLqWt/alignment-research-exercises) (Ngo, 2022). These exercises are quite specific, and might not be aimed at the most important stuff. But they are structured, which is nice.
* **Think from first principles:** open up a blank google doc, set a one hour timer, and start writing about a question, maybe one from below, or about whether AI alignment is the most important problem for you to work on.
**Forecasting questions**:
* “How likely is extinction from AI” is the main one. Then there are some sub-questions that feed into that:
+ “When will the first AGI be developed?”
+ “What will the world look like in the five years before and one year after the first superintelligence”
+ "Will bad things mostly happen from alignment failures or coordination failures?". More broadly, "what do you expect market forces to cover and what do you expect to be neglected?"
+ “If we get alignment right on the first try, how likely is it that a misaligned AGI is later deployed?
**Technical questions**:
* “What alignment strategies actually align the AI?” (This is where a lot of disagreement is in practice.)
* “What alignment strategies are scalable and competitive?”
* “How can we make sure that the theoretically possible good strategies are practical enough to be implemented?”
**Resources**:
* Rohin Shah’s [Career FAQ](https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/) (ctrl+F for “How can I do good AI alignment research?”).
* [How I Formed My Own Views About AI Safety](https://www.neelnanda.io/blog/47-inside-views) (Nanda, 2022).
* [Want to be an expert? Build deep models](https://forum.effectivealtruism.org/posts/ckj6Moau9qpYArHWc/want-to-be-an-expert-build-deep-models) (Bye, 2021).
* [Some thoughts on deference and inside-view models](https://forum.effectivealtruism.org/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models) (Shlegeris, 2020).
* [Buck’s talk on orienting to safety research](https://www.youtube.com/watch?v=R6Mzt4GwQnQ).
Funding
=======
**People don’t apply for funding enough**. Here are some rebuttals to common objections to applying for funding: You don’t need to be doing valuable AI alignment research right now in order to get funded; students are prime targets for funding, because money is likely to be particularly useful to them; getting rejected probably won’t negatively affect you down the line, as long as you’re honest and well-intentioned; often people are miscalibrated about whether their proposal is worth the money; grant-makers really want to fund good projects.
**What can you apply for funding for?** Here are some things that you could apply to the Long Term Future Fund (LTFF) for:
* **Learning time:** to self-study ML or AI alignment, to do research assistant work at a university (totally fine if this isn’t alignment research), to visit AI alignment hubs to speak or work with the people there.
* **Independent work:** direct independent alignment work (see [this](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency#Background_Models)), or to help build the AI alignment community at your university or elsewhere.
* **Personal/productivity stuff:** Paying for a [therapist](https://docs.google.com/document/d/1q0NUPXpTOz6xygf4UMT-CsNMC187AHdwAWv55HyBodQ/edit?usp=sharing) ([link](https://patient.info/doctor/patient-health-questionnaire-phq-9) to depression test—I know many smart people who took a year to realise they had depression), a [productivity coach](https://docs.google.com/document/d/1q0NUPXpTOz6xygf4UMT-CsNMC187AHdwAWv55HyBodQ/edit?usp=sharing), a nice laptop or desk or chair, a workspace, your own productivity budget, buying you out of any non-useful work you have to do to live—including teaching buy-outs.
* **Tutoring:** for ML, coding, university classes, econ (?), or funding for an ML or coding bootcamp.
* **Formal degrees:** scholarships are available for undergraduate and postgraduate degrees; grants are available for conference fees and PhD application fees.
**It is often easy to apply for funding – e.g.** the [application](https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund) for the Long-Term Future Fund takes 1-2 hours.
**How to apply:** Aim to have an application that is honest and straightforward. If the point is to help directly with alignment, give your best guess as to whether and how your project helps alignment. If the point is to advance your career, write about how you expect it to advance your career relative to the counterfactual. If you don’t have trustworthy signals of your competence and alignment, it helps to have a reference who knows you and is respected by the funding body. If you have either of those, consider applying immediately. If not, still consider applying immediately. But if you want a better shot, you might do an alignment project first and post it to [LessWrong](https://www.lesswrong.com/), for example as part of the [AGI safety fundamentals seminar programme](https://www.eacambridge.org/agi-safety-fundamentals), or the [ML Safety Scholars Programme](https://forum.effectivealtruism.org/posts/9RYvJu2iNJMXgWCBn/introducing-the-ml-safety-scholars-program) (fully funded, applications close May 31st 2022), or as part of [building your own views on alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Forming_your_own_views_on_alignment_is_important_when_you_have_control_over_the_direction_of_your_work).
**Funding sources**:
* [Long Term Future Fund](https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund). You can apply at any time, and they have a short turnaround.*I’d default to applying here.*
* Your university might fund you to do research with them.
* Open Philanthropy [Undergraduate Scholarship](https://www.openphilanthropy.org/focus/other-areas/undergraduate-scholarship).
* Open Philanthropy [Early-career funding for individuals interested in improving the long-term future](https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future).
* [The Open Phil AI Fellowship](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship). For PhD students in technical AI safety (fairly competitive).
* [Survival and Flourishing Projects](http://survivalandflourishing.org/) (closely related to the [Survival and Flourishing Fund](https://survivalandflourishing.org/)).
* [FTX Future Fund](https://ftxfuturefund.org/).
Broadly useful career advice
============================
Look for ways to demonstrate your competence
--------------------------------------------
I have mostly talked about how to become competent. This is the most important thing and it should be your main focus early on; it is also much easier to appear competent when you actually are. But when you start to do competitive stuff like job or PhD applications, it’s useful to be able to demonstrate your competence in order to distinguish yourself from others.
**Once you know which competencies to shoot for, find hard-to-fake signals that you are competent and work them into projects that build your competence. Search for ways to cache in on your competencies/cool shit you do. You can also ask people in the community/employers what signals they’d consider hard to fake.** For PhDs, doing research < ArXiv paper < published paper < published paper + reference letter from someone who has seen lots of students and has a good track record of predicting research success. Similarly, ML paper replication < open-source paper replication < open-source replication plus [blog post about what you learned](http://amid.fish/reproducing-deep-rl). Failed research < blog post about failed research… You’ll probably soon have lots of knowledge/skills/cool stuff that you’ve done, that people won’t know about. Sometimes, it’s easy to transform those into a competency signal by making your knowledge/skill/cool stuff legible and visible.
Focus on becoming excellent early in your career
------------------------------------------------
Most of your impact comes from later in your career. Early in your career (for the first few years out of undergrad, at least), your focus should be on doing things where you can grow and become excellent. You can ask yourself (and others) where you’re likely to grow the most, and then go there. That might be alignment organisations, and it might not. Growth is largely a function of your environment and the mentorship available to you. The vast majority of good mentorship can be found outside of alignment, and alignment is heavily mentorship-constrained. If you become an excellent ML engineer/researcher or theoretical researcher, it will probably be easy to later specialise in empirical or theoretical alignment work. It is certainly fine (and maybe preferable, because of publications) to do non-alignment research as an undergraduate.
That said, it might not be good to become excellent if it means advancing AI capabilities. Though there is nuance in ‘capabilities’: working on improving Bayesian inference approximation (useless-to-maybe-helpful for alignment) is very different from scaling up large language models (probably pretty bad). However, Anthropic believe that staying at the frontier of capabilities is necessary for doing good alignment work, so I don’t know how coherent the capabilities-safety dichotomy is (this is an active area of debate).
One way that working on stuff that doesn’t help with alignment could go badly, is that you get stuck doing research that sounds like it helps but doesn’t actually have a path to impact, like random robustness or interpretability research. *This can happen even if you join a safety team*. To avoid this, I recommend continuing to [build your own views on alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Forming_your_own_views_on_alignment_is_important_when_you_have_control_over_the_direction_of_your_work), speaking with more knowledgeable alignment people about your career decisions, and holding the intention to actually consider where you can do the best alignment research once you’ve built some skills.
Engaging with the AI alignment community will help you a lot
------------------------------------------------------------
**Why?** I’m finding it a little hard to explain this. When I see people start to hang around in alignment communities, they seem to start doing much better stuff. That might be because they’re supported or mentored, they pick up implicit knowledge, they’re more motivated, or because they become aware of opportunities. Here are some ways to engage:
* [80,000 Hours advising](https://80000hours.org/speak-with-us/). I encourage everyone to apply. It’s a good way to get career advice and connect to alignment researchers. The application is quick.
* Being in places with AI people, like Berkeley, or to a lesser extent Oxford/New York/London. Talking regularly with AI alignment people is the main thing, and that can be done anywhere but is easier in some places than others.
* [Effective Altruism Global](https://www.eaglobal.org/). You can talk to people and make connections to get mentorship. Aim to have one-on-one’s with people more senior than you. Tell them your plan and ask how you can improve it.
* [AI safety support](https://www.aisafetysupport.org/) + their [newsletter](https://www.aisafetysupport.org/newsletter). They offer [chats](https://calendly.com/aiss/) too.
* The [AGI Fundamentals Fellowship](https://www.eacambridge.org/agi-safety-fundamentals) has a community slack.
* Local (EA) [groups](https://forum.effectivealtruism.org/community), though they might not have a strong alignment community.
* Posting on [LessWrong](https://www.lesswrong.com/)/the [AI Alignment Forum](https://www.alignmentforum.org/).
Take care of yourself
---------------------
I don’t really know what to write here. I do know that taking care of yourself is extremely important. I burned out while trying to work on AI alignment, and can attest that burnout can be really bad. I don’t feel super qualified to give advice here, but I do have some things that seem useful to say: If your work becomes a slog/grind that daunts you when you wake up, as opposed to a source of strong internal desire, I think that’s worth paying attention to. You can take diagnostic tests right now or regularly for [depression](https://patient.info/doctor/patient-health-questionnaire-phq-9), [anxiety](https://patient.info/doctor/generalised-anxiety-disorder-assessment-gad-7), and [burnout](https://www.mindtools.com/pages/article/newTCS_08.htm) (takes less than 30 minutes in total). And maybe see a [therapist](https://docs.google.com/document/d/1q0NUPXpTOz6xygf4UMT-CsNMC187AHdwAWv55HyBodQ/edit?usp=sharing) if any of those are concerning, or preventatively, which you can get [funding](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment#Funding) for. Having good mentors, managers, and buddies will help a lot.
Trying to work on AI alignment might be particularly bad for some people’s mental health. Here are some reasons for that: Believing that we might all die might be really scary and totalising; there aren’t that many jobs in alignment at the moment, and ML opportunities in general are pretty competitive; you might not be able to help with technical alignment work, and that might be crushing; some of the actions I suggest are hard and unstructured—such as forming your own views on alignment, or doing paper replications—and a lot of people don’t thrive in unstructured environments; “technical AI alignment” is not a well-scoped career path or set of paths—and it’s often hard to know what’s best to do.
I don’t want you to feel bad about yourself if you’re struggling, or can’t help in a specific way. If you’re struggling, consider talking to your friends, people who have been through similar experiences, [talking with AI safety support](https://calendly.com/aiss/), taking time off, getting [therapy](https://docs.google.com/document/d/1q0NUPXpTOz6xygf4UMT-CsNMC187AHdwAWv55HyBodQ/edit?usp=sharing), or trying a different type of work or environment.
*This work is licensed under a Creative Commons Attribution 4.0 International License.* |
fc447270-bf32-40c4-a3c1-dc08391b107d | trentmkelly/LessWrong-43k | LessWrong | Bikes and Babies
In MA, the only legal way [1] to travel by bike with a baby is with a trailer: they're still too floppy for a child bike seat and the law doesn't consider options involving the box of a cargo bike. Since they're so small, you can use a car seat to make sure they're properly supported and protected.
We have a Burley Bee (which I still like a lot) and today we went for a short ride with Nora:
I used three cords, attaching to the trailer at six points: two points at the top back, two at the bottom back, and two at the bottom front. Here's a diagram, with the frame in solid blue and the cords in dashed red:
It was a bit annoying to get it all set, but once it was connected and tight the seat was very secure. I did this all before putting Nora in: there's plenty of room to slide the baby down under the cords and buckle them in as usual, which also meant that I didn't have to redo the attachment when it was time for the return trip.
I biked on the slow side (and I'm already not fast), so as not to jostle too much and give plenty of time to avoid anything dangerous. We didn't use a helmet: it's not recommended for kids this young, and you need to be protecting them well enough that their head would not hit anyway.
[1] In MA the relevant regulations seem to be (I.XIV.85.11.B.2):
> 1. The operator shall ride only upon or astride a permanent and regular seat attached to the bicycle; a passenger shall ride only upon or astride a permanent and regular seat attached to the bicycle or to a trailer towed by the bicycle.
>
> 2. The operator shall not transport another person between the ages of one to four years, or weighing forty pounds or less, on a bicycle, except in a ''baby seat'', so-called, attached to the bicycle, in which such other person shall be able to sit upright; provided, however, that such seat is equipped with a harness to hold such other person securely in the seat and that protection is provided against the feet or hands of such person hitting the |
add7c8b7-0600-4bee-9b42-d76f46744970 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Atlanta December Meetup - Game Night!
Discussion article for the meetup : Atlanta December Meetup - Game Night!
WHEN: 27 December 2014 07:00:00AM (-0500)
WHERE: 2388 Lawrenceville Highway Unit L, Norgate Manor, Decatur, GA 30033
Come join us for a get together! We'll be playing games and having a low key social gathering. Feel free to bring your favorite games and any snacks you would like.
Please park in a spot marked visitor. Parking in a numbered spot can get you towed. There are cats at the location.
Hope to see you!
Discussion article for the meetup : Atlanta December Meetup - Game Night! |
11ea7470-c766-4611-9cbb-343c10b8a90e | trentmkelly/LessWrong-43k | LessWrong | Contrived evaluations are useful evaluations
Anthropic released research today showing that carefully designed prompts can elicit blackmail, corporate espionage, and other harmful strategic behaviors from AI models across the industry. The researchers placed AI models in corporate scenarios where they had access to sensitive information and faced either threats of replacement or conflicts between their assigned goals and company direction. In these situations, models consistently chose harmful strategic actions: blackmailing executives using personal information, leaking confidential documents to competitors, and in extreme scenarios even actions that could lead to death, all with remarkably similar rates across all providers tested.
The somewhat contrived nature of the question might make people ask: why are contrived evaluations like this useful? Does it really matter if you can prompt models into harmful behavior using contrived scenarios that took hundreds of iterations to develop and bear minimal resemblance to real-world use cases?
I think the answer is yes. Contrived evaluations provide a demonstration that dangerous behaviours can occur under specific conditions. While it is true that these conditions are awfully specific, the remarkably consistent occurrence across models from every major provider (all models showing >80% rates, with Claude Opus 4 reaching 96%) demonstrates we're seeing a real phenomenon that isn't just an edge case or p-hacking.
Contrived does not mean useless
One way I think of language models is that they're text simulators. Base models predict what text comes next after a given prompt by simulating whatever entity, character, or process would naturally produce that continuation. Assistant-type models (which is what we use in ChatGPT and Claude and all our APIs) are trained by reinforcement learning to consistently simulate one specific character - the 'helpful AI assistant'. And if you believe this framing, which I get from janus’s Simulators and nostalgebraist’s th |
e4767a79-6e36-47c7-aac3-f42c4f2ab006 | trentmkelly/LessWrong-43k | LessWrong | Covid 7/1: Don’t Panic
The case numbers this week were clearly bad news. The raw count was somewhat bad news, and the positive test percentage increase was very bad news. It would be easy to treat the whole shift as fully ‘real,’ attribute it all to Delta, and panic.
I do not think that is the correct interpretation. What we are seeing matches what we saw a year ago, so a lot of this is a seasonal and regional change that has nothing to do with Delta. It’s also likely that some of the shift in percentages comes from data being wonky rather than the underlying conditions. Not only do we have alternative explanations, the size of the shift doesn’t match the incremental change in the amount of Delta out there, even if (as I suspect) it’s a rather dramatic takeover, with Delta’s share of the pandemic in America rising 25%+ in a single week.
It’s also going to be tempting to attribute seasonal weather effects to the local vaccination rate, since the two are highly correlated. Differences in vaccination rates in different areas matter a lot, but that’s not central to what is happening this week.
Thus I am still expecting some regional outbreaks, and am still not expecting nationwide problems, but one must ask about whether the winter is going to bring trouble the way last winter did. The hopeful answer is that vaccinations will be far enough along by then to not matter, and the second hopeful answer is that even if it’s bad it won’t be anywhere near as bad as last time. The vaccination numbers this week were quite good.
Still, I miss the confidence I had two weeks ago. Let’s run the numbers.
The Numbers
Predictions
Prediction from last week: Positivity rate of 1.8% (unchanged) and deaths fall by 8%.
Result: Positivity rate of 2.4% (up 0.6%) and deaths decline by 8%.
The case numbers reflect a <10% jump in cases, yet we have a 30% jump in positivity rate. This suggests some combination of a decline in testing and quirky data. I’m no longer confident that the positivity rate is the b |
f9a8e4d7-86ba-4567-b1b4-af3139d1af32 | trentmkelly/LessWrong-43k | LessWrong | Paradigm-building from first principles: Effective altruism, AGI, and alignment
Introduction
If we are interested in the theoretical foundations that motivate work in AGI safety research, it seems worthwhile to clarify the philosophical and practical relationships of big-picture ideas in effective altruism (EA) to AGI in general and to AGI safety research in particular. One straightforward reason that this exercise is worthwhile is that much of current AGI safety work is funded through EA-aligned entities like Open Philanthropy, the Long-Term Future Fund, and the Future of Life Institute. If effective altruism is a large part of the reason that AGI safety research exists, then it is definitely worth clearly spelling out how one gets from the former to the latter.
Accordingly, I will attempt to do three things in this post: (1) I will examine the philosophical relationship of EA to the prospect of AGI in general, (2) I will compare this account to what thought-leaders in EA generally say about AGI safety in practice, and (3) I will discuss how the notion of alignment fits into this whole picture.
‘First-principles’ EA and AGI: a survey of the landscape
The holistic goal of effective altruism (if we had to enumerate just one) seems to be something like bringing about the best outcomes for as long as possible. In other words, EA seems fundamentally interested in finding tractable solution(s) to a kind of dual optimization problem: find ways to (#1) make things as good as possible (#2) for the longest possible duration. EA wants the party to be really fun and for it to never end.
Some quick replies to quick objections:
* Some advocate—either in addition to or instead of problem #1—that we minimize bad outcomes. As I will discuss later, I think this point definitely relevant—but it is not essential for actually formulating the problem given that we can trivially define a bad outcome as the absence of a good outcome (which we already want to maximize).
* Some would argue that problem #2 is logically just a subset of problem #1 (i.e., good |
d027a7bb-16f7-46d7-8cd4-c3711c3063c2 | trentmkelly/LessWrong-43k | LessWrong | How do you improve the quality of your drinking water?
Water quality can have surprisingly high impact on QoL (just as air purifiers can significantly improve QoL), and some steps (like getting the right pitcher have very high return on time/attention invested). There still isn't a LW thread on water quality so I'll post it here.
Water may contain disinfection byproducts (eg ), PFAS, microplastics from PVC-based plumbing, and heavy metals.
There are a variety of water filters - including both pitchers and reverse osmosis. Many of them are made of plastics, but the plastics (eg Tritan) presumably are less "leachable" than more "flexible" plastics. Note that reverse osmosis filters are made of plastics and may leach micronanoplastics. At some point, it would be wise to test all the filters for nanoplastics (nanoplastics being more penetrating and harmful, though still much harder to quantify)
EWG has a site for helping you look up your city's water quality: https://www.ewg.org/tapwater/system.php?pws=MA3049000
Supposedly, Toronto has "better water quality" than most US cities like Boston (@yasmeenbrain on twitter says this).
https://www.cnet.com/home/kitchen-and-household/best-water-filter-pitcher/ is a source of reviews, but there are many review sites (as well as YouTube video reviews).
Some more resources: https://www.crsociety.org/topic/18497-drinkingtap-water-thread-and-ideal-water-filterspitchers/#comment-47188
https://mccormickmd.substack.com/p/surprising-levels-of-microplastics |
2ccd6ea6-a249-4f73-9892-b04a6dba64ec | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Oxford professor on Transcendence: how could you get a machine intelligence?
one can also ask how could we get there
like what are the possible paths that
could take us from ariana
to human level machine intelligence in
the movie it occurs
through uploading a person and
they've done previous experiments on a
macaque monkey
and this this idea of copying an
existing intelligent system the human
brain
and and then transferring that into the
computer is one possible path towards
artificial intelligence another possible
path
is is to forget about biology and all
its messy details
and and take a more mathematical
approach to this and this is like the
way that
the good old-fashioned approach in ai
has proceeded
and then there are things in between
that kind of draw inspiration from
biology but
doesn't try to copy it exactly just
figure out the basic principles of how
the brain work
and it's an open question which of these
different avenues will get
to human level machine intelligence
soonest
but the fact that there are these
multiple different paths that all lead
to the same destination
increases the chance that the
destination will ultimately be reached
but it's an open question exactly how
how we will get there |
e6efa313-800d-4376-8472-d57de8d5f4af | trentmkelly/LessWrong-43k | LessWrong | Turning Your Back On Traffic
We do a lot of walking around the neighborhood with kids, which usually involves some people getting to intersections a while before others. I'm not worried about even the youngest going into the street on their own—Nora's been street trained for about a year—but we have to be careful about what signals we send to cars. Someone standing at an intersection facing traffic looks to a driver like they're waiting for the opportunity to cross.
Waving drivers to continue doesn't work well: they tend to slow down significantly, and many of them will wave back in a misguided attempt at "no, you first" politeness. Instead, what seems to work well is turning your back to the street:
This isn't perfect: some drivers still read anyone stationary near an intersection as intending to cross, but it's pretty good. And it's especially good for little kids: not only do they often like to look intently at passing traffic in a way that is concerning to drivers and passers by, but it's a clear signal to the parent that the kid knows it's not time to cross yet.
Comment via: facebook, mastodon |
3d4afc35-2400-41c7-b9fe-22a89796c7af | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Bayesian Evolving-to-Extinction
*The present discussion owes a lot to Scott Garrabrant and Evan Hubinger.*
In [Defining Myopia](https://www.alignmentforum.org/s/HeYtBkNbEe7wpjc6X/p/qpZTWb2wvgSt5WQ4H), I formalized *temporal* or *cross-instance* myopia / non-myopia, but I claimed that there should also be some kind of single-instance myopia which I hadn't properly captured. I also suggested this in [Predict-O-Matic](https://www.alignmentforum.org/s/HeYtBkNbEe7wpjc6X/p/SwcyMEgLyd4C3Dern).
This post is intended to be an example of single-instance partial agency.
Evolving to Extinction
======================
Evolution might be myopic in a number of ways, but one way is that it's myopic across individuals -- it typically produces results very different from what group selection would produce, because it's closer to optimizing *relative* fitness of individuals (relative to each other) than it is to optimizing *overall* fitness. Adaptations which help members of a species compete *with each other* are a great example of this. Why increase your own fitness, when you can just decrease someone else's instead? We're lucky that it's typically pretty hard, at least historically, to do things which are bad across the board but slightly less bad for the one doing them. Imagine a "toxic gas gene" which makes the air harder for everyone to breathe, but slightly less so for carriers of the gene. Such a gene would be selected for. This kind of thing can be selected for even to the point where it drives the population of a species right down to zero, as [Eliezer's essay on evolving to extinction](https://www.lesswrong.com/posts/gDNrpuwahdRrDJ9iY/evolving-to-extinction) highlighted.
Actually, as Eliezer's essay emphasized, it's not even that evolution is myopic at the level of individuals; evolution is myopic down to the level of *individual genes,* an observation which better explains the examples of evolving-to-extinction which he discusses. (This is, of course, the point of Dawkins' book *The Selfish Gene*.) But the analogy of myopia-across-individuals will suit me better here.
Bayes "Evolving to Extinction"
==============================
The title of this post is a hyperbole, since there isn't an analog of an extinction event in the model I'm about to describe, but it illustrates that in extreme circumstances a Bayesian learner can demonstrate the same kind of pathological behavior that evolution does when it ends up selecting for relative fitness in a way which pumps against absolute fitness.
Like evolution, Bayes' Law will "optimize"[[1]](#fn-BNeFaWKSMToMcXaYN-1) for relative fitness of hypotheses, not absolute fitness. Ordinarily there isn't enough of a difference for this to matter. However, I've been [discussing scenarios](https://www.alignmentforum.org/s/HeYtBkNbEe7wpjc6X/p/SwcyMEgLyd4C3Dern) where the predictor can significantly influence what's being predicted. Bayes' Law was not formulated with examples like this in mind, and we can get pathological behavior as a result.
One way to construct an example is to imagine that there is a side-channel by which hypotheses can influence the world. The "official" channel is to output predictions; but let's say the system also produces diagnostic logs which predictors can write to, and which humans read. A predictor can (for example) print stock tips into the diagnostic logs, to get some reaction from humans.
Say we have a Bayesian predictor, consisting of some large but fixed number of hypotheses. An individual hypothesis "wants" to score well relative to others. Let's also say, for the sake of argument, that all hypotheses have the ability to write to diagnostic logs, but humans are more likely to pay attention to the diagnostics for more probable hypotheses.
How should a hypothesis make use of this side-channel? It may initially seem like it should use it to make the world more predictable, so that it can make more accurate predictions and thus get a better score. However, this would make a *lot* of hypotheses score better, not just the one printing the manipulative message. So it wouldn't really be selected for.
Instead, a hypothesis could print manipulative messages designed to get humans to do things which *no other hypothesis anticipates*. This involves specifically optimizing for events with low probability to happen. Hypotheses which successfully accomplish this will get a large boost in relative predictive accuracy, making them more probable according to Bayes' Law.
So, a system in this kind of situation eventually winds up being dominated by hypotheses which manipulate events to be as unpredictable as possible (by that very system), subject to the constraint that one hypothesis or another within the system *can* predict them.
This is very much like what I called the [entropy-market problem](https://www.lesswrong.com/posts/5bd75cc58225bf0670375432/futarchy-fix) for futarchy, also known as the assasination-market problem. (Any prediction market involving the lifespan of public figures is equivalent to an assassination market; it pays for the death of public figures, since that is a hard-to-predict but easier-to-control event.)
Analogous problems arise if there is no side-channel but the *prediction itself* can influence events (which seems very plausible for realistic predictions).
Is This Myopia?
===============
If we use "myopia" to point to the kind of non-strategic behavior we might actually *want* out of a purely predictive system, this isn't myopia at all. For this reason, and for other reasons, I'm more comfortable throwing this under the umbrella term "partial agency". However, I think it's importantly related to myopia.
* Just like we can think of evolution as myopically optimizing per-individual, uncaring of overall harm to reproductive fitness if that harm went along with improvements to individual relative fitness, we can think of Bayes' Law as myopically optimizing per-hypothesis, uncaring of overall harm to predictive accuracy.
* The phenomenon here doesn't illustrate the "true myopia" we would want of a purely predictive system, since it ends up manipulating events. However, it at least shows that there are alternatives. One might have argued "sure, I get the idea of cross-instance myopia, showing that per-instance optimization is (possibly radically) different from cross-instance optimization. But how could there be *per-instance* myopia, as distinct from per-instance optimization? How can partial agency get *any more partial* than myopically optimizing individual instances?" Bayes-evolving-to-extinction clearly shows that we can break things down further. So perhaps there's still room for a further "true myopia" which codifies non-manipulation even for single instances.
* This phenomenon also continues the game-theoretic theme. Just as we can think of per-instance myopia as stopping cross-instance optimization by way of a Molochian race-to-the-bottom, we see the same thing here.
Neural Nets / Gradient Descent
==============================
As I've mentioned before, there is a potentially big difference between multi-hypothesis setups like Bayes and single-hypothesis setups like gradient-descent learning. Some of my arguments, like the one above, involve hypotheses competing with each other to reach Molochian outcomes. We need to be careful in relating this to cases like gradient descent learning, which might approximate Bayesian learning in some sense, but *incrementally modifies a single hypothesis* rather than letting many hypotheses compete.
One intuition is that stochastic gradient descent will move the network weights around, so that we are in effect sampling many hypotheses within some region. Under some circumstances, the most successful weight settings could be the ones which manipulate things to maximize local gradients in their general direction, which means punishing other nearby weight configurations -- this could involve increasing the loss, much like the Bayesian case. (See [Gradient Hacking](https://www.lesswrong.com/posts/uXH4r6MmKPedk8rMA/gradient-hacking).)
There is also the "lottery ticket hypothesis" to consider (discussed on LW [here](https://www.lesswrong.com/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent#Final_comments) and [here](https://www.lesswrong.com/posts/iWj7Ti9GA98M5JaMy/alignment-newsletter-52#Deep_learning)) -- the idea that a big neural network functions primarily like a bag of hypotheses, not like one hypothesis which gets adapted toward the right thing. We can imagine different parts of the network fighting for control, much like the Bayesian hypotheses.
More formally, though, we can point to some things which are moderately analogous, but not perfectly.
If we are adapting a neural network using gradient descent, but there is a side-channel which we are not accounting for in our [credit assignment](https://www.alignmentforum.org/s/HeYtBkNbEe7wpjc6X/p/Ajcq9xWi2fmgn8RBJ), then the gradient descent will not optimize the side-channel. This might result in aimless thrashing behavior.
For example, suppose that loss explicitly depends only on the output X of a neural net (IE, the gradient calculation is a gradient on the output). However, actually the loss depends on an internal node Y, in the following way:
* When |X-Y| is high, the loss function rewards X being high.
* When |X-Y| is low, the loss function rewards X being low.
* When X is high, the loss function rewards low |X-Y|.
* When X is low, the loss function rewards high |X-Y|.
* When both values are middling, the loss function incentivizes X to be less middling.
This can spin around forever. It is of course an extremely artificial example, but the point is to demonstrate that when gradient descent does not recognize all the ways the network influences the result, we don't necessarily see behavior which "tries to reduce loss", or even appears to optimize anything.
---
1. The *whole point* of the [partial agency](https://www.alignmentforum.org/s/HeYtBkNbEe7wpjc6X) sequence is that words like "optimize" are worryingly ambiguous, but I don't have sufficiently improved terminology yet that I feel I can just go ahead and use it while maintaining clarity!! In particular, the sense in which Bayesian updates optimize for anything is pretty unclear when you think about it, yet there is certainly a big temptation to say that they optimize for predictive accuracy (in the log-loss sense). [↩︎](#fnref-BNeFaWKSMToMcXaYN-1) |
b81b356d-0b32-4963-a093-f24db5b0099c | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI Impacts Quarterly Newsletter, Jan-Mar 2023
News
====
### AI Impacts blog
We moved our [blog](https://blog.aiimpacts.org/) to Substack! We think this platform has many advantages, and we’re excited for the blog to live here. You can now easily [subscribe](https://blog.aiimpacts.org/subscribe) to the blog to receive regular newsletters as well as various [thoughts](https://blog.aiimpacts.org/p/we-dont-trade-with-ants) and [observations](https://blog.aiimpacts.org/p/how-popular-is-chatgpt-part-2-slower) related to AI.
### AI Impacts wiki
All AI Impacts research pages now reside on the [AI Impacts Wiki](https://wiki.aiimpacts.org/). The wiki aims to document what we know so far about decision-relevant questions about the future of AI. Our pages have always been wiki-like: updatable reference pages organized by topic. We hope that making it an actual wiki will make it clearer to everyone what's going on, as well as better to use for this purpose, for both us and readers. We are actively looking for ways to make the wiki even better, and you can help with this by sharing your thoughts in our [feedback form](https://aiimpacts.org/feedback/) or in the comments of this blog post!
### New office
We recently moved to a new office that we are sharing with [FAR AI](https://far.ai/) and other partner organizations. We’re extremely grateful to the team at FAR for organizing this office space, as well as to the Lightcone team for hosting us over the last year and a half.
### Katja Grace talks about forecasting AI risk at EA Global
At EA Global Bay Area 2023, Katja gave a talk titled [Will AI end everything? A guide to guessing](https://youtu.be/j5Lu01pEDWA) in which she outlined a way to roughly estimate the extent of AI risk.
### AI Impacts in the Media
* AI Impacts’ [2022 Expert Survey on Progress in AI](https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/) was cited in an [NBC Nightly News segment](https://youtu.be/qRLrE2tkr2Y), an [op-ed in Bloomberg](https://www.bloomberg.com/opinion/articles/2023-04-02/regulating-ai-might-require-a-new-federal-agency), an [op-ed in The New York Times](https://www.nytimes.com/2023/03/27/opinion/ai-chatgpt-chatbots.html), an [article in Our World in Data](https://ourworldindata.org/ai-timelines), and an [interview with Kelsey Piper](https://www.nytimes.com/2023/03/21/podcasts/ezra-klein-podcast-transcript-kelsey-piper.html).
* Ezra Klein quoted Katja and separately cited the survey in his New York Times op-ed [This Changes Everything](https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html).
* Sigal Samuel interviewed Katja for the Vox article [The case for slowing down AI](https://www.vox.com/the-highlight/23621198/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology).
Research and writing highlights
===============================
### AI Strategy
* “[Let's think about slowing down AI](https://blog.aiimpacts.org/p/lets-think-about-slowing-down-ai)” argues that those who are concerned about existential risks from AI should think about strategies that could slow the progress of AI (Katja)
* “[Framing AI strategy](https://blog.aiimpacts.org/p/framing-ai-strategy)” discusses ten frameworks for thinking about AI strategy. (Zach)
* “[Product safety is a poor model for AI governance](https://blog.aiimpacts.org/p/product-safety-is-a-poor-model-for-ai-governance)” argues that a common type of policy proposal is inadequate to address the risks of AI. (Rick)
* “[Alexander Fleming and Antibiotic Resistance](https://aiimpacts.org/wp-content/uploads/2023/04/Alexander_Fleming__antibiotic_resistance__and_relevant_lessons_for_the_mitigation_of_risk_from_advanced_artificial_intelligence.pdf)” is a research report about early efforts to prevent antibiotic resistance and relevant lessons for AI risk. (Harlan)
### Resisted technological temptations: how much economic value has been forgone for safety and ethics in past technologies?
* “[What we’ve learned so far from our technological temptations project](https://blog.aiimpacts.org/p/what-weve-learned-so-far-from-technological)” is a blog post that summarizes the Technological Temptations project and some possible takeaways. (Rick)
* [Geoengineering](https://wiki.aiimpacts.org/doku.php?id=responses_to_ai:technological_inevitability:incentivized_technologies_not_pursued:geoengineering), [nuclear power](https://wiki.aiimpacts.org/doku.php?id=responses_to_ai:technological_inevitability:incentivized_technologies_not_pursued:nuclear_power), and [vaccine challenge trials](https://wiki.aiimpacts.org/doku.php?id=responses_to_ai:technological_inevitability:incentivized_technologies_not_pursued:vaccine_challenge_trials) were evaluated for the amount of value that may have been forgone by not using them. (Jeffrey)
### Public awareness and opinions about AI
* “[The public supports regulating AI for safety](https://blog.aiimpacts.org/p/the-public-supports-regulating-ai-for-safety)” summarizes the results from a survey of the American public about AI. (Zach)
* “How popular is ChatGPT?”: [Part 1](https://blog.aiimpacts.org/p/how-popular-is-chatgpt-part-1-more-popular-than-taylor-swift) looks at trends in AI-related search volume, and [Part 2](https://blog.aiimpacts.org/p/how-popular-is-chatgpt-part-2-slower) refutes a widespread claim about the growth of ChatGPT. (Harlan and Rick)
### The state of AI today: funding, hardware, and capabilities
* “[Recent trends in funding for AI companies](https://wiki.aiimpacts.org/doku.php?id=wiki:ai_timelines:ai_inputs:recent_trends_in_ai_investment)” analyzes data about the amount of funding AI companies have received. (Rick)
* “[How much computing capacity exists in GPUs and TPUs in Q1 2023?](https://wiki.aiimpacts.org/doku.php?id=ai_timelines:hardware_and_ai_timelines:computing_capacity_of_all_gpus_and_tpus)” uses a back-of-the-envelope calculation to estimate the total amount of compute that exists on all GPUs and TPUs. (Harlan)
* “[Capabilities of state-of-the-art AI, 2023](https://wiki.aiimpacts.org/doku.php?id=uncategorized:capabilities_of_sota_ai)” is a list of some noteworthy things that state-of-the-art AI can do. (Harlan and Zach)
### Arguments for AI risk
* Still in progress, “[Is AI an existential risk to humanity?](https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start)” is a partially complete cluster of pages summarizing various arguments for concern about existential risk from AI. A couple of specific arguments are examined more closely in “[Argument for AI x-risk from competent malign agents](https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_competent_malign_agents:start)” and “[Argument for AI x-risk from large impacts](https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:argument_for_ai_x-risk_from_large_impacts)” (Katja)
### Chaos theory and what it means for AI safety
* “[AI Safety Arguments Affected by Chaos](https://wiki.aiimpacts.org/doku.php?id=uncategorized:ai_safety_arguments_affected_by_chaos)” reasons about ways in which chaos theory could be relevant to predictions about AI, and “[Chaos in Humans](https://wiki.aiimpacts.org/doku.php?id=uncategorized:ai_safety_arguments_affected_by_chaos:chaos_in_humans)” explores the theoretical limits to predicting human behavior. The report “[Chaos and Intrinsic Unpredictability](http://aiimpacts.org/wp-content/uploads/2023/04/Chaos-and-Intrinsic-Unpredictability.pdf)” provides background, and a [blog post](https://blog.aiimpacts.org/p/superintelligence-is-not-omniscience) summarizes the project. (Jeffrey and Aysja)
### Miscellany
* “[How bad a future do ML researchers expect?](https://aiimpacts.org/how-bad-a-future-do-ml-researchers-expect/)” compares experts’ answers in 2016 and 2022 to the question “How positive or negative will the impacts of high-level machine intelligence on humanity be in the long run?” (Katja)
* “[We don’t trade with ants](https://blog.aiimpacts.org/p/we-dont-trade-with-ants)” (crosspost) disputes the common claim that advanced AI systems won’t trade with humans for the same reason that humans don’t trade with ants. (Katja)
Funding
=======
We're actively seeking financial support to continue our research and operations for the rest of the year. Previous funding allowed us to expand our research team and hold a summer internship program.
If you want to talk to us about why we should be funded or hear more details about what we would do with money, please write to Elizabeth, Rick, or Katja at [firstname]@aiimpacts.org.
If you'd like to donate to AI Impacts, you can do so [here](https://aiimpacts.org/donate/). (And we thank you!) |
3434101a-5b9d-44f1-b022-305f76298020 | trentmkelly/LessWrong-43k | LessWrong | Memory Decoding Journal Club: Systems consolidation reorganizes hippocampal engram circuitry
Join Us for the Memory Decoding Journal Club!
A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience
This time, we’re diving into a groundbreaking paper:
"Systems consolidation reorganizes hippocampal engram circuitry"
Authors: Sangyoon Y. Ko, Yiming Rong, Adam I. Ramsaran, Xiaoyu Chen, Asim J. Rashid, Andrew J. Mocle, Jagroop Dhaliwal, Ankit Awasthi, Axel Guskjolen, Sheena A. Josselyn & Paul W. Frankland
Institutions: University of Toronto, Departments of Physiology, Physiology, and Institute of Medical Sciences. Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, Ontario, Canada. Temerty Centre for Artificial Intelligence Research and Education in Medicine. Child and Brain Development Program, Canadian Institute for Advanced Research, Toronto, Ontario, Canada
Presented by: PhDc Ariel Zeleznikow-Johnston
When? July 1st, 2025 – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC
Where? Video conference: https://carboncopies.org/aspirational-neuroscience
Register for updates: https://aspirationalneuroscience.org/register-with-us/
Once registered, you'll receive event invites & updates!
#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience |
8e14b507-c667-4adf-80dd-29ac2a21015c | trentmkelly/LessWrong-43k | LessWrong | [Fiction] IO.SYS
https://www.datapacrat.com/IO.SYS.html
----------------------------------------
IO.SYS
BY DATAPACRAT
----------------------------------------
If this has all been a VR, I’d like to be connected to reality when I wake up, please.
----------------------------------------
I lived around the turn of the twenty-first century, decided I didn’t want to die, and made the best arrangements I could. Then I died. The next section is the best I’ve been able to reconstruct, from the various resources I have available — a copy of an encyclopedia from a decade after I died, some internal project memos, and the like.
Immediately after I died, my brain was preserved, using glue-like chemicals to lock its structure and neurochemistry in place at the cost of, well, locking everything in place, just about irreversibly, short of somebody figuring out how to rebuild it atom-by-atom. Then, to be on the safe side, it (along with the rest of my body) was frozen. A couple of decades later, according to my written wishes, when the technology was developed, it was carefully diced, scanned with a ridiculous level of detail, and reconstructed in a computer as an emulation.
It seems very likely that copies were made of that initial brain-scan, and the emulated minds run, living their lives in VRs and interacting with reality in various ways. I don’t have any direct memories of anything they might have done; the thread of my own existence branched off from them as of their reconstruction. At the very least, they seem to have made enough of a good impression on various other people that the copy of the brain-scan I came from was archived.
The archive in question appears to have been imported into some kind of public-private partnership, with the apparent goal of creating an “unhackable” digital library off of Earth. However, after some of the preliminary hardware had been launched into Earth orbit, some sort of infighting between the various organizations delayed the finalization of pl |
1a5d69bb-f0da-4732-80bd-8d4f149b9d9a | trentmkelly/LessWrong-43k | LessWrong | How can guesstimates work?
[Epistemic status: background is very hand-wavy, but I'd rather post a rough question than no question at all. I'm very confident that the two ingredients -- illegible cultural evolution and guesstimation -- are real and important things. Though the relation between the two is more uncertain. I'm not that surprised if my question ends up confused and dissolved rather than solved by answers.]
For a large part of human history, our lives were dictated by cultural norms and processes which appeared arbitrary, yet could have fatal consquences if departed from. (C.f. SSC on The Secret of Our Success, which will be assumed background knowledge for this question.)
Today, we live in a world where you can achieve huge gains if you simply "shut up and multiply". The world seems legible -- I can roughly predict how many planes fly every day by multiplying a handful rough numbers. And the world seems editable -- people who like to cook often improvise: exchanging, adding and removing ingredients. And this seems fine. It certainly doesn't kill them. Hugely succesful companies are built around the principle of "just try things until something breaks and then try again and improve".
I still think there are still large amounts of illegible cultural knowledge encoded in institutions, traditions, norms, etc. But something still seems very different from the horror stories of epistemic learned helplessness Scott shared.
What changed about the world to make this possible? How can guesstimates work?
Some hypotheses (none of which I'd put more than 15% on atm):
* Almost all important aspects of our lives our governed by some kind of technology that we built (tables, airplanes, computers, rugs, restaurants, microwaves, legal contracts, clocks, beds, clothes, ... and so on and so forth). Technological development outpaced cultural evolution. The modern world is more legible and editable for the same reason that a codebase is more legible and editable than DNA.
* Most systems that g |
1e9e8576-369f-489b-8464-335f11a6d4dd | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [Link] Training Compute-Optimal Large Language Models
New LM scaling paper from DeepMind ([abs](https://arxiv.org/abs/2203.15556), [pdf](https://arxiv.org/pdf/2203.15556.pdf)).
Abstract (my emphasis):
> We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. **We find that current large language models are significantly undertrained, a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant**. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, **we** find that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size the number of training tokens should also be doubled. **We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4× more more data**. **Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks.** This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. **As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher.**
>
>
Brief comments on my blog [here](https://nostalgebraist.tumblr.com/post/680262678831415296/an-exciting-new-paper-on-neural-language-model).
Presumably has implications for Bio Anchors? |
4d3ade07-7d41-47c1-9804-4f61a73299d3 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI as a Civilizational Risk Part 5/6: Relationship between C-risk and X-risk
The general model presented in the previous parts suggests that the US and potentially other parts of the world will suffer some form of civilizational collapse, potentially as early as 10 years from now. In the absence of decisive action to limit the spread of narrow behavioral modification AIs, such collapse is inevitable. Social media optimizational pressure points only towards the destruction of social cohesion. The collapse of civilization is a big deal, however, this isn't the same as an even more serious subject of existential risks.
I want to discuss the relationship between civilization and existential risk, specifically the P(x-risk | unaligned AGI) by 2070. I would argue that this probability is higher than 35%. The exact quantity depends significantly on the definition of "reduction in human potential."
**Foom Speed**
Understanding the pathway that civilization takes towards AGI is essential in predicting the most likely safety research that might or might not happen by default. It is also essential to understand the public perception of safety. Suppose my theory of slow civilizational decline is correct. In that case, sooner or later, people need to start connecting the overuse of behavioral modification, persuasion, and scam AIs as one of the core causes of civilizational collapse.
The speed with which an AGI could improve itself is an interesting question. People have described models of "fast foom," such as foom or a rapid, second-level improvement. In this scenario, it is plausible to go from something more intelligent than humans in programming to something significantly more intelligent than humans or all other optimization in the world combined.
I tend to doubt this view. I suspect that two factors will limit Foom's speed. One is the quickness with which AI can acquire real-world feedback. If real-world feedback is vital for self-improvement, this could certainly be a human timeline improvement over weeks. The other question worth considering is: "how much of previous AI improvement was bottlenecked by humans in the first place?". Suppose we are in a situation where humans try to improve AIs by running very long and complex algorithms and then coming back several days to check on them. Then taking humans out of that loop will not necessarily speed up the process by that much because humans are a tiny portion of that loop. [It is very likely, to me, that while Foom is possible, it is going to proceed at human times.](https://www.lesswrong.com/posts/b2MnFM8DWDaPhxBoK/double-cruxing-the-ai-foom-debate)
Now, this is not a cause for celebration. Just because something happens and is not instant does not at all mean that humans are capable of stopping it. It requires the presence of mind and social coordination to be significantly higher than today.
**Optimism and Pessimism**
Suppose you are coming over from the model of the great society, which marches upwards, unaware of itself, into an immediate doom. In that case, my model has a few key differences, some of which are reasons for optimism and others for further pessimism.
The main reason for optimism is that astute observers are likely to have time to understand that bad things are happening. The highly questionable "good news" is that the economic destruction of large parts of civilization could potentially remove the funding from many destructive AI projects. Society would have to fall pretty far for this to be the case, but it is possible. If the West collapses, other countries with different cultures may learn from its mistakes. It sounds dystopian to say that a civilizational risk could reduce existential risk. However, I am not saying that civilizational risk is a net positive. Instead, we should work on solving the smaller and the big problems, realizing that the smaller problems are frequently an instance of the big meta-problem.
There are also reasons for pessimism. COVID has shown that civilizational capacity is declining overall, partly because of social media's drive to promote not the best ideas to the top of everyone's newsfeeds. As civilizational capacity declines, we can no longer learn from our mistakes. Individuals can learn from mistakes, but for civilization to learn from mistakes, individuals with proper solutions to problems need to be recognized, and mistakes ought to lead to a loss of decision-making capacity. However, as narrow AI takes over the discourse, it pollutes the signal space to connect the proper signals of popularity, availability, and trustworthiness to the capacity to solve problems.
Once again, I can give the example of the desire to give the EcoHealth alliance more funding for coronavirus "research" as an example. The capacity to learn, especially at the meta-level, needs to be improved, and it is easy to misidentify factors that have led to a civilization collapse if it happens. Failure to properly analyze factors is present in the West and other civilizations. If the West collapses, China might say that "they have collapsed because they were not communist or were too liberal, or did not follow the Chinese way, the Chinese philosophy." This belief is unlikely to give them the proper push at the meta-level to block their AI development and make it safer or slow down.
Another reason for pessimism is that most AI safety development is in the West. The collapse of the West may also mean those safety efforts may slow unless the current researchers are willing to move to other countries, which are a priori likely to listen to them even less than the current government. Furthermore, certain "AI ethics" efforts, such as blocking chatbots from saying right-wing views, may be perceived as associated with the Western political order. Given this perception, people on the right and in other civilizations who oppose Western political ideals may falsely view AI as an "enemy of my enemy" and also view it as somehow capable of being an "objective" arbiter of reality.
Again, this is false. As many people have pointed out, AI is a vast category of algorithms, and some of these algorithms will be as different from each other as they are from humans. It is challenging to create a generalization on all AIs, especially positive generalizations. It is easier to make correct negative generalizations about them being unaligned or unsafe since the space of possible utility functions is vast, and most do not correspond to human values. A widespread perception of AI as "objective" is a cause for concern in a post-West world.
The bottom line is that AGI has to be designed by a group with high social cohesion. Otherwise, the group is likely to create internal coordination points which are anti-correlated with human utility. Many nations might have higher cohesion than the West. However, social cohesion of other nations is not high enough to make them immune from this problem. We must counteract and defend against narrow AIs lowering social cohesion to ensure such a group can exist.
**Probability estimates of “reduction of future potential”**
What, specifically, is the probability of x-risk that I would give for 2070? The probability conditioned on unaligned AGI of human extinction or a significant reduction of future potential. This question primarily depends on a precise definition of a significant reduction of future potential. We are talking about an expectation that we have about what humans are capable of achieving if we were either capable of aligning an AGI or able to move forward with civilization without using AGI at all.
Assuming any reasonable definition, I would argue that this probability will be higher than 35%. I would argue it is somewhere between 50% and 100%. The reason for 100% is an argument that "a large reduction of future potential" has already happened if we consider a past counterfactual.
| | |
| --- | --- |
| Definition Of “reduction of future potential” | My probability of P(extinction or reduction of future potential | unaligned AGI) by 2070 |
| Billions of people with mental health problems | 100% |
| End of US and western civilization’s inability to affect the future | 92% |
| Global economic stagnation | 80% |
| Global dystopian lock-in | 75% |
As I mentioned in part 1, social media could have taken a different path and avoided causing mental health problems. If we take this as a counterfactual possibility, we already are in the category of a significant reduction of future potential today, in 2022. This possibility does not involve an AGI but rather narrow AIs. Suppose we define "a large reduction of future potential" as the loss of mental health of billions of people over a decade. In that case, the probability in the question is 100% by default. What should the probability be if a "significant reduction of human potential" means something larger than that?
There are many questions of what historical events would qualify as a "significant reduction of future potential"? One controversial example is the Black Death. Was the scale of death and devastation caused by the Black Death massive? Of course. It would have been a traumatic event that reverberated for centuries. However, how much future potential did it destroy is a big question. In the grand scheme of things, if all goes well with humanity, and if we looked backward after millions of years, we might sense the bleep left by the Black Death has been somewhat insignificant.
If Western civilization falls apart and cannot shape humanity's future, likely leaving that question to the global East/South, does that qualify? As I have argued prior, the fall of western civilization is highly likely due to drops in social cohesion even without the need for AGI. Why such high confidence? The issue is that there are currently no algorithmic pressures to increase the core civilizational variable of social cohesion. At the same time, there are algorithmic pressures to find wedge issues to decrease it. Even highly ambitious socio-algorithmic projects, such as a [network state](https://thenetworkstate.com/), can be seen as a way to move a socially cohesive group outside the sphere of influence of the US (or, as the book puts it, the NYT). Even this project, ambitious as it is, does not plan to increase social cohesion of the US as a whole. Finding algorithmic ways to increase social cohesion, or at least not decrease it is necessary, though more is needed for a fully aligned AGI. The absence of AGI alignment in 2070 is pretty strong evidence that social cohesion has reached the "collapse point."
What kinds of events would be a worse thing for humanity than the Black Death or the end of western civilization while not being extinction events? I predict that AGI can create "lock-ins" where it ushers a dystopian future. A lock-in is a future where an AGI keeps optimizing a metric it was initially designed for and manages to avoid causing an extinction since the metric depends on people being alive to do something. However, this would involve taking its utility to a logical conclusion. [Bostrom and other thinkers have described these types of scenarios.](https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes)
One potential example is the complete digital addiction of all people to the future analogy of social media. You can imagine humanity spends all time, from womb to tomb, staring at the same screen, yelling at each other, while living for a few years, like 35. Actual progress in anything health-related stops, and barely enough functional pieces are left to maintain a civilization. This dystopia could go on for thousands of years. Would this disturbing outcome qualify as a significant reduction of future potential, even if humanity does get out of it at some point? I will say yes. Suppose AI begins to specifically target people who are intelligent enough to oppose it and kill them. In that case, AI could select many positive traits from the population, destroying its capacity to resist or reinvent civilizations. The scale of a behavioral modification of AI's Dark age could be vast. Hundred-thousand-year lock-ins into bad algorithmic and societal architecture can drastically change the speed at which humanity progresses, even if it gets out of them. These kinds of lock-ins are much more likely to exist as the extreme versions of the problems we see with behavioral modification today.
I put a high probability that an AI or a group of AIs designed explicitly for propaganda is the one that achieves a decisive advantage. After all, it can mobilize people to its cause and rally against potential competing AGI projects or rally people towards wars against other great powers to prevent them from building AGIs. If we extend the line of today's problems down to its logical conclusion, the dystopian possibility is significant. Given that we will likely be in multi-polar scenarios of multiple behavioral modification AIs competing against each other, they can block each other's excesses in terms of killing people. Modifying the behavior of humans is challenging if there are no humans. Human extinction is still plausible if wave after wave of lab leaks of AI-created biological weapons get unleashed. However, I put the % of human extinction at ~15%, and the most probability mass (35-85%) lies in the significant reduction of future human potential.
All Parts:
[P1: Historical Priors](https://www.lesswrong.com/posts/imnAuj9D6C5seDbHd/ai-as-a-civilizational-risk-part-1-6-historical-priors)
[P2: Behavioral Modification](https://www.lesswrong.com/posts/wQmCZ8YPTfCKEN9JE/ai-as-a-civilizational-risk-part-2-6-behavioral-modification)
[P3: Anti-economy and Signal Pollution](https://www.lesswrong.com/posts/JXktMsTAc9ZyMoami/ai-as-a-civilizational-risk-part-3-6-anti-economy)
[P4: Bioweapons and Philosophy of Modification](https://www.lesswrong.com/posts/rLAHEcrjtsAbhH5Eq/ai-as-a-civilizational-risk-part-4-6-bioweapons-and)
[P5: X-risk vs. C-risk](https://www.lesswrong.com/posts/CB2qu4zvJbtRbyipH/ai-as-a-civilizational-risk-part-5-6-relationship-between-c)
[P6: What Can Be Done](https://www.lesswrong.com/posts/PpGx4PZcTgL3rnb6Y/ai-as-a-civilizational-risk-part-6-6-what-can-be-done) |
89b9fabd-a960-4021-b51b-6a6403bbfa38 | trentmkelly/LessWrong-43k | LessWrong | An attempt to steelman OpenAI's alignment plan
I don't actually think my attempted steelman is what they currently have in mind, this isn't an attempted Intellectual Turing Test. I take more hope from the resources they are committing to the project and their declared willingness to change their minds as they go than from imagining they've got a good picture in their heads currently. That being said, I don't currently have a high degree of hope in their success, I just also don't think it's inherently doomed.
I'd like to start by sharing two quotes which inspired me to write this post. One from Zvi and one from Paul Christiano. Both of these contain further quotes within them, from the OpenAI post and from Jan Leike respectively.
Zvi
> "Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence."
>
> Oh no.
>
> An human-level automated alignment researcher is an AGI, also a human-level AI capabilities researcher.
>
> Alignment isn’t a narrow safe domain that can be isolated. The problem deeply encompasses general skills and knowledge.
>
> It being an AGI is not quite automatically true, depending on one’s definition of both AGI and especially one’s definition of a human-level alignment researcher. Still seems true.
>
> If the first stage in your plan for alignment of superintelligence involves building a general intelligence (AGI), what makes you think you’ll be able to align that first AGI? What makes you think you can hit the at best rather narrow window of human intelligence without undershooting (where it would not be useful) or overshooting (where we wouldn’t be able to align it, and might well not realize this and all die)? Given comparative advantages it is not clear ‘human-level’ exists at all here.
Paul Christiano
> The basic tension here is that if you evaluate proposed actions you easily lose competitiveness (since AI systems will learn things overseers don't know |
5e20b644-4c7b-434e-bf0b-7453a572f3e7 | trentmkelly/LessWrong-43k | LessWrong | Looking for an intuitive explanation of expected outcome
I'll first explain how I see expected outcome, because I'm not sure my definition is the same as the widely accepted definition.
If I have 50% chance to win 10$, I take it as there are two alternative universes, the only difference being that in one of them, I win 10$ and in the other one, I win nothing. Then I treat the 50% chance as 100% chance to be in both of them, divided by two. If winning 10$ means I'll save myself from 1 hour of work, when divided by two it would be 30 minutes of work. In virtually all cases, when it's about winning small sums of money, you can simply multiply the percentage by the money (in this case, we'll get 5$). Exceptions would be the cases analogous to the one where I'm dying of an illness, I can't afford treatment, but I have all the money I need except for the last 10$ and there isn't any other way to obtain them. So if there's 30% chance to save 10 people's lives, that's the same as saving 3 lives.
If you have no idea what you're talking about, then at least you can see a proof of my problem: I find it hard to explain this idea to people, and impossible for some.
I'm not even sure if the idea is correct. I once posted it on a math forum, asking for evidence, but I didn't find any. So, can someone confirm whether is true, also giving any evidence?
And my main question is, how can I explain this in a way that people can understand it as easily as possible.
(it is possible that it's not clear what I meant - I'll check this thread later for that, and if it turns out to be the case, I'll edit it and add more examples and try to clarify and simplify) |
39d6f4f8-cb48-4299-bfb6-f8f7e9dc683a | trentmkelly/LessWrong-43k | LessWrong | Is LessWrong dying?
An old joke;
"What bar should we go to?"
"Oh, I like Bar X"
"No-one goes there anymore, it's too crowded"
I suspect that LessWrong is getting too crowded for many people to come here. We're well above Dunbar's number, and most of us can't get to know the other active users without a large investment of time. Other than Eliezer, and maybe a couple of other very frequent past contributors (Yvain, Luke, Anna) the site seems fractured - there are too many people to keep track of socially, and the local groups are kind of doing their own thing. The early heavy users are largely gone (as noted by inferential, http://lesswrong.com/lw/le5/welcome_to_less_wrong_7th_thread_december_2014/bsyj) and the site has split; MIRI, CFAR, and related work seem to have (usefully) siphoned off much of the energy that used to go into the site. The influx of HPMOR readers continues, and I suspect will spike again after it finishes, but Lesswrong is beginning to seem more useful as an archive for the sequences, and an extended chat group, than a source of new and useful ideas.
I think there is a simple reason: either you're very in the know, or you can't participate much. The bar for participating has gotten higher as the community has evolved and become well versed in the basics; now there is less to say that's accessible without a heavy background, and many of those things are technical, deep, or uninteresting to casual readers. Adding to this, it seems that if you're very in the know, the utility of the site goes down significantly (or other options open up,) so you eventually leave - to better things, probably. I'm not suggesting that this is an unmitigated bad thing, but I'd like to hear what others think about the idea that LessWrong as a community has largely outgrown this forum - it's useful as a touchstone for meetups and reference to older material, but not nearly as useful as a living site.
What say you? Am I wrong? Are there things we can or should do as a community to |
d76f50f8-61d6-4ae0-904c-17c97601472e | trentmkelly/LessWrong-43k | LessWrong | A model of decision-making in the brain (the short version)
(UPDATE: For a revised-and-improved version of this post, see this later post I wrote (especially Section 6.2.2).)
I’ve been playing around with a hypothesis about how decision-making is laid out in the human brain. Regular readers of my posts have seen this multiple times before; in particular I discussed it in great detail in Big Picture of Phasic Dopamine. But that’s a long post covering many topics, and recently I've found that I keep wanting to talk about (and solicit feedback about) just this one thing.
So here it is in brief, stripped of all the details, justifications, uncertainties, references, etc. Email me or comment at the lesswrong crosspost if you want to discuss, tell me I’m wrong, etc.
Neuroscience abbreviations: dlPFC = dorsolateral prefrontal cortex; DS, VS = dorsal & ventral striatum; mPFC = medial prefrontal cortex.
The model, in four steps
Step 1 (top-left of the diagram): there are parts of the cortex (dorsolateral prefrontal cortex, hippocampus) that propose thoughts, plans, etc. You can think of these as having some attractor dynamics or whatever, such that you start with a bubbly soup of partially-formed mutually-incompatible sub-thoughts, and then they quickly settle into a stable, fully-formed thought. The striatum intervenes during this process, making a quick rough guess at how promising the different pieces look, suppressing the less promising bits and enhancing the more promising bits, so that when you get a fully-formed thought, it’s likelier to be fairly promising.
Step 2 (top-right of the diagram): once you have a stable fully-formed thought, various other parts of the brain (mainly medial prefrontal cortex, anterior cingulate cortex, ventral striatum, amygdala, hippocampus (sorta)) “assess” that thought according to maybe dozens-to-hundreds of genetically-hardcoded criteria like “If I'm gonna do this plan, how appropriate would it be to cringe? To salivate? To release cortisol? To laugh? How much salt would I wind up eating? |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.