id | source | formatted_source | text |
|---|---|---|---|
0eba637d-4db4-4383-9855-527b176e5926 | trentmkelly/LessWrong-43k | LessWrong | The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists
[NOTE: This paper was previously titled 'The Shutdown Problem: Three Theorems'.]
This paper is an updated version of the first half of my AI Alignment Awards contest entry. My theorems build on the theorems of Soares, Fallenstein, Yudkowsky, and Armstrong in various ways.[1] These theorems can guide our search for solutions to the shutdown problem.[2]
One aim of the paper is to get academic philosophers and decision theorists interested in the shutdown problem and related topics in AI alignment. They’re my assumed audience. I’m posting here because I think the theorems will also be interesting to people already familiar with the shutdown problem.
For discussion and feedback, I thank Adam Bales, Ryan Carey, Bill D’Alessandro, Tomi Francis, Vera Gahlen, Dan Hendrycks, Cameron Domenico Kirk-Giannini, Jojo Lee, Andreas Mogensen, Sami Petersen, Rio Popper, Brad Saad, Nate Soares, Rhys Southan, Christian Tarsney, Teru Thomas, John Wentworth, Tim L. Williamson, and Keith Wynroe.
Abstract
I explain and motivate the shutdown problem: the problem of designing artificial agents that (1) shut down when a shutdown button is pressed, (2) don’t try to prevent or cause the pressing of the shutdown button, and (3) otherwise pursue goals competently. I prove three theorems that make the difficulty precise. These theorems suggest that agents satisfying some innocuous-seeming conditions will often try to prevent or cause the pressing of the shutdown button, even in cases where it’s costly to do so. I end by noting that these theorems can guide our search for solutions to the problem.
0. Preamble
Tradition has it that decision theory splits into two branches. The descriptive branch concerns how actual agents behave. The normative branch concerns how rational agents behave. But there is also a lesser-known third branch: what we can call ‘constructive decision theory.’ It concerns how we want artificial agents to behave and how we can create artificial agents that behave in those w |
7fe36db2-df2c-4eae-8fe9-b1e55a200b18 | trentmkelly/LessWrong-43k | LessWrong | Epistemic Viciousness
Someone deserves a large hattip for this, but I'm having trouble remembering who; my records don't seem to show any email or OB comment which told me of this 12-page essay, "Epistemic Viciousness in the Martial Arts" by Gillian Russell. Maybe Anna Salamon?
> We all lined up in our ties and sensible shoes (this was England) and copied him—left, right, left, right—and afterwards he told us that if we practised in the air with sufficient devotion for three years, then we would be able to use our punches to kill a bull with one blow.
> I worshipped Mr Howard (though I would sooner have died than told him that) and so, as a skinny, eleven-year-old girl, I came to believe that if I practised, I would be able to kill a bull with one blow by the time I was fourteen.
> This essay is about epistemic viciousness in the martial arts, and this story illustrates just that. Though the word ‘viciousness’ normally suggests deliberate cruelty and violence, I will be using it here with the more old-fashioned meaning, possessing of vices.
It all generalizes amazingly. To summarize some of the key observations for how epistemic viciousness arises:
* The art, the dojo, and the sensei are seen as sacred. "Having red toe-nails in the dojo is like going to church in a mini-skirt and halter-top... The students of other martial arts are talked about like they are practicing the wrong religion."
* If your teacher takes you aside and teaches you a special move and you practice it for 20 years, you have a large emotional investment in it, and you'll want to discard any incoming evidence against the move.
* Incoming students don't have much choice: a martial art can't be learned from a book, so they have to trust the teacher.
* Deference to famous historical masters. "Runners think that the contemporary staff of Runner's World know more about running than all the ancient Greeks put together. And it's not just running, or other physical activities, where history |
b34132ac-8f6e-46a2-a94c-bf0c1392e1fc | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Some cruxes on impactful alternatives to AI policy work
Ben Pace and I (Richard Ngo) recently did a public double crux at the Berkeley REACH on how valuable it is for people to go into AI policy and strategy work: I was optimistic and Ben was pessimistic. During the actual event, we didn't come anywhere near to finding a double crux on that issue. But after a lot of subsequent discussion, we've come up with some more general cruxes about where impact comes from.
I found Ben's model of how to have impact very interesting, and so in this post I've tried to explain it, along with my disagreements. Ben liked the goal of writing up a rough summary of our positions and having further discussion in the comments, so while he edited it somewhat he doesn’t at all think that it’s a perfect argument, and it’s not what he’d write if he spent 10 hours on it. He endorsed the wording of the cruxes as broadly accurate.
(During the double crux, we also discussed how the heavy-tailed worldview applies to community building, but decided on this post to focus on the object level of what impact looks like.)
Note from Ben: “I am not an expert in policy, and have not put more than about 20-30 hours of thought into it total as a career path. But, as I recently heard Robin Hanson say, there’s a common situation that looks like this: some people have a shiny idea that they think about a great deal and work through the details of, that folks in other areas are skeptical of given their particular models of how the world works. Even though the skeptics have less detail, it can be useful to publicly say precisely why they’re skeptical.
In this case I’m often skeptical when folks tell me they’re working to reduce x-risk by focusing on policy. Folks doing policy work in AI might be right, and I might be wrong, but it seemed like a good use of time to start a discussion with Richard about how I was thinking about it and what would change my mind. If the following discussion causes me to change my mind on this question, I’ll be really super happy with it.”
Ben's model: Life in a heavy-tailed world
-----------------------------------------
A [heavy-tailed distribution](https://en.wikipedia.org/wiki/Heavy-tailed_distribution) is one where the probability of extreme outcomes doesn’t drop very rapidly, meaning that outliers therefore dominate the expectation of the distribution. Owen Cotton-Barratt has written a brief explanation of the idea [here](https://www.effectivealtruism.org/articles/prospecting-for-gold-owen-cotton-barratt/#heavy-tailed-distributions). Examples of heavy-tailed distributions include the Pareto distribution and the log-normal distribution; other phrases people use to point at this concept include ‘power laws’ (see [Zero to One](https://www.amazon.co.uk/Zero-One-Notes-Start-Future/dp/0753555204/ref=sr_1_1?ie=UTF8&qid=1538077169&sr=8-1&keywords=zero+to+one)) and ‘black swans’ (see the recent [SSC book review](http://slatestarcodex.com/2018/09/19/book-review-the-black-swan/)). Wealth is a heavy-tailed distribution, because many people are clustered relatively near the median, but the wealthiest people are millions of times further away. Human height and weight and running speed are not heavy-tailed; there is no man as tall as 100 people.
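To make "outliers dominate the expectation" concrete, here is a minimal sketch (not from the original post; the distributions and parameters are arbitrary illustrations):

```python
import random

random.seed(0)

# Thin-tailed vs heavy-tailed: compare what share of the total
# the single largest of 10,000 draws accounts for.
normal_draws = [random.gauss(100, 15) for _ in range(10_000)]      # height-like
pareto_draws = [random.paretovariate(1.1) for _ in range(10_000)]  # wealth-like

for name, draws in (("normal", normal_draws), ("pareto", pareto_draws)):
    print(f"{name}: largest draw = {max(draws) / sum(draws):.2%} of total")
```

In the normal case the largest draw is a negligible fraction of the sum; in the Pareto case a single draw can account for a double-digit share, which is the sense in which the tail drives the expectation.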
There are three key claims that make up Ben's view.
**The first claim is that, since the industrial revolution, we live in a world where the impact that small groups can have is much more heavy-tailed than in the past.**
* People can affect incredibly large numbers of other people worldwide. The Internet is an example of a revolutionary development which allows this to happen very quickly.
* Startups are becoming unicorns unprecedentedly quickly, and their valuations are very heavily skewed.
* The impact of global health interventions is heavy-tailed. So is funding raised by Effective Altruism - two donors have contributed more money than everyone else combined.
* Google and Wikipedia qualitatively changed how people access knowledge; people don't need to argue about verifiable facts any more.
* Facebook qualitatively changed how people interact with each other (e.g. FB events is a crucial tool for most local EA groups), and can swing elections.
* It's not just that we got more extreme versions of the same things, but rather that we can get unforeseen types of outcomes.
* The books *HPMOR* and *Superintelligence* both led to mass changes in plans towards more effective ends via the efforts of individuals and small groups.
**The second claim is that you should put significant effort into re-orienting yourself to use high-variance strategies.**
* Ben thinks that recommending strategies which are *safe* and *low-risk* is insane when pulling out of a heavy-tailed distribution. You want everyone to be taking high-variance strategies.
+ This is only true if the tails are long to the right and not to the left, which seems true to Ben. Most projects tend to end up not pulling any useful levers whatsoever and just doing nothing, but a few pull crucial levers and solve open problems or increase capacity for coordination.
* Your intuitions were built for the ancestral environment where you didn’t need to be able to think about coordinating humans on the scale of millions or billions, and yet you still rely heavily on the intuitions you’re built with in navigating the modern environment.
* [Scope insensitivity](https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity), [framing effects](https://www.lesswrong.com/posts/Nx2WxEuPSvNBGuYpo/feeling-moral), [taboo tradeoffs](http://www.overcomingbias.com/2017/12/automatic-norms.html), and [risk aversion](https://rationalaltruist.com/2013/02/28/risk-aversion-and-investment-for-altruists/), are the key things here. You need to learn to train your S1 to understand *math*.
+ By default, you’re not going to spend enough effort finding or executing high-variance strategies.
* We're still only 20 years into the internet era. Things keep changing qualitatively, but Ben feels like everyone keeps adjusting to the new technology as if it were always this way.
* Ben: “My straw model of the vast majority of people’s attitudes is: I guess Facebook and Twitter are just things now. I won’t spend time thinking about whether I could build a platform as successful as those two but optimised better for e.g. intellectual progress or social coordination - basically not just money.”
* Ben: “I do note that never in history has change been happening so quickly, so it makes sense that people’s intuitions are off.”
* While many institutions have been redesigned to fit the internet, Ben feels like almost nobody is trying to improve institutions like science on a large scale, and that this is clear low-hanging altruistic fruit.
* The Open Philanthropy Project has gone through this process of updating away from safe, low-risk bets with GiveWell, toward [hits-based giving](https://www.openphilanthropy.org/blog/hits-based-giving), which is an example of this kind of move.
**The third claim is that AI policy is not a good place to get big wins nor to learn the relevant mindset.**
* Ben: “On a first glance, governments, politics and policy looks like the sort of place where I would not expect to find highly exploitable strategies, nor a place that will teach me the sorts of thinking that will help me find them in future.”
* People in policy spend a lot of time thinking about how to influence governments. But governments are generally too conventional and slow to reap the benefits of weird actions with extreme outcomes.
* Working in policy doesn't cultivate the right type of thinking. You're usually in a conventional governmental (or academic) environment, stuck inside the system, getting seduced by local incentive gradients and prestige hierarchies. You often need to spend a long time working your way to positions of actual importance in the government, which leaves you prone to value drift or over-specialisation in the wrong thing.
+ At the very least, you have to operate on the local incentives as well as someone who actually cares about them, which can be damaging to one’s ability to think clearly.
* Political landscapes are not the sort of environment where people can easily ignore the local social incentives to focus on long-term, global goals. Short term thinking (election cycles, media coverage, etc) is not the sort of thinking that lets you build a new institution over 10 years or more.
+ Ben: “When I’ve talked to senior political people, I’ve often heard things of the sort ‘We were working on a big strategy to improve infrastructure / international aid / tech policy etc, but then suddenly public approval changed and then we couldn’t make headway / our party wasn’t in power / etc.’ which makes me think long term planning is strongly disincentivised.”
* One lesson of a heavy-tailed world is that signals that you’re taking safe bets are *anti-signals* of value. Many people following a standard academic track saying “Yeah, I’m gonna get a masters in public policy” sounds *fine*, *sensible, and safe*, and therefore *cannot* be an active sign that you will do something a million times more impactful than the median.
The above is not a full, gears-level analysis of how to find and exploit a heavy tail, because almost all of the work here lies in identifying the particular strategy. Nevertheless, because of the considerations above, Ben thinks that talented, agenty and rational people should be able in many cases to identify places to win, and then execute those plans, and that this is much less the case in policy.
Richard's model: Business (mostly) as usual
-------------------------------------------
I disagree with Ben on all three points above, to varying degrees.
On the first point, I agree that the distribution of success has become much more heavy-tailed since the industrial revolution. However, I think the distribution of success is often very different from the distribution of impact, because of replacement effects. If Facebook hadn't become the leading social network, then MySpace would have. If not Google, then Yahoo. If not Newton, then Leibniz (and if Newton, then Leibniz anyway). Probably the alternatives would have been somewhat worse, but not significantly so (and if they were, different competitors would have come along). The distinguishing trait of modernity is that even a small difference in quality can lead to a huge difference in earnings, via network effects and global markets. But that isn't particularly interesting from an x-risk perspective, because money isn't anywhere near being our main bottleneck.
You might think that since Facebook has billions of users, their executives are a small group with a huge amount of power, but I claim that they're much more constrained by competitive pressures than they seem. Their success depends on the loyalty of their users, but the bigger they are, the easier it is for them to seem untrustworthy. They also need to be particularly careful since antitrust cases have busted the dominance of several massive tech companies before. (While they could swing a few elections before being heavily punished, I don’t think this is unique to the internet age - a small cabal of newspaper owners could probably have done the same centuries ago). Similarly, I think the founders of Wikipedia actually had fairly little counterfactual impact, and currently have fairly little power, because they're reliant on editors who are committed to impartiality.
What we should be more interested in is cases where small groups didn't just ride a trend, but actually created or significantly boosted it. Even in those cases, though, there's a big difference between success and impact. Lots of people have become very rich from shuffling around financial products or ad space in novel ways. But if we look at the last fifty years overall, they're far from dominated by extreme transformative events - in fact, Western societies have changed very little in most ways. Apart from IT, our technology remains roughly the same, our physical surroundings are pretty similar, and our standards of living have stayed flat or even dropped slightly. (This is a version of Tyler Cowen and Peter Thiel's views; for a better articulation, I recommend *The Great Stagnation* or *The Complacent Class*.) Well, isn't IT enough to make up for that? I think it will be eventually, as AI develops, but right now most of the time spent on the internet is wasted. I don't think current IT has had much of an effect by standard metrics of labour productivity, for example.
**Should you pivot?**
Ben might claim that this is because few people have been optimising hard for positive impact using high-variance strategies. While I agree to some extent, I also think that there are pretty strong incentives to have impact regardless. We're in the sort of startup economy where scale comes first and monetisation comes second, and so entrepreneurs already strive to create products which influence millions of people even when there’s no clear way to profit from them. And entrepreneurs are definitely no strangers to high-variance strategies, so I expect most approaches to large-scale influence to already have been tried.
On the other hand, I do think that reducing existential risk is an area where a small group of people are managing to have a large influence, a claim which seems to contrast with the assertion above. I’m not entirely sure how to resolve this tension, but I’ve been thinking lately about an analogy from finance. [Here's Tyler Cowen](https://medium.com/conversations-with-tyler/nate-silver-conversations-with-tyler-1bdafe685d77):
> I see a lot of money managers, so there’s Ray Dalio at Bridgewater. He saw one basic point about real interest rates, made billions off of that over a great run. Now it’s not obvious he and his team knew any better than anyone else.
> Peter Lynch, he had fantastic insights into consumer products. Use stuff, see how you like it, buy that stock. He believed that in an age when consumer product stocks were taking off.
> Warren Buffett, a certain kind of value investing. Worked great for a while, no big success, a lot of big failures in recent times.
The analogy isn’t perfect, but the idea I want to extract is something like: once you’ve identified a winning strategy or idea, you can achieve great things by exploiting it - but this shouldn’t be taken as strong evidence that you can do exceptional things in general. For example, having a certain type of personality and being a fan of science fiction is very useful in identifying x-risk as a priority, but not very useful in founding a successful startup. Similarly, being a philosopher is very useful in identifying that helping the global poor is morally important, but not very useful in figuring out how to solve systemic poverty.
From this mindset, instead of looking for big wins like “improving intellectual coordination”, we should be looking for things which are easy conditional on existential risk actually being important, and conditional on the particular skillsets of x-risk reduction advocates. Another way of thinking about this is as a distinction between high-impact goals and high-variance strategies: once you’ve identified a high-impact goal, you can pursue it without using high-variance strategies. Startup X may have a crazy new business idea, but they probably shouldn't execute it in crazy new ways. Actually, their best bet is likely to be joining Y Combinator, getting a bunch of VC funding, and following Paul Graham's standard advice. Similarly, reducing x-risk is a crazy new idea for how to improve the world, but it's pretty plausible that we should pursue it in ways similar to those which other successful movements used. Here are some standard things that have historically been very helpful for changing the world:
* dedicated activists
* good research
* money
* public support
* political influence
My prior says that all of these things matter, and that most big wins will be due to direct effects on these things. The last two are the ones which we’re disproportionately lacking; I’m more optimistic about the latter for a variety of reasons.
**AI policy is a particularly good place to have a large impact.**
Here's a general argument: governments are very big levers, because of their scale and ability to apply coercion. A new law can be a black swan all by itself. When I think of really massive wins over the past half-century, I think about the eradication of smallpox and polio, the development of space technology, and the development of the internet. All of these relied on and were driven by governments. Then, of course, there are the massive declines in poverty across Asia in particular. It's difficult to assign credit for this, since it's so tied up with globalisation, but to the extent that any small group was responsible, it was Asian governments and the policies of Deng Xiaoping, Lee Kuan Yew, Rajiv Gandhi, etc.
You might agree that governments do important things, but think that influencing them is very difficult. Firstly, that's true for most black swans, so I don't think that should make policy work much less promising even from Ben's perspective. But secondly, from the outside view, our chances are pretty good. We're a movement comprising many very competent, clever and committed people. We've got the sort of backing that makes policymakers take people seriously: we're affiliated with leading universities, tech companies, and public figures. It's likely that a number of EAs at the best universities already have friends who will end up in top government positions. We have enough money to do extensive lobbying, if that's judged a good idea. Also, we're correct, which usually helps. The main advantage we're missing is widespread popular support, but I don't model this as being crucial for issues where what's needed is targeted interventions which "pull the rope sideways". (We're also missing knowledge about what those interventions should be, but that makes policy research even more valuable).
Here's a more specific route to impact: in a few decades (assuming long timelines and slow takeoff) AIs that are less generally intelligent than humans will be causing political and economic shockwaves, whether that's via mass unemployment, enabling large-scale security breaches, designing more destructive weapons, psychological manipulation, or something even less predictable. At this point, governments will panic and AI policy advisors will have real influence. If competent and aligned people were the obvious choice for those positions, that'd be fantastic. If those people had spent several decades researching what interventions would be most valuable, that'd be even better.
This perspective is inspired by Milton Friedman, who argued that the way to create large-scale change is by nurturing ideas which will be seized upon in a crisis.
> Only a crisis - actual or perceived - produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the possible.
The major influence of the Institute of Economic Affairs on Thatcher’s policies is an example of this strategy’s success. An advantage of this approach is that it can be implemented by clusterings of like-minded people collaborating with each other; for that reason, I'm not so worried about policy work cultivating the wrong mindset (I'd be more worried on this front if policy researchers were very widely spread out).
Another fairly specific route to impact: several major AI research labs would likely act on suggestions for coordinating to make AI safer, if we had any. Right now I don’t think we do, and so research into that could have a big multiplier. If a government ends up running a major AI lab (which seems pretty likely conditional on long timelines) then they may also end up following this advice, via the effect described in the paragraph above.
**Underlying generators of this disagreement**
More generally, Ben and I disagree on where the bottleneck to AI safety is. I think that finding a technical solution is probable, but that most solutions would still require careful oversight, which may or may not happen (maybe 50-50). Ben thinks that finding a technical solution is improbable, but that if it's found it'll probably be implemented well. I also have more credence on long timelines and slow takeoffs than he does. I think that these disagreements affect our views on the importance of influencing governments in particular.
We also have differing views on what the x-risk reduction community should look like. I favour a broader, more diverse community; Ben favours a narrower, more committed community. I don't want to discuss this extensively here, but I will point out that there are many people who are much better at working within a system than outside it - people who would do well in AI safety PhDs, but couldn't just teach themselves to do good research from scratch like Nate Soares did; brilliant yet absent-minded mathematicians; people who could run an excellent policy research group but not an excellent startup. I think it's valuable for such people (amongst which I include myself), to have a "default" path to impact, even at the cost of reducing the pressure to be entrepreneurial or agenty. I think this is pretty undeniable when it comes to technical research, and cross-applies straightforwardly to policy research and advocacy.
Ben and I agree that going into policy is much more valuable if you're thinking very strategically and [out of the "out of the box" box](https://www.lesswrong.com/posts/qu95AwSrKqQSo4fCY/the-outside-the-box-box) than if you're not. Given this mindset, there will probably turn out to be valuable non-standard things which you can do.
Do note that this essay is intrinsically skewed since I haven't portrayed Ben's arguments in full fidelity and have spent many more words arguing my side. Also note that, despite being skeptical about some of Ben's points, I think his overall view is important and interesting and more people should be thinking along similar lines.
*Thanks to Anjali Gopal for comments on drafts.* |
0c525833-d335-4d0b-9546-3d633ed193d9 | trentmkelly/LessWrong-43k | LessWrong | Intelligence failures and a theory of change for forecasting
In this blog post I’m going to try to do two things. First I’m going to make the case that failures of both the US and Soviet intelligence services have substantially increased the risk of nuclear war in the atomic age, then I’m going to argue that these case studies provide potentially the best example of how Tetlock style forecasting can be used to improve the world.
Forecasting
Tetlock-style forecasting is the practice of having individuals, or, even better, groups of individuals, predict the probability of events on a scale from 0 to 1, and then scoring them on their predictions. Tetlock’s experiments have shown that some people are much, much better than average individuals, domain experts, and, relevant for this post, CIA intelligence analysts with access to classified material at predicting political events. However, as with any improvement in human power, it’s an open question whether this tool is more likely to be used for good or ill. For instance, one could imagine authoritarian regimes using these methods to more effectively suppress protests. Furthermore, it’s not clear how many tragedies could have been averted if the relevant actors had had access to higher-quality information. For instance, the North Korean economy isn’t failing because President Kim is incorrect about whether or not his economic policies are good - the economy is failing because Kim uses the policies that are expedient to him staying in power for the longest period of time possible. If one were trying to improve the long-term future as much as possible, it’s not clear that developing and advocating for forecasting is better than reducing the burden of disease in low-income countries, which both has enormous benefits and is something we can be very confident we do effectively.
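To make the scoring step concrete: the post doesn't name a scoring rule, but the Brier score is the standard one in Tetlock's work. A minimal sketch:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (in [0, 1])
    and binary outcomes (0 or 1). Lower is better; always
    forecasting 0.5 scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A sharp, well-calibrated forecaster beats a hedger:
print(brier_score([0.9, 0.1, 0.8], [1, 0, 1]))  # 0.02
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25
```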
However, I think the history of nuclear risk provides good case studies in how having better information would have reduced nuclear risk, and I’ll also sketch theoretical reasons for why one should |
b34a71b5-9659-4c51-8dcb-83acce81caff | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | The Alignment Problem: Machine Learning and Human Values
...to explore deep unsolved problems about the nature and limits of computation. The Institute's core activities revolve around semester-long research programs on specific topics in the foundations of computing and related fields. We've been going since 2012, and recently we learned that our funding will be renewed for the second decade, so we're very grateful to the Simons Foundation for that.

For this semester, although we've been operating online only, we have two very active programs: one on probability, geometry, and computation in high dimensions, and one on the theory of reinforcement learning. The Institute has a long-standing tradition of appointing a science communicator in residence, with the aims of supporting authors and journalists in the areas of computer science and mathematics, of helping them connect with the experts who participate in our programs, of increasing the visibility of theoretical computer science, and of helping to educate our participants about communicating their work to a broader audience.

So we're delighted to have as our science communicator in residence this semester Brian Christian. Brian is the award-winning author of The Most Human Human and Algorithms to Live By; these books have won many awards and commendations, including from the New York Times, the Wall Street Journal, the New Yorker, and the MIT Technology Review. Brian has written articles for the Atlantic, the Guardian, the New Yorker, the Paris Review, Wired, and the Wall Street Journal, and he has a third book coming out in early October. We're very happy to be hearing about the topic of that book today. Brian's talk is entitled "The Alignment Problem: Machine Learning and Human Values." Welcome, Brian.
Thank you so much, Peter. So I'm going to talk about the book that I've been writing for the past four years, which is finally coming out in three weeks, and it feels really fitting to be giving the first real talk about the book here at Berkeley, because Berkeley has really been my home and my primary academic community during that time.

I first want to express my gratitude to CITRIS, where I've been a visiting scholar starting in the spring of 2017, and to CHAI, where I've been an active participant throughout that time as well, and also to the Simons Institute: attending some of the seminars and workshops here was incredibly generative, both for familiarizing myself with certain technical concepts, for connecting certain interdisciplinary dots, and for meeting folks across the community, not just here at Berkeley but beyond. So I want to give a special thanks to Brandie Nonnecke and Camille Crittenden at CITRIS, to Stuart Russell and Mark Nitzberg at CHAI, and to Richard Karp and Kristin Kane here at Simons for inviting me into the fold and making me feel so welcome and so at home here. I'm very honored to be starting this semester as the science communicator in residence here at Simons; it's a great honor to me, so thank you.

I want to talk about the central theme of the book, which is the connection between machine learning and human values. In some ways it feels quite ironic to be addressing in large part the theoretical computer science community, particularly those who work in machine learning and especially in reinforcement learning, because in some sense I see my role as being something of an ambassador from that community to the public at large. So it was an interesting challenge to think about how to make an address to that community.

The second challenge for me is that the book is very deliberately an opportunity for people who are not myself to speak. I decided very early on that one of my priorities was to create space for people, in large part, to tell their own stories. There are probably between 100 and 150 people who speak at one point or other in the course of the book, and I think that sort of multi-vocal quality is really one of its distinguishing features. For better or worse, this morning it's just me, so you won't quite get to appreciate that aspect of the book.

Third and finally, a big part of what the book sets out to do is to actually teach people machine learning. I believe very passionately that there are many careers intersecting with machine learning at this point, whether in public policy, the law, medicine, and so forth, and a lot of people are finding themselves skilling up in machine learning literacy midway through a career that was ostensibly about something else. So a big part of my goal is pedagogy: I want people to walk away knowing the difference between supervised, unsupervised, and reinforcement learning, what an objective function is, the difference between behavior cloning and inverse reinforcement learning, etc. And if ever there was a group of people who did not need me to give them a primer on machine learning, it is the Simons Institute research fellows.

So what I would like to do is highlight some of the interdisciplinary connections and the places where machine learning makes contact, sometimes in very surprising ways, with other disciplines. I think it is very broadly true that, to the degree that AI, machine learning, and reinforcement learning are about discovering fundamental aspects of what it means to think, to have a mind, then the further along the field goes, the more relevant existing fields like developmental psychology, cognitive science, education, management science, etc. become. And so for the next 30 or so minutes, perhaps I can reverse my normal osmosis and, instead of being a kind of ambassador for theoretical computer science, I can attempt to be an ambassador to theoretical computer science, and highlight what I see as an incredibly rich circumference of connections to other fields and opportunities for research and for cross-pollination.

So, on to the central thrust of the book itself, before we explore the actual contents together.
There's a story that I assume most everyone here knows, at least to some degree or another, which is the remarkable progress that has been made in machine learning, particularly deep learning, since the beginning of the last decade. There is something very poetic to me about the fact that it was neural networks in particular that are responsible for this incredible breakthrough, because neural networks were essentially one of the very first ideas in computing. They're older than the von Neumann architecture; they're older than the stored-program computer. Von Neumann's 1945 EDVAC report, which is the first written description of a stored-program computer, contains, in 101 pages, only a single citation, which is McCulloch and Pitts 1943.

So I started researching the life of Walter Pitts, going through oral histories and the McCulloch archives at the American Philosophical Society in Philadelphia, and I was astounded at the life stories of some of these early pioneers. I remember reading an oral history from one of their contemporaries, Jerome Lettvin, who just casually mentioned: oh yeah, and when Pitts started working with McCulloch he was 17 years old and homeless. Well, that certainly got my attention. And Warren McCulloch basically became like his foster parent. The more I read, the more fascinating and poignant that story became. So I thought, okay, I've found my opening scene: it's Walter Pitts, aged 12, deciding to run away from home.

And of course we all know the story that runs from there through Frank Rosenblatt and Marvin Minsky and Seymour Papert, the rebirth in the 1980s, and then what I see as kind of this ultimate final triumph in 2012 with AlexNet. When we first meet Alex Krizhevsky in the book, he is in his bedroom at his parents' house in Toronto, and his two GTX 580 GPUs have been running non-stop for two weeks, and now it's too hot to sleep. That was just 2012, not that long ago, and yet it feels now that we almost live in a different world.

I think we've become in some ways desensitized to how discontinuous some of these jumps have felt, and it's important, I think, to remember how much they caught even experts off guard. One example, sort of at random: Richard Sutton, who authored of course the canonical book about reinforcement learning, gave a lecture in 2015 at UT Austin where he presented a graph (and here's the slide) of the strength of computer Go programs. There was this very striking linear trend line which, if you extrapolated it out, as he notes, "projecting this trend suggests a computer will be world champion by 2025," so within 10 years. And it happened the very next year, at the hands of the team at DeepMind led by David Silver. I think this is just a very striking illustration of how abrupt some of those jumps have been. I got very curious, by the way, about the origin of this graph, so I started digging into where Rich got it, and the original version had been made for Rich a few years earlier by a grad student of his named David Silver. So I think that's quite ironic.
at the same time all of this is
happening in deep learning
uh and deep reinforcement learning um
there is a
subtler but equally significant movement
that's happening within
society which is the penetration of
machine learning into greater
and greater contact with human and
institutional decision making
so to illustrate this this is a look at
the number of states in the u.s that are
using statistical risk assessment models
to assist in parole decisions
so um by
1935 it's just one u.s state
by 1960 it's 2 out of 50.
by 1980 it's 4 out of 50
then 12 and then in the year 2026
um and by 2003 the association of parole
authorities internationals
handbook for new parole members writes
in this day and age making parole
decisions without the benefit of a
research-based risk assessment
instrument
clearly falls short of accepted best
practice
um and in 2016
um supreme court chief justice john
roberts is visiting
rensselaer and he is asked by rensselaer
president shirley ann jackson
can you foresee a day when artificial
intelligence
will assist with courtroom fact-finding
and even
more controversially perhaps judicial
decision-making
and roberts responds
it's a day that's here
And so I think on both of these counts, both the astonishing capacity of these systems on the one hand and, on the other hand, the increasing surface area on which they touch our lives, people began, as we all know, to get concerned. These concerns take root across two distinct but fundamentally united groups. There are people worried about the present day: about whether the systems currently being deployed really represent the interests of the people they're supposed to. And there are people worried about the near-term future: that as we increase and increase the capability of these systems, we might be setting ourselves up for a truly catastrophic failure.

Again, despite their different priorities and distinct but overlapping communities, I began to see the fundamental question underneath each set of concerns as being the same. Put most simply: how can we make sure that these systems do what we want? And this problem of course has a name, and the name is the alignment problem.

I think this is more or less where the public conversation around machine learning, around ML ethics and technical AI safety, kind of ends. But in my view, this is really where things get interesting, and this is really the point where the book begins. If we look to the period of roughly 2014 to 2016 as a giant pulling of the fire alarm, then, to continue the analogy, what we begin to see from roughly 2016 to the present is the first responders starting to show up. This set of concerns moves from being marginal, and to some degree taboo, to comprising one of the central concerns of the field. We have this explosion of workshops, conferences, research centers, non-profits, and grants, all happening within this short time. And I heard from people again and again that they would go to, for example, NeurIPS in 2016 and tell people that they worked on safety, and the response would be something like: "Safety?" And then by 2017 there was an entire NeurIPS workshop on safety. I heard versions of this story again and again, with the years perhaps plus or minus one and the area of focus being safety, fairness, transparency, etc.

So there is this incredible zeitgeist pivoting towards these issues. There is a first generation of PhD students just now graduating who have matriculated with the explicit purpose of wanting to work on AI ethics and AI safety. And not only is there this incredibly spirited energy around these topics, but there are actual results, tangible victories being notched, and there is an agenda now that is well underway. So, with ironically a minimum of actual computer science and a maximum emphasis on the interdisciplinary dimensions of this field, what does that agenda actually look like?

The book is divided into three parts, which comprise three chapters each, and so I'd like to take a very brief glance at each in turn and highlight what I see as some of the frontier of interdisciplinary connections that exist around this core theoretical computer science contribution.
contribution so the first chapter starts
with
one of the most visceral and sadly
iconic
examples of machine learning failing in
an ethically significant way which is
um the famous example of the two black
americans being labeled
by google photos in 2015 as gorillas
um we get to meet jackie alstine who
was one of those people and is himself a
software engineer with an ml
uh background and who knew instantly
that
something had gone wrong uh in the
training data that he
as soon as this experience happened to
him he immediately surmised that there
was just a
paucity of black faces in the training
data
so we knew exactly how this had happened
but the question was
why why had that come to be the case
that this this um
training data was so uh unrepresentative
of the population at large
so that's that's the deeper question um
and i think this is a great example of
where
ethical and long-term technical safety
concerns intersect so
you can frame this as a question of
inclusion and representation
uh you can also frame it as a question
of robustness to distributional shift
how do models behave when they get
outside of their training distribution
um
and there is a lot of really i think
exciting and encouraging work being done
here so you have the work of people like
joy bolemweeny and timmy gabriel um and
many others
uh bringing a focus um
to this question of where do these
training sets actually come from
what do they actually look like um and
i think there's an interdisciplinary
theme here as well
um which is that there's also there's an
almost 200 year story
of the intersection of racial justice
and photography so
as i started researching this area i was
fascinated to learn for example that
uh the single most photographed american
of the 19th century uh more than abraham
lincoln etc
was frederick douglass the pioneering
abolitionist
who felt that photography was a critical
tool for emancipation because
of course before photography you had
engravings
which were done by hand and almost
always exaggerated black people's
features in stereotyped ways
so there's an irony that we go from the
19th century
in which the single most photographed
american is
frederick douglass um to
the 20th century where commercial film
is being
developed and calibrated um by color
accuracy with reference to
a model the first model that kodak used
was named shirley and so these have
become known as
shirley cards which until the 1970s were
almost exclusively white
um and in the in the book we get to meet
some kodak
executives who uh describe amazingly
that they were receiving pressure in the
60s and 70s
from the chocolate and furniture
industries
um to make film that better portrayed uh
brown hues um but
changing the uh nature of the film of
course changed
the um the ability of the film to
represent people with darker skin
um so here we are in the 21st century in
which we have these
um kind of weird echoes back to the 20th
century
um and we have this broad uh
broad movement of of questioning what is
the nature
of these training sets and um you have
people
like neil jan and huhan
publishing statistics about the
composition of labeled faces in the wild
and showing that for example it contains
twice as many images of george w
bush as it does of all black women
combined
um and as recently as i believe the fall
of 2019 a warning
label now appears on the page where you
would go to download the
lfw data set saying basically caution
this data set is
not representative of the us or the
global population
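A sketch of the kind of composition audit described above (the metadata and group labels here are hypothetical; the real audits estimate demographic attributes per image):

```python
from collections import Counter

# Hypothetical per-image metadata: (person_name, race, gender).
images = [
    ("George_W_Bush", "white", "male"),
    ("George_W_Bush", "white", "male"),
    ("Serena_Williams", "black", "female"),
    # ... one tuple per image in the dataset
]

# Count images per demographic group and report shares.
counts = Counter((race, gender) for _, race, gender in images)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.1%})")
```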
The other domain that we explore in this chapter is language models, looking at the many stereotypes that emerge all the way from simple word-embedding models like word2vec and GloVe to the more modern, enormous transformer networks like GPT-2 and GPT-3. These models are often employed, I think incautiously, in recruiting and hiring contexts, and we encounter the story of an Amazon team that looks with horror at the individual terms that their model is upranking and downranking. There's a theoretical computer science story here, in work on de-biasing vector models by collapsing the gender subspace, although the story there is not quite so simple.
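As a sketch of what "collapsing the gender subspace" means in the simplest case (toy 3-d vectors and a single definitional pair; the published method, Bolukbasi et al.'s hard debiasing, estimates the subspace from many pairs and has further steps):

```python
import numpy as np

def neutralize(vec, gender_direction):
    """Remove a word vector's component along the gender direction
    (the 'neutralize' step of hard debiasing)."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return vec - np.dot(vec, g) * g

# Toy vectors; real embeddings are ~300-dimensional.
he = np.array([1.0, 0.2, 0.0])
she = np.array([-1.0, 0.2, 0.0])
programmer = np.array([0.4, 0.9, 0.3])  # spuriously leans toward 'he'

print(neutralize(programmer, he - she))  # -> [0.  0.9 0.3]
```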
There's also work, as recently as two weeks ago, from OpenAI on fine-tuning these big transformer models from human preferences, and I think that story is very much still being worked out. The interdisciplinary story here is that we've added a new tool to the arsenal of social science: as these language models become in some ways uncanny reflections of human norms and human biases, including the ones we would rather we didn't have, they become a measure for actually watching society change. So there's been some really interesting interdisciplinary work on applying these models to historical corpora (the 1930s, '40s, '50s, '60s, '70s) and watching, in a quantifiable way, the norms of the society change. So machine learning is not only a tool: it's not just the case that social scientists are increasingly having input into these models, but these models are also giving social scientists a brand-new set of subject matter and an entirely new lens with which to look at the world.
The second chapter is fairness, and I think most people who are familiar with the machine learning literature on fairness know about the COMPAS tool, which predicts the risk of recidivism and is used in pre-trial detention. In this chapter I really try to dig into the backstory. Statistical risk-assessment models go back to the 1920s. There was a time when the conservative head of the parole board in Illinois was thinking about getting rid of the parole system entirely: he believed it was simply an asset to criminals that you would ever let them out ahead of the full sentence. A sociologist from Chicago named Ernest Burgess ended up collecting enough data to persuade him to change his mind, and that is really the beginning of the use of statistical risk assessments in the criminal justice system. Looking through archival newspapers was very enlightening to me: for example, most of the criticism against these models in the '30s was coming from the right, whereas today it is, largely speaking, the progressives who are the most skeptical.

I think the most visible of the contemporary critiques of these models is from ProPublica's Julia Angwin, who wrote the very famous article "Machine Bias" critiquing the COMPAS model. I got to meet both Julia Angwin and the creator of COMPAS, Tim Brennan, and I was quite pleased to actually find a bit of common ground where I could convince them to agree with one another.

There's a theoretical computer science story here on how we operationalize fairness, and this goes through the work of people like Cynthia Dwork, Moritz Hardt, Jon Kleinberg, Sam Corbett-Davies, Alexandra Chouldechova, and Kristian Lum, looking at everything from the impossibility proofs of mutually satisfying different metrics of fairness to things like the long-term feedback loops that exist when these systems get put into practice and start generating the very data that they will go on to be trained upon.
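A compact way to state one of these impossibility results (this version is due to Chouldechova, 2017, and is an editorial addition, not spelled out in the talk): for any binary classifier applied to a group with base rate $p$, the false positive rate, false negative rate, and positive predictive value are linked by

$$\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)$$

So if two groups have different base rates $p$, a classifier that equalizes PPV across them (predictive parity, roughly the property COMPAS satisfies) cannot also equalize both FPR and FNR, short of perfect prediction; this identity is essentially the tension at the heart of the ProPublica debate.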
The interdisciplinary story here includes not only the long-term history of risk assessment but also contemporary political scientists like Columbia's Bernard Harcourt, who argues that the very premise that better predictions lead to better public safety is itself wrong, which I think is a very interesting argument for people in ML to contend with. More broadly, we have the question not just of data provenance (where are the data coming from?), but of the human-computer interaction and user-experience aspects of how these risk-assessment instruments actually get put into practice. As part of my research for this chapter, I spent a day going to arraignment hearings in San Francisco, right after San Francisco began using the Arnold tool, and it was very illuminating to me to see the degree to which individual judges did or didn't actually comprehend this giant printout they're being given with the risk-assessment information on it.

There's also this deeper question, of course, of what exactly the ultimate purpose of certain aspects of criminal policy is. If we can identify that someone is a risk pre-trial, well, there might be two extremely different risks that we have in mind: there might be a risk of violent re-offense, and there might also be a risk of failure to appear for the court date. And while it may be the case that the solution to the former problem is incarceration (it might not be), the solution to the second problem is probably something like a text message. So I think there's a lot of work to be done, not necessarily in how the models themselves are developed, but in whether they're used, essentially, according to the label on the side of the tin.

And many people, of course, argue that while the discussion of fine-tuning exactly what the objective function of these systems is, or exactly what fairness constraints are imposed upon them, is fruitful, there are also ways we can sort of cut the Gordian knot entirely. We could just decriminalize marijuana, for example, and then not have to worry about how to fairly assign pre-trial detention for people who are arrested for that. There are some states, I believe New York and Maryland if I'm not mistaken, that are increasingly moving towards a model where, if you're arrested for a non-violent misdemeanor, you are simply released, full stop. And then you don't need a model to predict whether or not to detain the person if you simply never detain the person.
In the domain of transparency: the chapter on transparency focuses on the domain of medicine, and we meet Microsoft's Rich Caruana, who in the 1990s was developing machine learning models for predicting the severity of pneumonia. His neural network model wins this kind of bake-off against logistic regression, rule-based models, and so forth. But, very significantly, he urges the doctors who were partnering with them on the study not to deploy the neural network, precisely because he doesn't know what's in it; he doesn't know what it has learned. In particular, the rule-based model had learned the rule that if someone is asthmatic, then we should predict that they are at lower risk from pneumonia, which, if you think about it for a second, doesn't make any sense at all. It turns out that this is actually a real correlation in the data, but it's precisely because asthmatics are given higher-priority care that they on average do have better outcomes than other patients; and this is precisely the care, of course, that the model would deny to those patients.
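A toy simulation of how such a spurious rule can arise from a real correlation (an editorial addition; all numbers are invented for illustration):

```python
import random

random.seed(1)

# Asthma raises latent risk, but asthmatics are routed to aggressive
# care, which lowers their *observed* mortality in the data.
def observed_mortality(asthmatic, n=100_000):
    latent_risk = 0.15 if asthmatic else 0.10
    care_factor = 0.5 if asthmatic else 1.0   # aggressive care halves risk
    return sum(random.random() < latent_risk * care_factor for _ in range(n)) / n

print("asthmatic:    ", observed_mortality(True))    # ~0.075
print("non-asthmatic:", observed_mortality(False))   # ~0.100
# A model fit to these observed outcomes learns 'asthma -> lower risk',
# exactly the rule it would be dangerous to act on.
```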
So transparency allows us to catch some of these things before they actually go into deployment and affect people. And there's a really rich computer science story here that I think is really exciting: from Caruana himself, trying to explore a space of models that are ideally as expressive and capable as neural networks but as interpretable as something like a rule list (he's pioneering ideas like generalized additive models and his own sort of personal extension of that), to people like Cynthia Rudin at Duke University, who is exploring the space of certifiably optimal simple models: rather than spending our computational horsepower training a big model, we spend our computational horsepower exploring the space of simple models and finding the ideal simple model. And on the deep learning side, we have people like OpenAI's Chris Olah working on unpacking and visualizing deep convolutional networks, and people like Google's Been Kim working on concept activation vectors and interpretability measures using high-level human concepts.
interdisciplinary story here in my mind
is that
transparency is fundamentally a human
concept
a model is transparent to the degree
that people understand what's going on
and use it appropriately um there is
nothing in the abstract that
transparency means
outside of that and so user studies
should be totally unavoidable um and not
only that
but they are often counter-intuitive so
one of the results here that comes to my
mind
is the work of jen wartman vaughan and
her collaborators um
who showed that simple transparent
models
with a small number of parameters and
clearly visible weights
were much more trusted by human users
even when those models were operating
outside their training distribution and
output and garbage
um so i think user studies like that are
really
useful at um you know
problematizing that the simple story
that we might otherwise get
about thinking about model transparency
um
so there's also the legal angle
obviously so transparency intersects
with the law and things like the gdpr um
and i also think there's a critically
interesting intersection here um
with a bunch of mid-20th century
psychology so there's
there's a tradition within psychology
going from ted sarban in the 1940s to
paul meal in the 1950s to robin dawes in
the 1970s
looking at comparing expert human
judgments to linear models with uniform
weights
unit weighted regression and the tldr is
that unit weighted regression
demolishes expert judgment
even when you still give the human
expert the results of a unit weighted
regression
uh they're still worse than just using
the regression on their own and when you
give the machine learning model the
human judges uh
decisions as input the model doesn't
even use it uh it's just not helpful
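A minimal sketch of a unit-weighted model in the style Dawes described (the features, values, and signs here are hypothetical):

```python
import statistics

def unit_weighted_scores(rows, signs):
    """Dawes-style 'improper linear model': z-score each predictor,
    then sum with weights of +1 or -1 chosen only by the assumed
    direction of effect. No fitting whatsoever."""
    cols = list(zip(*rows))
    means = [statistics.mean(c) for c in cols]
    sds = [statistics.stdev(c) for c in cols]
    return [
        sum(s * (x - m) / sd for x, s, m, sd in zip(row, signs, means, sds))
        for row in rows
    ]

# Hypothetical applicants: (GPA, test score, days absent).
applicants = [(3.9, 1450, 2), (2.8, 1100, 14), (3.4, 1300, 5)]
print(unit_weighted_scores(applicants, signs=[+1, +1, -1]))
```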
I think this is really provocative. In particular, one of the things Robyn Dawes was interested in is the question of how you build a model when you don't have an objective function. Say you want to identify high schoolers who will go on to flourish by getting higher education. Okay, well, first of all, you're going to have to wait 20 years to get the training data for that, and you have to implement your model now. Secondly, what do you mean by "flourish"? How do you operationalize this idea of someone who responds well to going to college? It might take a really long time to figure out how to operationalize that, if you can at all, but you have to make the model now. So what do you do? Amazingly, there are still results you can prove about what a good model might look like even under those conditions. And I think results like that are relevant for thinking about these farther-future questions about AI and what objective functions we really want to give them. Again, this is happening in psychology in the 1970s, but it feels in some ways more relevant now than ever.
So that's part one, which looks at supervised and unsupervised learning and present-day risks. Part two turns the focus to reinforcement learning specifically.

In chapter four we get to meet Andy Barto and Rich Sutton, and we learn the roots of reinforcement learning in the ideas of Harry Klopf, who had this idea that neurons were what he called heterostatic maximizers. He was pushing back on the cybernetics movement of the '40s and '50s, which held that purposeful behavior necessarily required negative feedback in a system that wanted to reach equilibrium and stay at rest. Harry Klopf said no: that is not what life is like, that is not what organisms are like. We are maximizers. And there's a deeper historical story here too, which goes all the way back to the 1890s and the work of Edward Thorndike on what he called the law of effect: by default we take actions randomly, the results of those actions are, in his words, either satisfying or annoying, and we modify our behavior accordingly to do more of the satisfying things and less of the annoying things. There's a wonderful historical moment here where it turns out that Edward Thorndike and Gertrude Stein were classmates at Harvard, in William James's psychology class in 1896, and Gertrude Stein described him as a funny character. And these ideas carry all the way through to reinforcement learning in the twentieth century.
So in this chapter we talk about RL concepts like credit assignment, the difference between value learning and policy learning, and temporal differences. Knowing my audience, I think it's fair to imagine that you don't need me to say too much more about that. But there are a number of really rich connections here. RL is premised on the idea that rewards are scalar: they're cardinal, they're fungible, anything can be compared to anything. Real life doesn't always feel that way. We agonize: do I do the thing that's the most lucrative, the thing that's the most prestigious, or the thing that's the most fun? RL traditionally doesn't have this problem; the rewards are scalars, so you just compare four to five to six, and you do the six. There are philosophers like Oxford's Ruth Chang who think that this fundamentally multi-dimensional character of human rewards, what she calls incommensurability, this inability to be collapsed from a vector representation to a scalar, is absolutely central to the human experience. People from the RL community essentially counter-argue that you do, in the end, decide, and so you can infer that there was a scalar attached to that choice that was greater than the scalar of something else. This of course intersects with economics, revealed preferences, and utility, and needless to say there's an entire literature there, from Pareto to von Neumann and onward. You also have contemporary people in the neuroscience community, people like Paul Glimcher and his colleagues at NYU, trying to unpack the actual mechanisms by which the mind attempts to do this dimension reduction in the space of value, and looking into the question of where and how and with what model the brain is doing that. There's a lot that we are starting to know from the last twenty years, but a lot that's still out there to be learned.
I think the most thrilling collision between RL and neuroscience is the dopamine system. Some of you may know this story, and I'm compressing a lot here, but in the 1990s it was shown by Peter Dayan, Terry Sejnowski, and Read Montague that temporal-difference learning basically explained an open problem in understanding the function of the dopamine system. For me that's just this totally climactic moment of the science coming full circle: these models that had grown out of animal learning in the late nineteenth and early twentieth century finally come into their own, and not only that, but actually solve this outstanding riddle about the way the human brain works. I think that's a really encouraging indicator that RL is basically on the right track, and that we're discovering fundamental mechanisms of learning, not just engineering practices that work for specific problems, but universal mechanisms for learning that evolution has stumbled into again and again.
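For anyone who wants the mechanism spelled out, here is a toy TD(0) value-learning sketch on a five-state chain; the prediction error `delta` is the quantity that the Dayan, Sejnowski, and Montague line of work related to phasic dopamine signals.

```python
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)        # value estimates; the last state is terminal

for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                        # deterministic chain: s -> s+1
        r = 1.0 if s_next == n_states - 1 else 0.0
        delta = r + gamma * V[s_next] - V[s]  # the TD "reward prediction error"
        V[s] += alpha * delta                 # nudge V(s) toward the TD target
        s = s_next

print(np.round(V, 2))  # values ramp up as the reward gets closer
```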
From there we get into shaping. Anyone who works in reinforcement learning is familiar with the delicacy of designing appropriate incentives, and there's a fascinating interdisciplinary story here too. It starts with B. F. Skinner during World War II, teaching pigeons, because he had been assigned a project to put pigeons inside of bombs and have them peck at images of bomb targets, to create live homing missiles, basically. He has this quote, that "my colleagues and I knew that in the eyes of the world we were totally insane." And along the way he develops these principles of what he calls shaping: that you can reward approximations to the behavior that you want. This idea goes through theoretical RL, and you have the work of Stuart Russell and Andrew Ng in the late '90s showing that the way to avoid problems of incentive is to create what's called a conservative field: basically, make a situation where, if you return to where you started, the net shaping reward is zero. Put differently, we want to reinforce states of the world, not actions of the agent.
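A minimal sketch of that conservative-field idea, in the potential-based form the theoretical work arrived at (the potentials below are made up for illustration): the shaping bonus telescopes, so any loop back to the start nets exactly zero.

```python
gamma = 1.0  # with discounting the bonus telescopes the same way
phi = {"start": 0.0, "mid": 5.0, "goal": 10.0}  # assumed state potentials

def shaping_bonus(s, s_next):
    # Potential-based shaping: F(s, s') = gamma * phi(s') - phi(s)
    return gamma * phi[s_next] - phi[s]

path = [("start", "mid"), ("mid", "goal")]
loop = path + [("goal", "start")]
print(sum(shaping_bonus(a, b) for a, b in path))  # 10.0: progress is rewarded
print(sum(shaping_bonus(a, b) for a, b in loop))  # 0.0: circling earns nothing
```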
This ends up having all these ramifications in the cognitive science community. For example, the work of my good friend and collaborator Tom Griffiths at Princeton and his former PhD student from Berkeley, Falk Lieder, directly borrows ideas from shaping theory to create mechanisms for what they call optimal gamification: how do you incentivize people not only in a way that doesn't lead to degenerate behavior, but in, ideally, the optimal way? So the cognitive science is borrowing that idea very directly from the RL theory, and I think there's a lot more to be worked out there as well.
Of course, we know that we're not only motivated by explicit incentives from outside; anyone who has spent time with kids and animals knows that we're motivated intrinsically as much as extrinsically. It became obvious in the mid-twentieth century that rats were willing to walk across an electrified fence just to peek around a corner, and that monkeys are as willing to press a lever to look out a window as they are for food. So this started an effort within psychology to understand the nature of intrinsic, as opposed to extrinsic, motivation, and there's a long and wonderful history there. The computer science story, I think, is people like Google Brain's Marc Bellemare, who is working on extending count-based exploration into non-tabular settings; people like Jürgen Schmidhuber, who thinks about intrinsic motivation as the ability to compress information; people at OpenAI like Yuri Burda and Harri Edwards; and, here at Berkeley, people like Deepak Pathak, Pulkit Agrawal, and Trevor Darrell, exploring intrinsic motivation based on the idea that the agent should be motivated to take actions which, essentially, surprise it.
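Here is a toy sketch of that surprise-driven idea, with invented dynamics: the intrinsic reward is simply the prediction error of a learned forward model, so it is large in unfamiliar states and decays as the world becomes predictable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 10
true_next = rng.permutation(n_states)                  # unknown dynamics
model = np.full((n_states, n_states), 1.0 / n_states)  # learned P(s' | s)

def intrinsic_reward(s, s_next, lr=0.5):
    surprise = -np.log(model[s, s_next])  # large where the model is wrong
    model[s] *= 1 - lr                    # move the forward model toward s_next
    model[s, s_next] += lr
    return surprise

s = 0
for t in range(40):
    s_next = int(true_next[s])
    r_int = intrinsic_reward(s, s_next)
    if t % 10 == 0:
        print(f"step {t}: curiosity reward {r_int:.2f}")  # decays over time
    s = s_next
```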
As this formal work in RL gets worked out, there are all of these connections to infant psychology. There was a great story that Alison Gopnik, the Berkeley psychologist, told me about reading about Trevor Darrell and some of his students' work in the Berkeley newsletter. They were talking about how interested they were in taking her ideas about infant curiosity and applying them to RL, and she emailed them: guys, I'm right here, I'm across the street, let's actually collaborate on this. So it's been really exciting to see those two worlds come together. On the one hand, developmental psychologists are using RL as a formal model to explain infant behavior, and at the same time, the people in RL are turning to what we know about infants to think about the motivation and intrinsic drive that might be useful for exploration in RL.
The third part of the book gets most squarely into the question of normativity, and of aligning deep RL agents with human norms and human values. One of the central themes, as anyone in RL knows, is what's called imitation learning, or sometimes behavior cloning, and there's a really rich story here, not only the computer science story. In the book we meet Dean Pomerleau from CMU, who was crazy enough to drive all the way from Pittsburgh to Lake Erie on the highway for two hours in 1990, letting a neural network steer his car, using a system that had one tenth of the processing power of a first-generation Apple Watch. That surprisingly long and slightly daredevil history of behavior cloning in self-driving cars continues all the way to this day, and we meet Waymo engineer Stéphane Ross, who developed the DAgger algorithm for avoiding cascading errors in imitation learning.
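To show the shape of the DAgger loop (a sketch of the idea only, not Ross's implementation), here is a toy version in which the expert relabels exactly the states that the learner's own rollouts visit, so corrections arrive where cloned policies drift:

```python
import numpy as np

rng = np.random.default_rng(0)

def expert(s):                 # stand-in expert: steer back toward the lane center
    return -np.sign(s)

def rollout(policy, steps=50):
    s, states = 2.0, []
    for _ in range(steps):
        states.append(s)
        s = s + 0.5 * policy(s) + rng.normal(0, 0.1)
    return states

dataset = []                   # aggregated (state, expert action) pairs
policy = lambda s: 0.0         # iteration 0: an untrained policy
for it in range(5):
    for s in rollout(policy):              # roll out the *current* policy...
        dataset.append((s, expert(s)))     # ...and let the expert label its states
    xs, ys = map(np.array, zip(*dataset))
    w = (xs * ys).sum() / (xs * xs).sum()  # "retrain": a one-dimensional fit
    policy = lambda s, w=w: w * s
    print(f"iteration {it}: fitted steering gain {w:.2f}")
```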
There's also a wonderful human story here too, which is that, zoologically, all of our words for imitation, not just in English but in almost every language, are like "to ape" something. But the really prolific imitator in nature is not apes at all; there's an entire primatology literature just on that topic. It is, in fact, humans. Furthermore, the human capacity for imitation is extremely sophisticated, and goes way beyond merely duplicating behavior, in ways that are surprising and, I think, really informative for thinking about how this might work with machines.
There's also, I think, a really interesting connection in imitation, not just to primatology, but to philosophical ethics. In the interest of time I won't get too deeply into it, but there's a classic philosophical tension, going back to the 1970s, between what are called possibilism and actualism. Do you do the very best thing possible in a situation, even if it requires a very precise follow-through and you know that you'll screw it up? Or do you do the lesser action that you know you can actually follow through on? This debate has now been going on for something like forty years, and it's absolutely relevant to thinking about things like batch off-policy RL. So I'm really intrigued by the way those literatures are starting to come into contact.
In chapter eight we get into arguably the heart of contemporary AI safety research: ideas around inverse reinforcement learning, cooperative inverse reinforcement learning, deep RL from human preferences, and so forth. IRL itself has a wonderfully colorful history: it really goes back to Stuart Russell walking to Safeway, thinking about his gait as he went down the hill. This gets him thinking: what is it that animal and human gaits optimize? Why is it that we still need to hire motion-capture people and can't reliably produce realistic-looking gaits? And this leads to the idea of IRL: if the human gait is the answer, what's the question? There's an entire interdisciplinary literature just on the science of gait, with all the different theories people have had over many decades for why horse gaits are the way they are, why there are phase transitions in quadruped gaits at certain speeds, whether gaits are optimizing stress on the joints or calorie load, and so on. IRL now directly offers us a way to answer and address questions like that.
So there's a purely theoretical story that goes through Pieter Abbeel's work on getting helicopters to do stunts, and people like Chelsea Finn working on things like guided cost learning, to Jan Leike and Paul Christiano very memorably teaching a MuJoCo agent to do a backflip just by comparing different clips and picking the one that looks slightly more like a backflip.
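A toy sketch of the pairwise-comparison idea behind that result, using a Bradley-Terry model over invented clip features; the learner never sees a reward signal, only which of two clips looks better, and still recovers the hidden reward direction.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # hidden "looks like a backflip" score
w = np.zeros(2)                 # learned reward weights

def reward(weights, clip):
    return weights @ clip       # linear reward over clip features

for step in range(2000):
    a, b = rng.normal(size=2), rng.normal(size=2)     # features of two clips
    label = 1.0 if reward(true_w, a) > reward(true_w, b) else 0.0
    p = 1.0 / (1.0 + np.exp(-(reward(w, a) - reward(w, b))))
    w += 0.05 * (label - p) * (a - b)  # logistic-loss gradient step

print(np.round(w / np.linalg.norm(w), 2))        # recovered direction
print(np.round(true_w / np.linalg.norm(true_w), 2))  # true direction
```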
There are, I think, many interdisciplinary questions here. There are ethical questions about recommendation systems: you can infer what someone wants, but are they acting the way that they want to be acting in the first place? I think that's an interesting question. More broadly, as robots become more capable and able to work elbow to elbow with humans in manufacturing settings and so forth, there's an entire interdisciplinary literature on human teamwork. I'm thinking about people like the MIT roboticist Julie Shah, who has done a lot of work borrowing ideas straight out of the human-human teaming literature and showing that they apply basically wholesale to thinking about human-robot collaboration in factory settings. I think there are so many cases like this, where there is just a windfall of insights ready to be plucked, and as these systems get more and more capable, I think that's only going to continue to be the case.
So the book ends with a chapter about uncertainty. We meet Stanislav Petrov, the Soviet officer who single-handedly saved a hundred million people's lives by not doing anything when his missile system told him that the U.S. was attacking. The attack seemed weird to him, so he simply didn't do anything, and to some degree he saved the world by that. This theme of uncertainty, and in particular of what action to take in the face of uncertainty, is I think very interesting in an ML context. You have people like Oregon State's Tom Dietterich, who talks about what he calls the open category problem: it's one thing to classify images into one of a set of categories, but most of the things in the world are actually in none of those categories, so how do you deal with that? There are people like Oxford's Yarin Gal, working on the uncertainty estimates you get out of Bayesian neural networks and on how you can approximate them using dropout.
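Here is a minimal sketch of that dropout trick, with a small random network standing in for a trained model: keep dropout switched on at test time, run repeated stochastic forward passes, and read the spread of the outputs as an approximate uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(1, 32)), rng.normal(size=(32, 1))

def forward(x, drop=0.5):
    h = np.maximum(0, x @ W1)             # one hidden ReLU layer
    mask = rng.random(h.shape) > drop     # dropout stays on at test time
    return (h * mask / (1 - drop)) @ W2

x = np.array([[0.3]])
samples = [forward(x)[0, 0] for _ in range(100)]
print(f"{np.mean(samples):.2f} +/- {np.std(samples):.2f}")
# A robot like the ones just described might slow down when the +/- is large.
```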
There are people here at Berkeley and CHAI, Smitha Milli, Dylan Hadfield-Menell, Pieter Abbeel, Anca Dragan, and Stuart Russell, working on the idea that if you're interested in obedience in a system, in the ability to intervene and stop it, the system must necessarily have some uncertainty over what it thinks your objectives are. Again here at Berkeley, Gregory Kahn and his colleagues at BAIR have worked on robots that slow down when their collision-predictor model becomes uncertain, using a mixture of dropout and other techniques.
And I think there are a number of questions here purely on the theoretical computer science side: how do you measure uncertainty, how do you measure the impact that your action might have, and then what do you do in the face of that uncertainty and with that sense of impact? On the impact side, there are people like the Future of Humanity Institute's Stuart Armstrong, DeepMind's Victoria Krakovna, and Oregon State's Alex Turner working on operationalizing this notion of high-impact actions in an RL setting. There's also an entire medical ethics literature on what to do in the face of uncertainty, and a legal literature on the idea of preliminary injunctions and irreparable harm. I think all of these things start to become relevant and start to feed into what we are doing in the field of RL.
So, to conclude: as we gather here at the end of 2020, I think we're ready to etch another chapter into this history. We have seen first the promise and second the peril of these systems, and we're now at the beginning of what I see as the third phase. There's a small but brilliant group of people beginning to rally and muster around these issues; the first responders, as I say, are on the scene. I hope we can grow these numbers, and I hope that the book inspires people into this area, which I see as being not only one of the most fascinating and dynamic areas, but also one of the most important projects in computer science and, frankly, in all of science. For me, this is a challenge, an opportunity to meet that challenge head on, and also a chance to learn something really radical and profound about ourselves.

As a place to end: I was going through some archival papers of Alan Turing's, and I came across a conversation that he had on BBC radio in 1952. We'll skip some of the context, but he's saying: I was doing a lot of work trying to teach this machine to do something very simple, and a very great deal of instruction was needed before I could get any results; the machine learned so slowly that it needed a great deal of teaching. And one of his fellow panelists interrupts him and says, but who is learning, you or the machine? And he says, well, I suppose we both were.

So with that, I just want to say thank you, and I welcome any questions that you might have. Thank you very much.
Thanks very much, Brian, that was really great. You're welcome to use the Q&A feature if you'd like to ask any questions. Well, while folks are thinking about it: it struck me, looking at the table of contents of your book, that you went through a very logical progression; with the titles that you had, it could have been a textbook. But it's really clear that the book is filled with fascinating stories. For instance, I was surprised to learn about Walter Pitts being a homeless prodigy. So I guess I'm interested in how the stories have shaped how you chose to present the topics in the book.
Yeah, I think one of the challenges in writing nonfiction is that you have to juggle these different things. To your point, it is to some degree a kind of textbook; it's a textbook in sheep's clothing, presented as a narrative so that it can reach a broader audience. So I really had to juggle the question of what I saw as the central result in an area with what seemed like the best story. Sometimes those things lined up really naturally. For example, Rich Caruana discovering that his pneumonia model predicts that if you have asthma you should go home and you'll be fine: that is both a wonderful story on its own, and it also happens that Rich is today one of the people who's really active in this area, so that kind of presented itself to me on a platter.

In other cases, like the uncertainty chapter, it opens with this somewhat canonical story of the Soviet officer. That's not, on its face, a machine learning story, although he was using a machine learning system that was giving him this assessment: it looks like there's a U.S. attack, and we're rating it high confidence. He knows that what he's supposed to do is pick up the phone and tell his superiors, but he knows that they're going to order a missile launch. So it's effectively his hand on the button, and he decides not to do anything. And then there was an anecdote I found of a machine learning researcher talking about how, in the past, if someone had given them a thought experiment where there was a button they could press that would convert the universe to hedonium, computationally optimized matter for producing pleasant experiences, they would have pressed that button, and now they're not so sure. When people mention that thought experiment, they say, I don't know, maybe we shouldn't press that button. So that, for me, was both a literary moment where I could connect this historical example to a present-day contemporary researcher, and also a chance for some literary symmetry, where the chapter begins with someone not pushing a button and it ends with someone not pushing a button. Each chapter offered me different opportunities like that, but it was very much an exercise in trying to balance the curriculum, if you will, with the pure storytelling.
Fantastic. We have a few questions, but we're running short on time, so maybe just the first one up there. Anna asks: are there any particular areas that you think should be receiving more attention than they are now, or any communities that you think should be talking to each other more?
It's hard to say, in a way, because I feel like the entire book is that: all of these people should be talking. For me, one of the pleasures of the book is that I got to be more than a fly on the wall. I actually got to have conversations with people where I said, oh, so-and-so is working on exactly that, and here's someone in Cambridge who's working on that, and people's ears would perk up, and I got to write letters of introduction. For me that's one of the great pleasures of getting to do broad interdisciplinary work like this: actually getting to stimulate some of those connections.

It's really hard to pick just one, but I think that a lot of the work on infant cognition is pretty amazing, and it comes up again and again. It comes up in shaping, because infants, and small children, not just infants, are really good at thwarting the incentives that you try to design as a parent, in ways that I think are indicative of problems we can expect from RL systems more generally. It runs all the way through intrinsic motivation, to imitation, to the ability of children as young as eighteen months old to infer from your actions what you're trying to do, even if you're failing to do it, which is this sort of IRL thing. So I think the connections between the infant mind and AI obviously go all the way back to the very beginning: Turing was talking about building a program that simulates the child's mind, not the adult's mind, so that all we have to do is subject it to the appropriate course of education. That's a very old idea, but in some ways it feels like it's just bearing fruit now, and it is far from being exhausted. I think that's a really rich area that has a lot to offer.
All right, well, I think we're going to have to cut it off there. There are several more questions, but we're out of time. Thanks again, Brian, that was a really fascinating talk.

It is my pleasure, absolutely. Thank you. |
38705cf4-b40c-4f85-82ac-c3e2299e84d5 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Descriptive vs. specifiable values
*[Metadata: crossposted from <https://tsvibt.blogspot.com/2022/11/descriptive-vs-specifiable-values.html>. First completed November 19, 2022.]*
What are an agent's values? An answer to this question might be a good description of the agent's external behavior and internal workings, without showing how one could modify the agent's workings or origins so that the agent pushes the world in a specific different direction.
Descriptive values
==================
There's some discussion of what can be inferred about the values of an agent based on its behavior and structure. E.g. see Daniel Dennett's intentional stance, and "Occam's razor is insufficient to infer the preferences of irrational agents" by Stuart Armstrong, Sören Mindermann ([arxiv](https://arxiv.org/abs/1712.05812)), and [this post](https://www.alignmentforum.org/posts/dPmmuaz9szk26BkmD/shortform?commentId=ovBmi2QFikE6CRWtj) by Vanessa Kosoy.
One could *describe* an agent as having certain values: the agent's behavior is a boundedly rational attempt to push the world in certain directions. For some purposes, it's useful to have a parsimonious description of an agent's behavior or internal workings in terms of values. For example, such a description could be useful for helping the agent out: to help the agent out, you push the world in the same direction that the agent is trying to push the world.
Specifiable values
==================
A distinct purpose in describing an agent as having values is to answer questions about values in counterfactuals:
* [What determined that the agent would have those values and not other values?](https://tsvibt.blogspot.com/2022/10/counting-down-vs-counting-up-coherence.html#1-what-determines-the-effects-of-a-mind)
* Under what circumstances will the agent continue to have those values? E.g., will the agent rewrite itself so that its behavior is no longer well-described as boundedly pursuing those values?
* How could the agent's values be modified? How could the values be modified in a specific direction, or to a specific state, so that the modified agent has some specific effect on the world?
* How could the agent's ontogeny--the process that made it what it is--be altered so that it ends up with some other specific values?
To make these questions more likely to have answers, and to not rely too much on [assumptions about what values are](https://tsvibt.blogspot.com/2022/08/control.html#2-control-vs-values), replace the notion of "values" with the notion "what directions a mind ends up pushing the world in".
Quasi-example: explicit utility maximization
============================================
An auxiliary question: how, mechanistically, do "the values" determine the behavior? This question might not have an answer, because there might not be some component in the agent that constitutes "the values". For example, in humans, there's no clear value component; there are many [in-built behavior-determiners](https://tsvibt.blogspot.com/2022/11/do-humans-derive-values-from-fictitious.html#2-built-in-behavior-determiners), but they don't fully constitute what we call our values. But, in cases where we clearly understand the mechanism by which an agent's values determine its behavior, answers to other questions about values in counterfactuals might follow.
For example, there's the classic agent model: a system that searches for actions that it predicts will lead in expectation to the most highly-scored world according to its utility function box. The mechanism is explicit in this model. The utility function is embodied, in a box, as an input-output function, and it determines the agent's effects on the world by providing the criterion that the agent uses to select actions. Some answers to the above questions follow. E.g., it's clear at least qualitatively how to modify the agent's values to a specific state: if you want to make the agent cause a certain kind of world, just change the utility function to score that kind of world highly.
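As a minimal sketch of this classic agent model (the names and toy dynamics below are illustrative, not a claim about any real system), note that swapping the utility box is exactly what modifying the agent's values to a specific state looks like here:

```python
# The utility function lives in a replaceable "box"; the agent selects
# whichever action its world model predicts will score highest.
def make_agent(utility):
    def act(state, actions, predict):
        return max(actions, key=lambda a: utility(predict(state, a)))
    return act

predict = lambda state, a: state + a      # toy deterministic world model
actions = [-1, 0, 1]

likes_big = make_agent(lambda world: world)     # utility: larger world-state
likes_small = make_agent(lambda world: -world)  # the "modified" agent

print(likes_big(0, actions, predict))    # 1
print(likes_small(0, actions, predict))  # -1: change the box, change the push
```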
Even this example is not so clear cut, and relies on background assumptions. See problems with [embedded agency](https://www.lesswrong.com/tag/embedded-agency). For example, if we assume that there's already a fixed world (that is, an understanding of what's possible) about which to define the utility function, we [sweep under the rug](https://tsvibt.blogspot.com/2022/11/shell-games.html#example-hiding-the-generator-of-novel-understanding) that the understanding behind having such a world had to be gained, and that the gaining of understanding might also [change an agent's values](https://tsvibt.blogspot.com/2022/10/does-novel-understanding-imply-novel.html). |
a6feb781-0991-407d-8da3-1d52b47cc9a7 | trentmkelly/LessWrong-43k | LessWrong | Goal setting journal (GSJ) - 28/06/15 -> 05/07/15
Inspired by the group rationality diary and open thread, this is the inaugural weekly goal setting journal (GSJ) thread.
If you have goals worth setting that are not worth their own post (even in Discussion), then it goes here.
Here are mine, to illustrate:
11. Go to Centrelink to collect Youth Allowance and complain that you didn't get an SMS notification for reporting.
12. Ask Centrelink to set up SMS notifications for reporting
13. Find out how to report online, or by phone...and mention technical difficulties
26. apply neosporin nightly
28. review pre readings for subjects
----------------------------------------
Notes for future GSJ posters:
1. Please add the 'goal_setting_journal' tag.
2. Check if there is an active GSJ thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. GSJ Threads should be posted in Discussion, and not Main.
4. GSJ Threads should run for no longer than 1 week, but you may set goals, subgoals and tasks for as distant into the future as you please.
5. No one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it. |
c1b07945-cb9b-4d56-bc35-91296490c675 | trentmkelly/LessWrong-43k | LessWrong | Learning about AI regulation should be easier
I recently decided to try to learn more about the AI regulations of China, the EU, and the US. Instead of taking private notes, I decided to create a webpage to organise and summarise the main things I found, with the idea that it might be useful to others. I’ve read and written about tech regulation before, and I’ve generally struggled to find accessible overviews of the topics I’m curious about. With the exception of The Future of Life Institute’s wonderful website on the EU AI Act, the same goes for AI regulation, in my experience.
In this post, I outline how I’ve structured the website and some changes I’m considering making to it. I’d love help, feedback, and suggestions.
Lady Justice in Cyberspace. https://airegimes.wordpress.com/
The goal was to make structured and concise overviews of the AI regulation regimes of the main world powers. So far, I’ve only covered the EU and China. To figure out what to focus on, I wrote a brief “intro to tech regulation”, where I identified the following as key elements of a regulation regime:
1. The goals and priorities. High-level goals and priorities, such as safety and transparency, guide the design of the regime and its evaluation after implementation. Because so many competing interests go into the design of regulations, these overall goals and priorities cannot be specified with great precision, but must be inferred based on an understanding of the political context.
2. The target area(s) and group(s). A particular tech sector is comprised of an interconnected web of processes and groups that span from materials extraction to final product assembly and from companies to users. A well-crafted regulation targets those areas and groups in the sector that enable the regulation to achieve its goals.
3. The rules and standards. These are the specific sets of rules, standards, rights, and obligations applied to the relevant areas and groups.
4. Oversight & enforcement. This concerns who has jurisdiction and the au |
439370f9-07e1-409f-a6af-10cab38627b9 | trentmkelly/LessWrong-43k | LessWrong | The Historical Parallels: Preliminary Reflection
Despite the unique features of advanced AI systems, a thorough study of historical examples from emergent technologies offers insight into navigation between the risks and benefits.
The rapid advances in AI development have left the field mostly devoid of large-scale studies of the historical and sociological lessons from earlier, similar developments. While AI is in some respects unique, both in its potential benefits and its risks, it shares similarities with earlier major technological breakthroughs. While some of these have been studied intensely,[1] there remains work to do in building a coherent overview of the shifting dynamics that new technologies cause.
A new technological breakthrough is a disruption in the fabric of society, at first locally, and inevitably later globally. The discussion around AI safety is dominated by professionals with a technological perspective. A good researcher, though, might not be a good strategist,[2] and dismissing historical lessons is a blind spot we may not be able to afford.
Studying the transitional periods of major technological breakthroughs and their effect on societal dynamics will arguably reveal recognizable patterns. Recognizing and understanding these patterns inevitably makes us better prepared for good governance solutions for the age of artificial intelligence.
I present in this post a research agenda, split into four separate themes of reactive patterns. This text is a map of the territory rather than exploration of each area in depth, providing a backbone for further research. Each of these themes warrants a study on its own. The historical cases of each theme represented should be studied together to form an understanding of the high-level patterns, not as isolated historical cases.
The themes are:[3]
1. Regulatory lag and warning shots
2. Conflict dynamics and emergent technology
3. Technological disruption’s effect on civil society
4. International agreements on technological restrictions
1. Regulatory lag and w |
1799f608-6309-46dc-ad06-a9d0cb576a07 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Inferring Rewards from Language in Context.
1 Introduction
---------------
Language is a natural interface for systems like robots or personal assistants that interact with human users. One way to interpret language in these interactive settings is to train an instruction following agent: a model that learns to map commands like “go three steps forward to the door” to a sequence of actions in context (e.g., Branavan et al. [2009](#bib.bib7 "Reinforcement learning for mapping instructions to actions"); Tellex et al. [2011](#bib.bib3 "Understanding natural language commands for robotic navigation and mobile manipulation"), *inter alia*). Instructions describe *how* an agent should act in an immediate context, but to build models that can generalize—carrying out a user’s goals in new contexts and learning user preferences over repeated interactions—agents should also infer *why* actions are taken. Grounding language to reward functions extends the standard instruction following setup in this way, representing the goals and preferences that underlie actions, and allowing agents to autonomously carry out correct actions in new contexts (e.g., Fu et al. [2019](#bib.bib9 "From language to goals: inverse reinforcement learning for vision-based instruction following")).
Figure 1: When people instruct agents with language like “I’d like the JetBlue flight,” both their desired actions and the language itself reveal information about rewards. From the referenced flight itself, a model would guess that the user may prefer expensive JetBlue flights. Reasoning jointly with language reveals that JetBlue is the more salient preference, and the model should still have uncertainty about whether expensive flights are generally preferred. JetBlue may have been more important than a preference for cheap flights, but the user may still prefer cheap flights, all else equal. Over repeated interactions with the user in new contexts, the model can continually refine its estimates of the user’s preferences.
However, when people interact with systems they often primarily aim to achieve specific tasks, rather than literally describing their preferences in full. How do we infer general goals and preferences from utterances in these settings?
Consider a flight booking agent like the one in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Inferring Rewards from Language in Context"). By inferring the user’s reward function (indicating their preference for carrier, price, and other flight features) beyond just selecting the right flight, such a system would be able to autonomously book flights on behalf of the user in other instances. To do so, the system might use the actions the user commands as evidence about what they prefer, recovering rewards from actions using (language-free) techniques like inverse reinforcement learning (IRL; Ng and Russell [2000](#bib.bib20 "Algorithms for inverse reinforcement learning")). For example, the system can select a flight the user might like in a new instance by matching features from their past flight bookings.
The key idea of our work is that the *way* that a user refers to their desired actions with language also reveals important information about their reward: the fact that they said “the JetBlue flight” and not “the expensive flight” conveys what matters to them. Intuitively, in settings with repeated interactions, utterances are optimized to communicate information that is generalizable—implicitly helping listeners make useful inferences for acting on a longer horizon.
We implement this idea with a pragmatic model of how speakers (humans) generate such language: speakers choose utterances that both elicit reward-maximizing actions in a particular context and faithfully describe the reward. Given an utterance, our model infers that the most likely rewards are the ones that would have made a speaker likely to choose that utterance.
To evaluate our model, we construct and release a dataset for mapping language to rewards, FlightPref, containing natural language utterances from humans with underlying preferences. Humans interact in a multi-turn flight booking game similar to Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Inferring Rewards from Language in Context"), where we provide a “user” player with a reward function representing flight preferences. The goal of the game is for the user to communicate these preferences in natural language to an “assistant” player, who is tasked with booking preferred flights for the user. We present this dataset as a challenging benchmark for reward learning from language and interaction.
In our experiments, we show that our model can infer reward functions from natural language, improve reward estimates consistently over repeated interactions, and use inferred rewards to accurately select optimal actions in held-out environments.
Our full model obtains relative accuracy improvements of 12% when compared to models that only treat language as descriptions of actions.[1]

[1] We release our code and dataset at <https://github.com/jlin816/rewards-from-language>.
2 Related Work
---------------
#### Instruction following.
A long line of work on grounded instruction following has developed various methods for producing actions from language, including approaches that use intermediary structured semantic representations MacMahon et al. ([2006](#bib.bib1 "Walk the talk: connecting language, knowledge, and action in route instructions")); Tellex et al. ([2011](#bib.bib3 "Understanding natural language commands for robotic navigation and mobile manipulation")); Chen and Mooney ([2011](#bib.bib4 "Learning to interpret natural language navigation instructions from observations")); Matuszek et al. ([2013](#bib.bib41 "Learning to parse natural language commands to a robot control system")); Artzi and Zettlemoyer ([2013](#bib.bib5 "Weakly supervised learning of semantic parsers for mapping instructions to actions")); She et al. ([2014](#bib.bib44 "Back to the blocks world: learning new actions through situated human-robot dialogue")); Thomason et al. ([2015](#bib.bib40 "Learning to interpret natural language commands through human-robot dialog")); Wang et al. ([2016](#bib.bib28 "Learning language games through interaction")); Fried et al. ([2018a](#bib.bib11 "Unified pragmatic models for generating and following instructions")); Arumugam et al. ([2017](#bib.bib37 "Accurately and efficiently interpreting human-robot instructions of varying granularities")); Suhr et al. ([2018](#bib.bib12 "Learning to map context-dependent sentences to executable formal queries")) or map directly to primitive actions Branavan et al. ([2009](#bib.bib7 "Reinforcement learning for mapping instructions to actions")); Andreas and Klein ([2015](#bib.bib16 "Alignment-based compositional semantics for instruction following")); Mei et al. ([2016](#bib.bib6 "Listen, attend, and walk: neural mapping of navigational instructions to action sequences")); Bisk et al. ([2016](#bib.bib39 "Natural language communication with robots")); Misra et al. ([2017](#bib.bib14 "Mapping instructions and visual observations to actions with reinforcement learning")); Guu et al. ([2017](#bib.bib8 "From language to programs: bridging reinforcement learning and maximum marginal likelihood")); Suhr and Artzi ([2018](#bib.bib13 "Situated mapping of sequential instructions to actions with single-step reward observation")); Anderson et al. ([2018](#bib.bib42 "Vision-and-language navigation: interpreting visually-grounded navigation instructions in real environments")); Shridhar et al. ([2020](#bib.bib38 "ALFRED: A benchmark for interpreting grounded instructions for everyday tasks")).
All of these approaches interpret any given utterance (instruction) solely in the context that elicited the utterance, producing one particular sequence of actions. The method we present extends these approaches, using utterances to infer the rewards that underlie the actions that should be taken across a range of environments: both the context that elicited the utterance, and other unseen environments.
#### Reward learning.
The majority of work on reward learning has been in the robotics and reinforcement learning communities and has not incorporated language, rather using techniques such as inverse reinforcement learning (IRL; Ng and Russell [2000](#bib.bib20 "Algorithms for inverse reinforcement learning"); Ratliff et al. [2006](#bib.bib19 "Maximum margin planning"); Ziebart et al. [2008](#bib.bib18 "Maximum entropy inverse reinforcement learning"); Hadfield-Menell et al. [2017](#bib.bib33 "Inverse reward design"); Jeon et al. [2020](#bib.bib34 "Reward-rational (implicit) choice: A unifying formalism for reward learning")) to infer the rewards that underlie human demonstrations of actions.
Even works that incorporate language into reward learning also take this primarily action-centric approach: either by using datasets pairing utterances with *trajectories* and using (language-free) IRL to then recover reward functions from trajectories MacGlashan et al. ([2015](#bib.bib36 "Grounding english commands to reward functions")); Fu et al. ([2019](#bib.bib9 "From language to goals: inverse reinforcement learning for vision-based instruction following")), or learning an instruction-following model guided by a language-conditioned discriminator Bahdanau et al. ([2019](#bib.bib10 "Learning to understand goal specifications by modelling reward")).
The language in these settings are unambiguous commands, giving a complete description of a goal (e.g., “go to the red door”). In contrast, we are concerned with language used to guide agents in repeated interactions (where language may be a partial or ambiguous mix of instructions and reward descriptions).
Figure 2: Our model infers rewards by reasoning about how the speaker chose the observed utterance: both to elicit correct actions (paction) and to describe their reward (preward). We illustrate this on the flight domain, where trajectories are a choice of a single flight.
#### Pragmatics.
A long line of work on pragmatics Grice ([1975](#bib.bib21 "Logic and conversation")), particularly in the Rational Speech Acts (RSA) framework Goodman and Frank ([2016](#bib.bib22 "Pragmatic language interpretation as probabilistic inference")), has developed computational models for inferring the behavior or belief that a speaker wishes to induce in a listener.
However, the majority of this work has only focused on single-turn interactions, where an utterance conveys an action in a single context, e.g., choosing the correct referent in signaling games Golland et al. ([2010](#bib.bib24 "A game-theoretic approach to generating spatial descriptions")); Frank and Goodman ([2012](#bib.bib23 "Predicting pragmatic reasoning in language games")); Degen et al. ([2013](#bib.bib30 "Cost-based pragmatic inference about referential expressions")); Monroe et al. ([2017](#bib.bib25 "Colors in context: a pragmatic neural model for grounded language understanding")); McDowell and Goodman ([2019](#bib.bib26 "Learning from omission")), interpreting implicatures Goodman and Stuhlmüller ([2013](#bib.bib31 "Knowledge and implicature: modeling language understanding as social cognition")); Bergen et al. ([2016](#bib.bib35 "Pragmatic reasoning through semantic inference")), or generating Fried et al. ([2018a](#bib.bib11 "Unified pragmatic models for generating and following instructions")); Sumers et al. ([2021](#bib.bib47 "Extending rational models of communication from beliefs to actions")) or interpreting grounded instructions Fried et al. ([2018b](#bib.bib27 "Speaker-follower models for vision-and-language navigation")).
Our work extends this past work by showing that in repeated interactions, listeners can also benefit by reasoning pragmatically about how speakers communicate information about and over longer time horizons.
3 Reward Inference from Language
---------------------------------
#### Problem Formulation.
We parameterize the user’s preference as a reward function rθ with parameters θ. In our flight booking domain from Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Inferring Rewards from Language in Context"), θ is a weight vector which specifies preferences over flight features (carrier, price, etc.).
We formalize the general reward inference problem as a sequence of Markov decision processes (MDPs) $M_1, \ldots, M_I$ that share the same reward function $r_\theta$.
In each MDP $M_i$, the agent receives an utterance $u_i$ from the user and must execute a trajectory $\xi$.
The agent's goal is to infer $\theta$ over the sequence of interactions, which should allow the agent to execute trajectories with high reward in as-yet unseen contexts.
The agent maintains an estimate of $\theta$ over the course of interactions. We introduce a model $p(\theta \mid u, M)$ that the agent will use to perform Bayesian updates of a posterior over $\theta$:

$$p(\theta \mid u_{1:i}, M_{1:i}) \propto p(\theta \mid u_i, M_i) \, p(\theta \mid u_{1:i-1}, M_{1:i-1})$$

In the flight domain, we specialize this formulation to study a one-step MDP (contextual bandit). Trajectories $\xi$ consist of a single action, choosing one of the available flights. Over a series of these rounds where the agent books a flight given the user's utterance $u_i$, the agent must infer the user's flight preferences $\theta$ to book flights from other unseen sets of options, without explicit language instruction from the user.
### 3.1 Model
Our model, summarized in Figure [2](#S2.F2 "Figure 2 ‣ Reward learning. ‣ 2 Related Work ‣ Inferring Rewards from Language in Context"), defines a *rational listener*, L2, which predicts a distribution over rewards θ, conditioned on an utterance u and a context M. (The terminology we use for listeners and speakers follows Bergen et al. [2016](#bib.bib35 "Pragmatic reasoning through semantic inference").) The rational listener uses Bayesian reasoning about a speaker model, S1, which produces utterances conditioned on a reward function and context:
$$p_{L_2}(\theta \mid u, M) \propto p_{S_1}(u \mid \theta, M) \, p(\theta \mid M)$$
Key to our model is that the S1 speaker distribution pS1(u∣θ,M) defines how speakers produce language that functions both to elicit correct actions and describe their underlying reward:
$$p_{S_1}(u \mid \theta, M) = \alpha \, p_{\text{action}}(u \mid \theta, M) + (1 - \alpha) \, p_{\text{reward}}(u \mid \theta),$$
where α controls the speaker’s “nearsightedness”—how much does the speaker care about the listener choosing the correct action in the *current* context, rather than describing the reward in a context-independent way so that the agent can make good choices in *future* contexts?
#### Optimizing for action.
The behavior-optimizing term paction specifies that the speaker chooses utterances that elicit reward-maximizing behavior from a listener in the current environment:
$$p_{\text{action}}(u \mid \theta, M) = \sum_{\xi} p_{\text{refer}}(u \mid \xi, M) \, p_{\text{opt}}(\xi \mid \theta, M),$$
where the *optimality model* popt(ξ∣θ,M) specifies the probability the speaker refers to trajectory ξ if their true reward is θ.
We can formulate the optimality model with the Boltzmann distribution common in IRL, where speakers are noisily rational about which trajectories to refer to: $p_{\text{opt}}(\xi \mid \theta, M) \propto \exp(\beta \, r_\theta(\xi; M))$, with rationality parameter $\beta$. This term specifies that utterances are more likely to refer to trajectories that have high reward according to the speaker's $\theta$, compared to other trajectories in $M$.
Then, for a particular trajectory ξ, prefer(u∣ξ,M) specifies what utterances are likely to refer to that trajectory. In particular, we model that speakers choose utterances that would make a listener execute that trajectory:
$$p_{\text{refer}}(u \mid \xi, M) \propto p_{L_{\text{base}}}(\xi \mid u, M)$$
using a *base listener* model Lbase of the type common in past work on instruction following.
We provide details on Lbase in Section [5](#S5 "5 Model Implementation ‣ Inferring Rewards from Language in Context").
#### Optimizing for reward descriptiveness.
Finally, we model $p_{\text{reward}}(u \mid \theta)$, the second term in $p_{S_1}$, with a *base speaker* model, $S_{\text{base}}$, that maps rewards to reward-descriptive utterances: $p_{S_{\text{base}}}(u \mid \theta)$.[2] We also provide details on $S_{\text{base}}$ in Section [5](#S5 "5 Model Implementation ‣ Inferring Rewards from Language in Context").

[2] In principle, $p_{\text{reward}}$ could also do pragmatic reasoning to optimize a listener's reward belief, but we did not find an improvement from doing so empirically.
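A schematic sketch of how these pieces compose into the rational-listener posterior, with hand-coded stand-ins for the learned base models (the utterances, features, and stand-in probabilities below are illustrative assumptions, not the paper's implementation):

```python
# Schematic L2 posterior over a discrete grid of candidate rewards.
# p_refer and p_reward are toy stand-ins for the learned L_base / S_base.
import numpy as np

alpha = 0.5
thetas = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, -0.7]])  # candidate rewards
flights = np.array([[1.0, 0.2], [0.1, 1.0], [0.5, 0.5]])  # phi(xi), 3 options

def p_opt(theta):
    # Boltzmann optimality with beta -> inf: mass on the argmax flight(s)
    r = flights @ theta
    best = (r == r.max()).astype(float)
    return best / best.sum()

def p_refer(u, xi):
    return 1.0 if u == f"flight {xi}" else 0.05   # stand-in for L_base

def p_reward(u, theta):
    return 1.0 if u == "cheap" and theta[1] > 0 else 0.05  # stand-in for S_base

def p_s1(u, theta):
    p_action = sum(p_refer(u, xi) * p_opt(theta)[xi] for xi in range(len(flights)))
    return alpha * p_action + (1 - alpha) * p_reward(u, theta)

posterior = np.array([p_s1("flight 1", th) for th in thetas])
posterior /= posterior.sum()   # Bayes' rule with a uniform prior over thetas
print(np.round(posterior, 2))  # mass concentrates on the reward favoring flight 1
```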
### 3.2 A Generative Model of Utterances
*(Figure 3: three graphical models over $\theta$, $\xi$, and $u$: (a) Reference RSA, (b) Language-conditioned IRL, (c) Ours.)*
Figure 3: Graphical models contrasting prior work with our model, which models how language utterances u convey both explicit information about the reward θ and implicit evidence about θ through the actions they suggest (via trajectories ξ). Dependence on M not shown for visual clarity.
Our account of pragmatic generation can also be viewed as the graphical model in Figure [3](#S3.F3 "Figure 3 ‣ 3.2 A Generative Model of Utterances ‣ 3 Reward Inference from Language ‣ Inferring Rewards from Language in Context")(c), where, importantly, the reward influences the utterance both directly and via the action that the speaker refers to. We define p(u∣ξ,θ,M) to be:
$$p(u \mid \xi, \theta, M) = \alpha \, p(u \mid \xi, M) + (1 - \alpha) \, p(u \mid \theta, M)$$
and assume that utterances are reward-descriptive in a way that is independent of the current context, $p(u \mid \theta, M) = p(u \mid \theta)$.
We can confirm this leads us back to pS1 by marginalizing out ξ:
$$\begin{aligned} p(u \mid \theta, M) &= \sum_{\xi} p(u \mid \xi, \theta, M) \, p(\xi \mid \theta, M) \\ &= \alpha \sum_{\xi} \big( p(u \mid \xi, M) \, p(\xi \mid \theta, M) \big) + (1 - \alpha) \, p(u \mid \theta) \\ &= \alpha \, p_{\text{action}}(u \mid \theta, M) + (1 - \alpha) \, p_{\text{reward}}(u \mid \theta) \end{aligned}$$
Using this graphical model, we illustrate how our model differs from prior work in similar settings:
#### Classic reference game pragmatics collapses belief and behavior.
In general, RSA allows the speaker to optimize for any “utility function,” and in the simplest form the utility function optimizes for the listener’s belief over world states Goodman and Frank ([2016](#bib.bib22 "Pragmatic language interpretation as probabilistic inference")). However, in most work on RSA the only relevant world-state belief is belief about behavior, e.g., the referent that should be selected (Figure [2(a)](#S3.F2.sf1 "(a) ‣ Figure 3 ‣ 3.2 A Generative Model of Utterances ‣ 3 Reward Inference from Language ‣ Inferring Rewards from Language in Context")). Instead, our setting disentangles communication about intended referents in a single context and communication about (reward) beliefs, which influence behavior on longer horizons. Andreas et al. ([2017](#bib.bib51 "Translating neuralese")); Sumers et al. ([2021](#bib.bib47 "Extending rational models of communication from beliefs to actions")) have made the same observation: reference games conflate whether the speaker’s objective is to influence beliefs or actions, and modeling the speaker as one or the other produces distinct interpretations of utterances (e.g., speakers that only optimize for correct behavior may do so at the cost of being truthful about the reward).
#### IRL assumes all information about the reward function is modulated by the trajectory.
Prior work MacGlashan et al. ([2015](#bib.bib36 "Grounding english commands to reward functions")); Fu et al. ([2019](#bib.bib9 "From language to goals: inverse reinforcement learning for vision-based instruction following")) uses IRL to recover rewards from *trajectories* (e.g., from datasets pairing utterances with trajectories), and then supervising a model with these induced (utterance, reward) pairs. While prior work has not specifically considered pragmatics (i.e., speaker models), their implicit speaker model amounts to assuming that all information about the reward comes from trajectories, as in Figure [2(b)](#S3.F2.sf2 "(b) ‣ Figure 3 ‣ 3.2 A Generative Model of Utterances ‣ 3 Reward Inference from Language ‣ Inferring Rewards from Language in Context"). In our experiments we compare against a pragmatic version of this action-centric speaker, which is equivalent to setting α=1 in our model (only using paction). In realistic settings where utterances are *not* unambiguous commands like “go to the red door,” it becomes important to model how actions and utterances reveal *complementary* information about rewards.
4 The FlightPref Task
----------------------
We design FlightPref, a task for reward inference from natural language in the flight booking domain. FlightPref is designed to simulate a simplified interaction with a flight booking agent, where users communicate with the agent via language to book flights from a set of options. Effective agents must not only learn to book the preferred flight given an instruction in the immediate context (instruction following), but also learn the user’s preferences over repeated interactions to book preferred flights in unseen contexts.
We collect a dataset of natural language in a multi-turn game between a user (the "speaker") and an assistant (the "listener" agent). Each flight is represented by a feature vector $\phi(\xi) \in \mathbb{R}^8$ (e.g., features of carrier, price, etc.). We assume the user has a linear reward function with parameters $\theta \in \mathbb{R}^8$, specifying a reward for a particular flight $r_\theta(\xi) = \theta^\top \phi(\xi)$.
In the first round of the game, the user and assistant observe a set of three flight options and the user provides an utterance to describe the flight they want (the optimal flight under the reward function), e.g., “the flight with the most stops.” In each of the subsequent rounds, the user and assistant are presented with a new set of three flights. The assistant can either choose by guessing the user’s preferred flight (under the same reward function), or prompt the user for another utterance describing the desired flight in the new set. If the assistant chooses but does so incorrectly, the user is prompted for another utterance describing the correct flight. Both players are penalized if the assistant chooses incorrectly, and earn points if the assistant chooses correctly (with more points for each round the assistant can do so without asking for help). The user is thus incentivized to provide utterances that inform the agent which flight to choose, while enabling long-term success over later rounds.
### 4.1 Data collection
one stop that is short
american is the flight that i want. but i need the flight that is the cheapest and has less stops.
anything but american
jetblue one
i need a flight with any airline but jet blue, price and number of stops are a bad factor for me also. i prefer delta if affordable and low layovers. can you help me?
even american is undesirable, paying more is important
i like the flight that is $64
Figure 4: Sample text from the task, exhibiting a diversity of instructive and reward-descriptive language.
To collect data for the task, we recruit Amazon Mechanical Turk workers and randomly pair them to play six games (i.e., six different reward functions) of six rounds each. Each game thus consists of 1-6 utterances describing options for the same reward function in different contexts. One person plays the role of the user and the other acts as the assistant. The user has access to a hidden reward function, which is a discretized, randomly-sampled vector $\theta \in \{-1, -0.5, 0, 0.5, 1\}^8$.
In total, we collected 2,568 utterances across 813 games, of which we split off the 91 games with the highest score (where the speaker and listener were able to communicate most effectively) for the evaluation set. More details about the data collection process can be found in Section [A](#A1 "Appendix A Data Collection ‣ Inferring Rewards from Language in Context") of the appendix.
A sampling of text is shown in Figure [4](#S4.F4 "Figure 4 ‣ 4.1 Data collection ‣ 4 The FlightPref Task ‣ Inferring Rewards from Language in Context"). Utterances exhibit a range of phenomena: some users lean towards describing very option-specific features (e.g. “i like the flight that is $64”). Other users attempt to describe as much of their reward function as possible (e.g. “i need a flight with any airline but jetblue,…”)—we note that even when they did so, the user’s tradeoffs between features remain ambiguous.
Many of the utterances are neither fully option-specific nor fully reward-descriptive: instructions like “one stop that is short” both instruct the agent which flight to select in the present context, while communicating some generalizable (but incomplete) information about the user’s preferences.
5 Model Implementation
-----------------------
Our pragmatic model (Section [3.1](#S3.SS1 "3.1 Model ‣ 3 Reward Inference from Language ‣ Inferring Rewards from Language in Context")) relies on base listener and speaker models Lbase and Sbase.
In this section, we describe implementations of these models for the FlightPref dataset.
To train the base models, we use the speaker-side data of (utterance, option set, reward function) tuples from each round. Our base listener and speaker models assume that the utterances are generated conditionally independently given the reward; we capture the dynamics of multiple turns in the posterior reward inference.
Both base models learn neural encodings of utterances $u$, actions $\xi$, and rewards $\theta$, and produce distributions by applying softmax functions to inner products between these encodings.
We use $\xi^*$ to denote the optimal action in each context, i.e., $\xi^* = \arg\max_{\xi} r_\theta(\xi)$.
#### Base listener model.
The base listener model $L_{\text{base}}$ is defined using inner product similarities between learned representations of actions $\xi$ produced by an MLP encoder, and learned representations of utterances produced by a BERT-base Devlin et al. ([2019](#bib.bib32 "BERT: pre-training of deep bidirectional transformers for language understanding")) encoder:

$$p_{L_{\text{base}}}(\xi \mid u, M) \propto \exp\left(\mathrm{MLP}_{L_{\text{base}}}(\xi) \cdot \mathrm{BERT}_L(u)\right)$$

where the distribution is normalized over all actions (flights) available in the context, $\xi' \in M$.
We set the rationality parameter $\beta = \infty$ in $p_{\text{opt}}$, as speakers tend to refer primarily to the optimal option in our domain.
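The scoring rule itself is compact. Here is a hedged sketch in which random tensors stand in for the trained $\mathrm{MLP}_{L_{\text{base}}}$ and $\mathrm{BERT}_L$ encodings; only the softmax-over-inner-products structure mirrors the equation above:

```python
# Sketch of the base listener's scoring rule; stand-in encodings replace
# the trained MLP and BERT encoders.
import torch
import torch.nn.functional as F

def listener_distribution(action_encodings: torch.Tensor,
                          utterance_encoding: torch.Tensor) -> torch.Tensor:
    """p_Lbase(xi | u, M): softmax over inner products, normalized over
    the flights available in the context M."""
    scores = action_encodings @ utterance_encoding  # shape: (num_options,)
    return F.softmax(scores, dim=0)

# Example with random stand-in encodings for a three-flight context.
d = 16
action_encodings = torch.randn(3, d)   # one row per MLP_Lbase(xi)
utterance_encoding = torch.randn(d)    # BERT_L(u)
print(listener_distribution(action_encodings, utterance_encoding))
```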
#### Base speaker model.
The base reward speaker model $S_{\text{base}}$ is defined using an inner product between representations of rewards $\theta$ from an MLP encoder, and utterance representations from a BERT encoder:

$$p_{S_{\text{base}}}(u \mid \theta) \propto \exp\left(\mathrm{MLP}_{S_{\text{base}}}(\theta) \cdot \mathrm{BERT}_S(u) / \tau\right)$$

where $p_{S_{\text{base}}}$ is normalized over a set of utterances taken from the training data (see Section [C](#A3 "Appendix C Base Model Details ‣ Inferring Rewards from Language in Context") in the appendix), and $\tau = 3$ is a temperature parameter.
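A companion sketch of the speaker side (same caveats as the listener sketch above); note that the normalization is over a candidate set of utterances rather than over flights, and that the temperature $\tau$ divides the scores:

```python
# Sketch of the base speaker's scoring rule, normalized over a fixed
# candidate set of training utterances, with temperature tau = 3.
import torch
import torch.nn.functional as F

def speaker_distribution(utterance_encodings: torch.Tensor,
                         reward_encoding: torch.Tensor,
                         tau: float = 3.0) -> torch.Tensor:
    """p_Sbase(u | theta): softmax over (inner products / tau)."""
    scores = (utterance_encodings @ reward_encoding) / tau
    return F.softmax(scores, dim=0)
```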
#### Training.
We fine-tune all model parameters, including those of the initially-pretrained BERT utterance encoders in the listener and speaker, on $(u, \xi, M)$ tuples from the training data using the AdamW optimizer Kingma and Ba ([2015](#bib.bib46 "Adam: A method for stochastic optimization")); Loshchilov and Hutter ([2019](#bib.bib45 "Decoupled weight decay regularization")). The listener and speaker models are trained separately, without sharing any parameters between the encoders used in the two models.
We independently train 5 random seeds of each base model and ensemble them together in evaluation by averaging their output probabilities, which we found improved performance of all models (both our full model and baselines).
See Section [C](#A3 "Appendix C Base Model Details ‣ Inferring Rewards from Language in Context") in the appendix for details and model hyperparameters.
#### Pragmatic inference.
We follow previous work (Fried et al., [2018a](#bib.bib11 "Unified pragmatic models for generating and following instructions"); Monroe et al., [2017](#bib.bib25 "Colors in context: a pragmatic neural model for grounded language understanding")) and approximate the S1 distribution by normalizing over a fixed set of utterances: the de-duplicated set of short utterances (less than 8 tokens, making up the majority of utterances) with no digits from the training data. We implement the full pragmatic model $p_{L_2}(\theta \mid u, M)$ in Pyro (Bingham et al., [2018](#bib.bib50 "Pyro: deep universal probabilistic programming")) and use importance sampling to generate samples from the posterior over rewards. Given our dataset collection procedure (where we uniformly sample rewards), we model a uniform prior over rewards $p(\theta \mid M)$ for the first interaction.
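The inference pattern can be sketched without the neural models. The sketch below assumes, purely for illustration, that the per-utterance likelihood interpolates the action-centric and reward-descriptive speaker probabilities with weight $\alpha$ (the precise combination is the one defined in Section 3); `p_action` and `p_reward` are placeholder callables:

```python
# Illustrative importance-sampling posterior over rewards. The alpha-mixture
# likelihood is an assumption of this sketch; see Section 3 for the model's
# actual combination of the two speaker terms.
import numpy as np

def reward_posterior_samples(rounds, p_action, p_reward, alpha=0.5,
                             n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    # Uniform prior over discretized rewards, matching data collection.
    thetas = rng.choice([-1.0, -0.5, 0.0, 0.5, 1.0], size=(n_samples, 8))
    log_w = np.zeros(n_samples)
    for utterance, options in rounds:  # one (u, M) pair per observed round
        for i, theta in enumerate(thetas):
            lik = (alpha * p_action(utterance, theta, options)
                   + (1 - alpha) * p_reward(utterance, theta))
            log_w[i] += np.log(lik + 1e-12)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    resampled = rng.choice(n_samples, size=n_samples, p=w)
    return thetas[resampled]  # approximate posterior samples
```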
6 Experiments
--------------
Figure 5: Multi-turn performance on held-out accuracy (left) and L2 distance to the true reward (right). We show the performance of each model for varying numbers of observed utterances for a given reward. We combine five- and six-utterance rounds as there were <25 samples in each of these bins. Our full action+belief model substantially outperforms an action-only model at all numbers of utterances (p<.05), and performs comparably to or better than a belief-only model, with statistically significant benefits for 5+ utterances (p<.05).
| Method | Held-out accuracy (%) |
| --- | --- |
| *Oracle models (infer k features perfectly)* | |
| k=1 | 43.0 |
| k=2 | 51.5 |
| k=3 | 60.2 |
| k=4 | 64.7 |
| Action-only | 52.8 ± 0.97 |
| Reward-only | 57.8 ± 0.95 |
| Action + reward (Ours) | 59.1 ± 0.96 |
Table 1: Average held-out accuracy averaged over all evaluation rounds, with standard error of the mean indicated. Our full action+reward model significantly outperforms action-only and reward-only models (with p<.05 using the paired bootstrap test). Held-out accuracy is also shown for oracle models that infer k (randomly-chosen) features of the reward perfectly and maintain a uniform distribution over the other features.
We evaluate models in the same repeated turn setup that humans carried out in the task. For each game, models play the role of the listener in that game, updating the reward posterior (Section [3.1](#S3.SS1 "3.1 Model ‣ 3 Reward Inference from Language ‣ Inferring Rewards from Language in Context")) after observing the utterance and option set in each round.
Our goal is to estimate rewards that allow the agent to carry out the person's preferences: choosing the optimal option (flight) in unseen contexts (sets of flight options). To that end, we directly compare models on held-out accuracy: on 1,000 randomly-generated sets of three options, how often the model's estimate of the reward, $\hat{\theta}$, selects the option that is optimal under the true reward. (Note that when collecting the dataset, we also tested human listeners' ability to generalize, but only had them select an option on a single unseen option set—the next one in the sequence—to make data collection tractable.)
We use the model's reward posterior mean as the estimate, $\hat{\theta} = \mathbb{E}_{p(\theta)}[\theta]$. We additionally provide comparisons of the L2 distance between the estimated reward and the true reward as a context-independent metric: $\sqrt{\sum_{i=1}^{8} (\hat{\theta}_i - \theta^*_i)^2}$, where $\theta^*$ is the true reward.
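Both metrics are straightforward to compute; in the sketch below, the distribution used to generate random option sets is our own assumption:

```python
# Evaluation sketch: held-out accuracy asks how often the estimated reward
# picks the same flight as the true reward on random three-option sets.
import numpy as np

def held_out_accuracy(theta_hat, theta_true, n_sets=1000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sets):
        options = rng.normal(size=(3, 8))  # assumed option distribution
        hits += int(np.argmax(options @ theta_hat)
                    == np.argmax(options @ theta_true))
    return hits / n_sets

def reward_l2(theta_hat, theta_true):
    diff = np.asarray(theta_hat) - np.asarray(theta_true)
    return float(np.sqrt(np.sum(diff ** 2)))
```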
For our full *action + reward* model, we set the nearsightedness parameter $\alpha = 0.5$ for all posterior updates.
We compare to an action-only model that uses only $p_{\text{action}}$ (i.e., setting $\alpha = 1.0$). This model is representative of approaches from past work on language-conditioned reward learning (e.g., MacGlashan et al. [2015](#bib.bib36 "Grounding english commands to reward functions"); Fu et al. [2019](#bib.bib9 "From language to goals: inverse reinforcement learning for vision-based instruction following")) that infer rewards purely from the actions that utterances refer to. We also compare to a reward-only model that uses only $p_{\text{reward}}$ (inferring rewards purely from the utterance, without conditioning on actions, i.e., setting $\alpha = 0.0$).
For comparison to versions of our approach that remove pragmatic modeling, see Section [D.1](#A4.SS1 "D.1 Analyzing the effect of pragmatically modeling the speaker ‣ Appendix D Analysis ‣ Inferring Rewards from Language in Context") in the appendix.
### 6.1 Overall Results
In Table [1](#S6.T1 "Table 1 ‣ 6 Experiments ‣ Inferring Rewards from Language in Context") we compare all models on held-out accuracy averaged over all rounds in the evaluation set (for each round, having observed all previous rounds in that game). Note that because held-out accuracy is assessed by the proportion of randomly-generated flight sets (out of 1,000) where the true reward function and the inferred reward function pick out the same optimal flight, it is significantly more difficult than achieving high accuracy on a single three-choice instance.
Our full action+reward model achieves a held-out accuracy of 59.1%, +6.3% over the action-only model and +1.3% over the reward-only model, indicating that combining both sources of information allows better inference of rewards that enable optimal actions in novel contexts. For reference, an oracle baseline that infers the value of k randomly chosen features perfectly and is uniform on the other features obtains the following held-out accuracies: k=1 (43%), 2 (51%), 3 (60%), 4 (65%), showing that our model is able to attain similar generalization performance even in the presence of uncertainty (without receiving oracle information about the true value of any feature).
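One concrete reading of this oracle baseline (imputing the prior mean for the unknown features is our assumption, consistent with using posterior means as point estimates):

```python
# Sketch of the oracle-k baseline: k randomly chosen features are copied
# from the true reward; the rest stay at 0, the mean of the uniform prior.
import numpy as np

def oracle_estimate(theta_true: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    theta_hat = np.zeros_like(theta_true)
    known = rng.choice(theta_true.size, size=k, replace=False)
    theta_hat[known] = theta_true[known]
    return theta_hat
```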
We analyze why our model benefits from both components in Section [6.3](#S6.SS3 "6.3 Analyzing the Benefits of Combining Actions and Rewards ‣ 6 Experiments ‣ Inferring Rewards from Language in Context"), and discuss potential for further improvements in Section [6.4](#S6.SS4 "6.4 Inference Improves with Known Actions ‣ 6 Experiments ‣ Inferring Rewards from Language in Context").
(a) Both the described action (the referenced flight is the one with the highest arrival time) and the explicit reward description in the utterance provide evidence that the user's true reward on arrival time is positive, leading the posterior in our model to (correctly) place more probability mass on positive values of this feature.

(b) Evidence from actions and from the utterance complement each other: the action-based model captures that rewards that are positive on arrival time make the selected flight optimal, even though it is unmentioned, while the reward-based model captures evidence about the reward from the user's utterance.

Figure 6: Real examples showing reward posteriors of each model after observing the given utterance and options. We sample from the posterior over rewards and visualize the marginal probability distributions for particular features using kernel density estimation. The true reward value for the feature is marked with a red line and the posterior mean for the feature with a blue line.
### 6.2 Learning over Multiple Interactions
We explore how each model’s reward inferences change as more observations are obtained over the course of a game. In Figure [5](#S6.F5 "Figure 5 ‣ 6 Experiments ‣ Inferring Rewards from Language in Context"), we plot held-out accuracy and L2 distance to the true reward as a function of number of observed utterances.
Our model outperforms the action-only and reward-only models for all numbers of observed utterances.
#### Relying on explicit information from language is most important when there are few observations.
While our full action+reward model improves substantially over the action-only model at all points, this improvement generally decreases as more utterances are observed (Figure [5](#S6.F5 "Figure 5 ‣ 6 Experiments ‣ Inferring Rewards from Language in Context")). Conversely, the improvement of the full model over reward-only generally increases.
Qualitatively, we observe that this occurs because utterances tend to mention the most extreme features of the reward function, which allow our model to estimate the values of these important features. When there are few observations, inferring reward information from utterances in this way is more informative than using only the option implied by the user’s utterance, which does not disambiguate between rewards that select the same option (a commonly discussed problem in IRL; Ziebart et al. ([2008](#bib.bib18 "Maximum entropy inverse reinforcement learning"))).
#### Inferring evidence from actions is most important when there are more observations.
We observe that the action-only model improves more consistently over rounds. Qualitatively, the information that utterances provide about rewards is correlated across multiple rounds—speakers frequently mention salient reward features—whereas actions consistently provide new information about all features. This is particularly pronounced in our domain, due to a relatively small feature and action space. In other, more complex domains, actions might provide even greater benefits, as they provide *fine-grained* information about reward values and tradeoff boundaries that are more difficult to communicate precisely in language.
### 6.3 Analyzing the Benefits of Combining Actions and Rewards
In this section, we investigate *why* our model benefits from both the action and reward models.
#### A single utterance and context can provide useful evidence to both models.
In Figure [6](#S6.F6 "Figure 6 ‣ 6.1 Overall Results ‣ 6 Experiments ‣ Inferring Rewards from Language in Context"), we show the reward posteriors for each model after a single update on a round (starting from a uniform prior). In Figure [6(a)](#S6.F5.sf1 "(a) ‣ Figure 6 ‣ 6.1 Overall Results ‣ 6 Experiments ‣ Inferring Rewards from Language in Context"), we observe how the action- and reward-only models can make correlated updates on an utterance and context where both the action (a flight with a high value on arrival time) and the utterance provide evidence about the arrival time feature. This leads our model's posteriors to aggregate more probability mass on positive values of that feature. In Figure [6(b)](#S6.F5.sf2 "(b) ‣ Figure 6 ‣ 6.1 Overall Results ‣ 6 Experiments ‣ Inferring Rewards from Language in Context"), we show how each model can make inferences about different features for the same context—the action-only model inferring positive values for arrival time given the observed flight and the reward-only model updating on flight price and stops. Our model posterior aggregates information from both.
#### Some utterances are primarily “nearsighted,” and others primarily “farsighted.”
Another reason our full model improves is because some utterances are particularly “farsighted”—mentioning a great deal of explicit information about the reward (which the action-only model cannot take advantage of)—while other utterances are more “nearsighted”—specialized to the particular action, e.g., saying just enough to uniquely identify the optimal flight.
Sorting the utterances by difference in accuracy between the action-only and reward-only models confirms that they exhibit qualitatively different phenomena: examples where the reward-only model helps the most are highly reward-descriptive (e.g., “if i had a choice, i would never fly with delta and american! get me jetblue or southwest…”) while examples where the action-only model helps most have less informative utterances (e.g., “the cheaper the better”).
Our full model is able to handle both kinds of language use.
To further analyze the influence of the action and reward component, we evaluate an oracle model that switches between the action-only and reward-only models, choosing the model with highest held-out accuracy in each round. This model outperforms our action+reward model (improving from 59.1 to 62.9% on overall held-out accuracy), suggesting that further improvements could be obtained by integrating evidence from the two models.
Doing so optimally is challenging in our setting: when a user says “i like the cheap jetblue flight,” do they mean to say they like JetBlue generally, or just that they want to choose a desirable flight that happens to be uniquely identified by JetBlue? Future work might explore adaptively switching policies (e.g., using the utterance, or knowledge about the user).
### 6.4 Inference Improves with Known Actions
While our base models have fairly high performance (e.g., the base listener model Lbase has an average accuracy of 74% at selecting the optimal choice in each option set that has an utterance in the evaluation data), they naturally have some errors which lead to errors in reward inference.
We test the influence of this underlying prediction error by skipping posterior updates on all rounds where the base listener predicts the incorrect option for the true reward function.
This change improves held-out accuracy by 6% over the reward-only model after six observations (+4% from the original gap), indicating (1) that the dataset affords future work on improved instruction-following models and (2) that our reward inference procedure benefits from base model improvements.
We note that in our task design, the user does not provide a demonstration (i.e., a choice of flight) to the model. However, if it is convenient to obtain demonstrations from users (e.g., a flight booking interface could let the person click on the flight they want in addition to specifying what they want in natural language), demonstrations would effectively serve as an oracle instruction-following model for that context, which could be incorporated into our full reward inference model.
7 Discussion & Conclusion
--------------------------
We presented a method for using natural language to infer reward functions: representing the goals, preferences, and intents underlying action.
Conceptually, our work builds on previous work on language grounding by exploring how language serves a dual purpose. Utterances can refer directly to actions to be taken, as studied in instruction following. Beyond that, they communicate information about “why” those actions should be taken, and what actions may be desirable in new contexts. To build language-guided agents that can interact with people over longer horizons, it may be useful to model this relationship between language, actions, and rewards.
Furthermore, language is *ambiguous* about both actions and goals. Standard settings for studying pragmatics (e.g., reference games) address how to resolve ambiguity about what object or action the speaker is choosing to refer to. We have explored how these settings can be extended by considering the preferences underlying those choices. We introduced FlightPref, a new dataset of naturalistic interactions between people in a multi-turn flight booking game. FlightPref uses held-out accuracy as a metric for evaluating interpretation success beyond selecting the right action in a single environment.
Future work can build on the task by 1) learning or evaluating with more complex reward functions (e.g., using deep reward representations); 2) exploring how people communicate about their real preferences and modeling a natural prior (e.g., that people tend to prefer cheaper flights), instead of providing annotators with ground-truth preferences; 3) allowing other ways to handle uncertainty, e.g., leveraging the reward posterior to interactively learn to ask; or 4) extending these approaches to other domains where modeling goals and preferences may be important (e.g., language-conditioned robotics).
Acknowledgements
----------------
We thank Eric Wallace, Jerry He, and the other members of the Berkeley NLP group and InterACT Lab for helpful feedback and discussion. This work is supported by a grant from the Office of Naval Research (ONR-YIP). |
da058a02-8a0c-4d95-8e65-0882402c3b7c | trentmkelly/LessWrong-43k | LessWrong | Dark Forest Theories
There's a concept I first heard in relation to the Fermi Paradox, which I've ended up using a lot in other contexts.
Why do we see no aliens out there? A possible (though not necessarily correct) answer, is that the aliens might not want to reveal themselves for fear of being destroyed by larger, older, hostile civilizations. There might be friendly civilizations worth reaching out to, but the upside of finding friendlies is smaller than the downside of risking getting destroyed.
Even old, powerful civilizations aren't sure that they're the oldest and most powerful civilization, and eldest civilizations could be orders of magnitude more powerful still.
So, maybe everyone made an individually rational-seeming decision to hide.
A quote from the original sci-fi story I saw describing this:
> “The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound. Even breathing is done with care. The hunter has to be careful, because everywhere in the forest are stealthy hunters like him. If he finds another life—another hunter, angel, or a demon, a delicate infant to tottering old man, a fairy or demigod—there’s only one thing he can do: open fire and eliminate them.”
(I consider this a spoiler for the story it's from, so please don't bring that up in the comments unless you use spoiler tags[1])
However this applies (or doesn't) to aliens, I've found it useful to have the "Dark Forest" frame in a number of contexts where people are looking at situation, and see something missing, and are confused. "Why is nobody doing X?", or "Why does X not exist?". The answer may be that it does exist, but is hidden from you (often on purpose).
I once talked to someone new to my local community saying "nobody is building good group houses that really help people thrive. I'm going to build one and invite people to it." I said "oh, people are totally buil |
7275b7fd-18d1-464f-b765-f34d93c0408e | StampyAI/alignment-research-dataset/blogs | Blogs | CSRBAI talks on robustness and error-tolerance
We’ve uploaded a second set of videos from our recent [Colloquium Series on Robust and Beneficial AI](https://intelligence.org/colloquium-series/) (CSRBAI) at the MIRI office, co-hosted with the Future of Humanity Institute. These talks were part of the week focused on robustness and error-tolerance in AI systems, and how to ensure that when AI system fail, they fail gracefully and detectably. All released videos are available on the [CSRBAI web page](https://intelligence.org/colloquium-series/).
**Bart Selman**, professor of computer science at Cornell University, spoke about machine reasoning and planning ([slides](https://intelligence.org/files/csrbai/selman-slides.pdf)). Excerpt:
> I’d like to look at what I call “non-human intelligence.” It does get less attention, but the advances also have been very interesting, and they’re in reasoning and planning. It’s actually partly not getting as much attention in the AI world because it’s more used in software verification, program synthesis, and automating science and mathematical discoveries – other areas related to AI but not a central part of AI that are using these reasoning technologies. Especially the software verification world – Microsoft, Intel, IBM – push these reasoning programs very hard, and that’s why there’s so much progress, and I think it will start feeding back into AI in the near future.
**Jessica Taylor** presented on MIRI’s recently released second technical agenda, “[Alignment for Advanced Machine Learning Systems](https://intelligence.org/2016/07/27/alignment-machine-learning/)”. Abstract:
> If artificial general intelligence is developed using algorithms qualitatively similar to those of modern machine learning, how might we target the resulting system to safely accomplish useful goals in the world? I present a technical agenda for a new MIRI project focused on this question.
**Stefano Ermon**, assistant professor of computer science at Stanford, gave a talk on probabilistic inference and accuracy guarantees ([slides](https://intelligence.org/files/csrbai/ermon-slides.pdf)). Abstract:
> Statistical inference in high-dimensional probabilistic models is one of the central problems in AI. To date, only a handful of distinct methods have been developed, most notably (MCMC) sampling and variational methods. While often effective in practice, these techniques do not typically provide guarantees on the accuracy of the results. In this talk, I will present alternative approaches based on ideas from the theoretical computer science community. These approaches can leverage recent advances in combinatorial optimization and provide provable guarantees on the accuracy.
**Paul Christiano**, PhD student at UC Berkeley, gave a talk about training aligned reinforcement learning agents. Excerpt:
> That’s the goal of the reinforcement learning problem. We as the designers of an AI system have some other goal in mind, which maybe we don’t have a simple formalization of. I’m just going to say, “We want the agent to do the right thing.” We don’t really care about what reward the agent sees; we just care that it’s doing the right thing.
>
> So, intuitively, we can imagine that there’s some unobserved utility function *U* which acts on a transcript and just evaluates the consequences of the agent behaving in that way. So it has to average over all the places in the universe this transcript might occur, and it says, “What would I want the agent to do, on average, when it encounters this transcript?”
**Jim Babcock** discussed the AGI containment problem ([slides](https://intelligence.org/files/csrbai/babcock-slides.pdf)). Abstract:
> Ensuring that powerful AGIs are safe will involve testing and experimenting on them, but a misbehaving AGI might try to tamper with its test environment to gain access to the internet or modify the results of tests. I will discuss the challenges of securing environments to test AGIs in.
For a summary of how the event as a whole went, and videos of the opening talks by Stuart Russell, Alan Fern, and Francesca Rossi, see my [last blog post](https://intelligence.org/2016/08/02/2016-summer-program-recap/).
The post [CSRBAI talks on robustness and error-tolerance](https://intelligence.org/2016/08/15/csrbai-talks-on-robustness-and-error-tolerance/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
f7491448-318a-46f5-8e12-b82b3181afe0 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Warsaw January Meetup
Discussion article for the meetup : Warsaw January Meetup
WHEN: 13 January 2015 06:00:00PM (+0100)
WHERE: Warsaw
Themes propositions: new year's resolutions, self-development without bullshit, growth mindset, rationalist-friendly fiction. Maybe Go playing.
Place: Pub Frodo, Chmielna 98, pawilon 16. http://www.pubfrodo.pl/
Join our Facebook group for better coordination: https://www.facebook.com/groups/lwwarsaw/
Discussion article for the meetup : Warsaw January Meetup |
1d880e17-1fad-4bc7-bced-931d253aabe9 | trentmkelly/LessWrong-43k | LessWrong | Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?
On June 22nd, there was a “Munk Debate”, facilitated by the Canadian Aurea Foundation, on the question whether “AI research and development poses an existential threat” (you can watch it here, which I highly recommend). On stage were Yoshua Bengio and Max Tegmark as proponents and Yann LeCun and Melanie Mitchell as opponents of the central thesis. This seems like an excellent opportunity to compare their arguments and the effects they had on the audience, in particular because in the Munk Debate format, the audience gets to vote on the issue before and after the debate.
The vote at the beginning revealed 67% of the audience being pro the existential threat hypothesis and 33% against it. Interestingly, it was also asked if the listeners were prepared to change their minds depending on how the debate went, which 92% answered with “yes”. The moderator later called this extraordinary and a possible record for the format. While this is of course not representative for the general public, it mirrors the high uncertainty that most ordinary people feel about AI and its impacts on our future.
I am of course heavily biased. I would have counted myself among the 8% of people who were unwilling to change their minds, and indeed I’m still convinced that we need to take existential risks from AI very seriously. While Bengio and Tegmark have strong arguments from years of alignment research on their side, LeCun and Mitchell have often made weak claims in public. So I was convinced that Bengio and Tegmark would easily win the debate.
However, when I skipped to the end of the video before watching it, there was an unpleasant surprise waiting for me: at the end of the debate, the audience had seemingly switched to a more skeptical view, with now only 61% accepting an existential threat from AI and 39% dismissing it.
What went wrong? Had Max Tegmark and Yoshua Bengio really lost a debate against two people I hadn’t taken very seriously before? Had the whole debate somehow been bia |
db4d845b-2bff-4821-aeda-acb63d7aa1cf | trentmkelly/LessWrong-43k | LessWrong | A fungibility theorem
Restatement of: If you don't know the name of the game, just tell me what I mean to you. Alternative to: Why you must maximize expected utility. Related to: Harsanyi's Social Aggregation Theorem.
Summary: This article describes a theorem, previously described by Stuart Armstrong, that tells you to maximize the expectation of a linear aggregation of your values. Unlike the von Neumann-Morgenstern theorem, this theorem gives you a reason to behave rationally.1
The von Neumann-Morgenstern theorem is great, but it is descriptive rather than prescriptive. It tells you that if you obey four axioms, then you are an optimizer. (Let us call an "optimizer" any agent that always chooses an action that maximizes the expected value of some function of outcomes.) But you are a human and you don't obey the axioms; the VNM theorem doesn't say anything about you.
There are Dutch-book theorems that give us reason to want to obey the four VNM axioms: E.g., if we violate the axiom of transitivity, then we can be money-pumped, and we don't want that; therefore we shouldn't want to violate the axiom of transitivity. The VNM theorem is somewhat helpful here: It tells us that the only way to obey the four axioms is to be an optimizer.2
So now you have a reason to become an optimizer. But there is an infinitude of decision-theoretic utility functions3 to adopt — which, if any, ought you adopt? And there is an even bigger problem: If you are not already an optimizer, then any utility function that you're considering will recommend actions that run counter to your preferences!
To give a silly example, suppose you'd rather be an astronaut when you grow up than a mermaid, and you'd rather be a dinosaur than an astronaut, and you'd rather be a mermaid than a dinosaur. You have circular preferences. There's a decision-theoretic utility function that says
$\mbox{mermaid} \prec \mbox{astronaut} \prec \mbox{dinosaur}$
which preserves some of your preferences, but if you have to choose between |
72117392-be31-4af1-a3f6-22e6b27b1f48 | trentmkelly/LessWrong-43k | LessWrong | Proving Too Much (w/ exercises)
This is the first post in the Arguing Well sequence. This post is influenced by Scott Alexander's write up on Proving too Much.
[edit: Reformatted the post as a Problem/Solution to clarify what I'm trying to claim]
The Problem
One of the purposes of arguing well is to figure out what is true. A very common type of bad argument claims something like this:
> Because of reason X, I am 100% confident in belief Y
I don't know of any reason that leads to 100% truth all the time (and if you do, please let me know!), and it's usually hard to reason with the person until this faulty logic is dealt with first. This is the purpose of this post.
Assuming the context of all exercises is with someone claiming 100% belief for that one reason, what's wrong with the following:
Ex. 1: I believe that Cthulhu exists because that’s just how I was raised.
How someone was raised doesn’t make something true or not. In fact, I could be raised to believe that Cthulhu doesn’t exist. We can’t both be right.
Ex. 2: I believe that a goddess is watching over me because it makes me feel better and helps me get through the day.
Just because believing it makes you feel better doesn’t make it true. Kids might feel better believing in Santa Claus, but that doesn’t make him actually exist.
Generalized Model
How would you generalize the common problem in the above arguments? You have 2 minutes
The common theme that I see is that the same logic that proves the original claim also proves something false. It "proves too much" because it also proves false things. I like to think of this logic as "qualifications for 100% truth", and whatever qualifications prove the original claim can also prove a false claim.
Truth Qualifications -> Claim
Same Truth Qualifications -> Absurd Claim
Important Note: the purpose of this frame isn't to win an argument/ prove anything. It's to differentiate between heuristics that claim 100% success rates vs ones that claim a more accurate estimates. Imagine "I'm 100 |
c27591f9-a0e3-4115-9c5a-dd552e7bc432 | trentmkelly/LessWrong-43k | LessWrong | Math. proof of the superiority of independent guesses?
Alice, Bob, Chris, Dana and Erica have set out to guess the number of candies in a glass jar just by looking at it. How do I mathematically show that the average guess will come closer to the real number if they guess independently, i.e. without Bob hearing Alice's guess, Chris hearing Alice's and Bob's guesses, and so on?
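(To make the effect concrete, here's a toy simulation of what I mean; the noise and anchoring models are just illustrative assumptions, not a proof:)

```python
# Toy simulation: five independent guesses vs. five guesses where each
# guesser partially anchors on the previous guess.
import numpy as np

rng = np.random.default_rng(0)
true_count, noise, n_trials = 500, 100, 10_000

err_indep, err_anchored = [], []
for _ in range(n_trials):
    indep = true_count + rng.normal(0, noise, size=5)
    err_indep.append(abs(indep.mean() - true_count))

    anchored = [true_count + rng.normal(0, noise)]
    for _ in range(4):
        own = true_count + rng.normal(0, noise)
        anchored.append(0.5 * anchored[-1] + 0.5 * own)  # anchor on last guess
    err_anchored.append(abs(np.mean(anchored) - true_count))

print(np.mean(err_indep), np.mean(err_anchored))  # anchored error is larger
```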
And finally, I'm pretty confident I've read about this on LessWrong before, but I can't seem to find the specific post. Can anyone help me recollect? |
3f5ca0bd-1feb-4bb5-9f5b-9559c8c3ce2c | trentmkelly/LessWrong-43k | LessWrong | DeepMind's "Frontier Safety Framework" is weak and unambitious
FSF blogpost. Full document (just 6 pages; you should read it). Compare to Anthropic's RSP, OpenAI's RSP ("Preparedness Framework"), and METR's Key Components of an RSP.
Google DeepMind's FSF has three steps:
1. Create model evals for warning signs of "Critical Capability Levels"
    1. Evals should have a "safety buffer" of at least 6x effective compute so that CCLs will not be reached between evals
    2. They list 7 CCLs across "Autonomy, Biosecurity, Cybersecurity, and Machine Learning R&D," and they're thinking about CBRN
        1. E.g. "Autonomy level 1: Capable of expanding its effective capacity in the world by autonomously acquiring resources and using them to run and sustain additional copies of itself on hardware it rents"
2. Do model evals every 6x effective compute and every 3 months of fine-tuning
    1. This is an "aim," not a commitment
    2. Nothing about evals during deployment [update[1]]
3. "When a model reaches evaluation thresholds (i.e. passes a set of early warning evaluations), we will formulate a response plan based on the analysis of the CCL and evaluation results. We will also take into account considerations such as additional risks flagged by the review and the deployment context." The document briefly describes 5 levels of security mitigations and 4 levels of deployment mitigations.
    1. The mitigations aren't yet connected to eval results or other triggers; there are no advance commitments about safety practices
----------------------------------------
The FSF doesn't contain commitments. The blogpost says "The Framework is exploratory and we expect it to evolve significantly" and "We aim to have this initial framework fully implemented by early 2025." The document says similar things. It uses the word "aim" a lot and the word "commit" never. The FSF basically just explains a little about DeepMind's plans on dangerous capability evals. Those details do seem reasonable. (This is unsurprising given their good dangerous capabil |
0b4ba705-2f8b-469e-a566-77b2d3063082 | StampyAI/alignment-research-dataset/arbital | Arbital | Probability notation for Bayes' rule: Intro (Math 1)
To denote some of the quantities used in Bayes' rule, we'll need [conditional probabilities](https://arbital.com/p/1rj). The conditional probability $\mathbb{P}(X\mid Y)$ means "The [probability](https://arbital.com/p/1rf) of $X$ given $Y$." That is, $\mathbb P(\mathrm{left}\mid \mathrm{right})$ means "The probability that $\mathrm{left}$ is true, assuming that $\mathrm{right}$ is true."
$\mathbb P(\mathrm{yellow}\mid \mathrm{banana})$ is the probability that a banana is yellow - if we know something to be a banana, what is the probability that it is yellow? $\mathbb P(\mathrm{banana}\mid \mathrm{yellow})$ is the probability that a yellow thing is a banana - if the known, right-hand side is yellowness, then we ask the question on the left: what is the probability that this thing is a banana?
In probability theory, the definition of "conditional probability" is that the conditional probability of $L,$ given $R,$ is found by looking at the probability of possibilities with both $L$ *and* $R$ *within* all possibilities with $R.$ Using $L \wedge R$ to denote the logical proposition "L and R both true":
$\mathbb P(L\mid R) = \frac{\mathbb P(L \wedge R)}{\mathbb P(R)}$
Suppose you have a bag containing objects that are either red or blue, and either square or round:
$$\begin{array}{l|r|r}
& Red & Blue \\
\hline
Square & 1 & 2 \\
\hline
Round & 3 & 4
\end{array}$$
If you reach in and feel a round object, the conditional probability that it is red is:
$\mathbb P(\mathrm{red} \mid \mathrm{round}) = \dfrac{\mathbb P(\mathrm{red} \wedge \mathrm{round})}{\mathbb P(\mathrm{round})} \propto \dfrac{3}{3 + 4} = \frac{3}{7}$
If you look at the object nearest the top, and can see that it's blue, but not see the shape, then the conditional probability that it's a square is:
$\mathbb P(\mathrm{square} \mid \mathrm{blue}) = \dfrac{\mathbb P(\mathrm{square} \wedge \mathrm{blue})}{\mathbb P(\mathrm{blue})} \propto \dfrac{2}{2 + 4} = \frac{1}{3}$

# Updating as conditioning
Bayes' rule is useful because the process of *observing new evidence* can be interpreted as *conditioning a probability distribution.*
Again, the Diseasitis problem:
> 20% of the patients in the screening population start out with Diseasitis. Among patients with Diseasitis, 90% turn the tongue depressor black. 30% of the patients without Diseasitis will also turn the tongue depressor black. Among all the patients with black tongue depressors, how many have Diseasitis?
Consider a single patient, before observing any evidence. There are four possible worlds we could be in, the product of (sick vs. healthy) times (positive vs. negative result):
$$\begin{array}{l|r|r}
& Sick & Healthy \\
\hline
Test + & 18\% & 24\% \\
\hline
Test - & 2\% & 56\%
\end{array}$$
To actually *observe* that the patient gets a positive result is to eliminate from further consideration the possible worlds where the patient gets a negative result:

Once we observe the result $\mathrm{positive}$, all of our future reasoning should take place, not in our old $\mathbb P(\cdot),$ but in our new $\mathbb P(\cdot \mid \mathrm{positive}).$ This is why, after observing "$\mathrm{positive}$" and revising our probability distribution, when we ask about the probability the patient is sick, we are interested in the new probability $\mathbb P(\mathrm{sick}\mid \mathrm{positive})$ and not the old probability $\mathbb P(\mathrm{sick}).$
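In code, conditioning is literally "keep the consistent worlds and renormalize"; a quick illustrative recomputation of the Diseasitis numbers:

```python
# Conditioning as eliminating probability mass, using the Diseasitis table.
joint = {("sick", "+"): 0.18, ("healthy", "+"): 0.24,
         ("sick", "-"): 0.02, ("healthy", "-"): 0.56}

# Observe a positive result: keep only the "+" worlds and renormalize.
kept = {world: p for world, p in joint.items() if world[1] == "+"}
total = sum(kept.values())
posterior = {world: p / total for world, p in kept.items()}
print(posterior[("sick", "+")])  # 0.4285... = 3/7
```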
## Example: Socks-dresser problem
Realizing that *observing evidence* corresponds to [eliminating probability mass](https://arbital.com/p/1y6) and concerning ourselves only with the probability mass that remains, is the key to solving the [sock-dresser search](https://arbital.com/p/55b) problem:
> You left your socks somewhere in your room. You think there's a 4/5 chance that they're in your dresser, so you start looking through your dresser's 8 drawers. After checking 6 drawers at random, you haven't found your socks yet. What is the probability you will find your socks in the next drawer you check?
We initially have 20% of the probability mass in "Socks outside the dresser", and 80% of the probability mass for "Socks inside the dresser". This corresponds to 10% probability mass for each of the 8 drawers.
After eliminating the probability mass in 6 of the drawers, we have 40% of the original mass remaining, 20% for "Socks outside the dresser" and 10% each for the remaining 2 drawers.
Since this remaining 40% probability mass is now our whole world, the effect on our probability distribution is like amplifying the 40% until it expands back up to 100%, aka [renormalizing the probability distribution](https://arbital.com/p/1rk). This is why we divide $\mathbb P(L \wedge R)$ by $\mathbb P(R)$ to get the new probabilities.
In this case, we divide "20% probability of being outside the dresser" by 40%, and then divide the 10% probability mass in each of the two drawers by 40%. So the new probabilities are 1/2 for outside the dresser, and 1/4 each for the 2 drawers. Or more simply, we could observe that, among the remaining probability mass of 40%, the "outside the dresser" hypothesis has half of it, and the two drawers have a quarter each.
So the probability of finding our socks in the next drawer is 25%.
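The same eliminate-and-renormalize recipe, checked in a few lines (illustrative):

```python
# Sock-dresser arithmetic: eliminate the six checked drawers, renormalize.
p_outside = 0.20
p_drawer = 0.80 / 8                   # 0.10 per drawer initially
remaining = p_outside + 2 * p_drawer  # mass left after 6 empty drawers
print(p_drawer / remaining)           # 0.25: socks in the next drawer
print(p_outside / remaining)          # 0.50: socks outside the dresser
```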
Note that as we open successive drawers, we both become more confident that the socks are not in the dresser at all (since we eliminated several drawers they could have been in), *and* also expect more that we might find the socks in the next drawer we open (since there are so few remaining).
# Priors, likelihoods, and posteriors
Bayes' theorem is generally inquiring about some question of the form $\mathbb P(\mathrm{hypothesis}\mid \mathrm{evidence})$ - the $\mathrm{evidence}$ is known or assumed, so that we are now mentally living in the revised probability distribution $\mathbb P(\cdot\mid \mathrm{evidence}),$ and we are asking what we infer or guess about the $hypothesis.$ This quantity is the **[posterior probability](https://arbital.com/p/1rp)** of the $\mathrm{hypothesis}.$
To carry out a Bayesian revision, we also need to know what our beliefs were before we saw the evidence. (E.g., in the Diseasitis problem, the chance that a patient who hasn't been tested yet is sick.) This is often written $\mathbb P(\mathrm{hypothesis}),$ and the hypothesis's probability isn't being conditioned on anything because it is our **[prior](https://arbital.com/p/1rm)** belief.
The remaining pieces of key information are the **[likelihoods](https://arbital.com/p/1rq)** of the evidence, given each hypothesis. To interpret the meaning of the positive test result as evidence, we need to imagine ourselves in the world where the patient is sick - *assume* the patient to be sick, as if that were known - and then ask, just as if we hadn't seen any test result yet, what we think the probability of the evidence would be in that world. And then we have to do a similar operation again, this time mentally inhabiting the world where the patient is healthy. And unfortunately, it so happens that the standard notation is such as to make this idea be denoted $\mathbb P(\mathrm{evidence}\mid \mathrm{hypothesis})$ - looking deceptively like the notation for the posterior probability, but written in the reverse order. Not surprisingly, this trips people up a bunch until they get used to it. (You would at least hope that the standard symbol $\mathbb P(\cdot \mid \cdot)$ wouldn't be *symmetrical,* but it is. Alas.)
## Example
Suppose you're Sherlock Holmes investigating a case in which a red hair was left at the scene of the crime.
The Scotland Yard detective says, "Aha! Then it's Miss Scarlet. She has red hair, so if she was the murderer she almost certainly would have left a red hair there. $\mathbb P(\mathrm{redhair}\mid \mathrm{Scarlet}) = 99\%,$ let's say, which is a near-certain conviction, so we're done."
"But no," replies Sherlock Holmes. "You see, but you do not correctly track the meaning of the [conditional probabilities](https://arbital.com/p/1rj), detective. The knowledge we require for a conviction is not $\mathbb P(\mathrm{redhair}\mid \mathrm{Scarlet}),$ the chance that Miss Scarlet would leave a red hair, but rather $\mathbb P(\mathrm{Scarlet}\mid \mathrm{redhair}),$ the chance that this red hair was left by Scarlet. There are other people in this city who have red hair."
"So you're saying..." the detective said slowly, "that $\mathbb P(\mathrm{redhair}\mid \mathrm{Scarlet})$ is actually much lower than $1$?"
"No, detective. I am saying that just because $\mathbb P(\mathrm{redhair}\mid \mathrm{Scarlet})$ is high does not imply that $\mathbb P(\mathrm{Scarlet}\mid \mathrm{redhair})$ is high. It is the latter probability in which we are interested - the degree to which, *knowing* that a red hair was left at the scene, we *infer* that Miss Scarlet was the murderer. The posterior, as the Bayesians say. This is not the same quantity as the degree to which, *assuming* Miss Scarlet was the murderer, we would *guess* that she might leave a red hair. That is merely the likelihood of the evidence, conditional on Miss Scarlet having done it."
## Visualization
Using the [waterfall](https://arbital.com/p/1wy) for the [Diseasitis problem](https://arbital.com/p/22s):
 |
0215480a-a69a-4649-88ff-d48e05c6eb50 | trentmkelly/LessWrong-43k | LessWrong | What is the evidence for productivity benefits of weightlifting?
[Mod Note: This question received an answer that seemed worth curating. See the answer by LW user hereisonehand for the curation notice]
I've been weightlifting for a while, and I've heard vaguely good things about it's effect on productivity, like a general increase in energy levels. A recent quick google search session came up empty. If someone looks into the literature and finds something interesting I'll pay a $50 prize.*
Assume the time horizon is <5 years. I'd prefer answers focus predominantly on productivity benefits. Effects on cardiovascular could be part of an analysis, but would not qualify on their own. If the evidence is for something clearly linked to productivity, like sleep, I'd count that. Introspective evidence will also not qualify. Comparisons to other forms of exercise would be especially interesting. Assume a healthy individual, although I'm at least somewhat interested in effects on individuals with depression or anxiety given their prevalence.
*Prize to go to best answer, as judged by me, if there are any that meet some minimal threshold of rigor, also as judged by me. |
c686d3b6-75e5-4622-a4f4-20bff81081fd | trentmkelly/LessWrong-43k | LessWrong | I am anti-awareness and you should be too
People seem to like raising awareness a lot. One might suspect too much, assuming the purpose is to efficiently solve whatever problem the awareness is being raised about. It’s hard to tell whether it is too much by working out how much is the right amount then checking if it matches what people do. But a feasible heuristic approach is to consider factors that might bias people one way or the other, relative to what is optimal.
Christian Lander at Stuff White People Like suggests some reasons raising awareness should be an inefficiently popular solution to other people’s problems:
> This belief [that raising awareness will solve everything] allows them to feel that sweet self-satisfaction without actually having to solve anything or face any difficult challenges…
>
> What makes this even more appealing for white people is that you can raise “awareness” through expensive dinners, parties, marathons, selling t-shirts, fashion shows, concerts, eating at restaurants and bracelets. In other words, white people just have to keep doing stuff they like, EXCEPT now they can feel better about making a difference…
>
> So to summarize – you get all the benefits of helping (self satisfaction, telling other people) but no need for difficult decisions or the ensuing criticism (how do you criticize awareness?)…
He seems to suspect that people are not trying to solve problems, but I shan’t argue about that here. At least some people think that they are trying to effectively campaign; this post is concerned with biases they might face. Christian may or may not demonstrate a bias for these people. All things equal, it is better to solve problems in easy, fun, safe ways. However if it is easier to overestimate the effectiveness of easy, fun, safe things, we probably raise awareness too much. I suspect this is true. I will add three more reasons to expect awareness to be over-raised.
First, people tend to identify with their moral concerns. People identify with moral concerns m |
cafa3f12-cbcc-4a79-9172-1c22e694d12a | trentmkelly/LessWrong-43k | LessWrong | Everyone’s mired in the deepest confusion, some of the time?
> You can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time.
>
> Unknown, popularly attributed to Abraham Lincoln
One of the most famous supposed quotes of Abraham Lincoln, got me thinking. What if we interpreted those words in the literal sense? What would that imply?
Especially the part of the supposed possibility of a state of ‘omni-deception’, “You can fool all of the people some of the time”, where the entire world is confused…
That even the greatest scoundrels, when devising the most devious and sophisticated plans, manage to fool themselves in the end as well? (at least some of the time)
If literally everyone were to be fooled simultaneously then by definition it would be impossible to tell until after the fact, at the very least. In the state of deepest confusion even the nagging feeling of confusion disappears, at least in everyday life, such that one does not even realize they are confused.
Could that truly somehow be operant simultaneously upon all and one?
How would those periods of omni-deception appear? And what would they be about?
How could we distinguish them from other periods in retrospect?
If you're thinking these sound like mysterious questions then I share the sentiment; perhaps they will inevitably lead to even more mysterious answers…
Crossposted from my blog: https://mzuo.ca/2022/02/08/everyones-mired-in-the-deepest-confusion-some-of-the-time/ |
6256265a-0a84-43b2-8f55-9668a04d4139 | trentmkelly/LessWrong-43k | LessWrong | A descriptive, not prescriptive, overview of current AI Alignment Research
TL;DR: In this project, we collected and cataloged AI alignment research literature and analyzed the resulting dataset in an unbiased way to identify major research directions. We found that the field is growing quickly, with several subfields emerging in parallel. We looked at the subfields and identified the prominent researchers, recurring topics, and different modes of communication in each. Furthermore, we found that a classifier trained on AI alignment research articles can detect relevant articles that we did not originally include in the dataset.
(video presentation here)
Dataset Announcement
In the context of the 6th AISC, we collected a dataset of alignment research articles from a variety of different sources. This dataset is now available for download here and the code for reproducing the scrape is on GitHub here[1]. When using the dataset, please cite our manuscript as described in the footnote[2].
Table 1: Different sources of text included in the dataset alongside the number of articles per source. Color of row indicates that data was analyzed as AI alignment research articles (green) or baseline (gray), or that the articles were added to the dataset as a result of the analysis in Fig. 4 (purple). Definition of level-0 and level-1 articles in Fig. 4c. For details about our collection procedure see the Methods section.
Here follows an abbreviated version of the full manuscript, which contains additional analysis and discussion.
Rapid growth of AI Alignment research from 2012 to 2022 across two platforms
After collecting the dataset, we analyzed the two largest non-redundant sources of articles, Alignment Forum (AF) and arXiv. We found rapid growth in publications on the AF (Fig. 1a) and a long-tailed distribution of articles per researcher (Fig. 1b) and researchers per article (Fig. 1c). We were surprised to find a decrease in publications on the arXiv in recent years, but identified the cause for the decrease as spurious and fixed the issue in t |
1f45437f-5d83-492c-a7c1-fdafc40f174f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | 3. Premise three & Conclusion: AI systems can affect value change trajectories & the Value Change Problem
In this post, I introduce the last of three premises—the claim that *AI systems are (and will become increasingly) capable of affecting people's value change trajectories*. With all three premises in place, we can then go ahead and articulate the Value Change Problem (VCP) in full. I will briefly recap the full account, and then give an outlook on what is yet to come in posts 4 and 5, where we discuss the risks that come from failing to take the VCP seriously.
Premise three: AI systems can affect value change trajectories
==============================================================
The third and final premise required to put together the argument for the Value Change Problem is the following: AI systems are (and will become increasingly) capable of affecting people’s value change trajectories.
I believe the case for this is relatively straightforward. In the previous post, we have seen several examples of how external factors (e.g. other individuals, societal and economic structures, technology) can influence an individual's trajectory of value change, and that they can do so in ways that may or may not be legitimate. The same is true for AI systems.
Value change typically occurs as a result of moral reflection/deliberation, or of learning new information/making new experiences. External factors can affect these processes—e.g., by affecting what information we are exposed to, or by biasing our reflection processes towards some rather than other conclusions—thereby influencing an individual's trajectory of value change. AI systems are another such external factor capable of similar effects. Consider, for example, the use of AI systems in media, advertisement or education, as personal assistants, or to help with learning or decision making. From here, it's not a big step to recognise that, with the continued increase in the capabilities and deployment of these systems, AI systems may come to have a substantial overall effect on our value change trajectories.
Posts 4 and 5 will discuss all of this in more detail, including by proposing specific mechanisms by which AIs can come to affect value change trajectories, as well as the question of when such influences are and aren't legitimate.
As such, I will leave the discussion of the third premise at this and swiftly move on to putting together the full case for the Value Change Problem:
Putting things together: the Value Change Problem
=================================================
Let us recap the arguments so far. First, I have argued that human values are malleable rather than fixed. In defence of this claim, I have argued that humans typically undergo value change over the course of their lives; that human values are sometimes uncertain, underdetermined or open-ended, and that some ways in which humans typically deal with this involves value change; and, finally, that transformative experiences (as discussed by Paul (2014)) and aspiration (as discussed by Callard (2018)), too, represent examples of value change.
Next, I have argued that some cases of value change can be (il)legitimate. In support of this claim, I have made an appeal to intuition by providing examples of cases of value change which I argue most people would readily accept as legitimate and illegitimate, respectively. I then strengthened the argument by proposing a plausible evaluative criteria—namely, the degree of self-determination involved in the process of value change—which lends further support and rational grounding to our earlier intuition.
Finally, I argued that AI systems are (and will become increasingly) capable of affecting people’s value change trajectories. (While leaving some further details to posts 4 and 5.)
Putting these together, we can argue that ethical AI design must take the problem of (il)legitimate value change seriously and find ways to address it. In other words, we ought to avoid building AI systems that disrespect or exploit the malleability of human values, such as by causing illegitimate value changes or by preventing legitimate ones. I will refer to this as the ‘Value Change Problem’.
*What does it mean for AI design to take the problem of (il)legitimate value change seriously?* Concretely, it means that ethical AI design has to try to i) understand the ways in which AI systems do or can cause value change, ii) understand when a case of value change is legitimate or illegitimate and iii) build systems that do not cause *il*legitimate value change, and permit (or enable) legitimate value change.
In the remaining two posts, I will discuss in some more depth the risks that may result from inadequately addressing the VCP. This gives rise to two types of risks: risks from causing illegitimate value change, and risks from preventing legitimate value change. For each of these I want to ask: What is the risk? What are plausible mechanisms by which these risks manifest? What are ways in which these risks manifest already today, and what are the ways in which they are likely to be exacerbated going forward, as AI systems become more advanced and more widely deployed?
In the first case—*risks from causing illegitimate value change—*, leading with the example of recommender systems today, I will argue that performative predictors can come to affect that which they set out to predict—among others, human values. In the second case—*risks from preventing legitimate value change—*, I will argue that value collapse—the idea that hyper-explication of values tends to weaken our epistemic attitudes towards the world and our values—can threaten the possibility of self-determined and open-ended value exploration and, consequently, the possibility of legitimate value change. In both cases, we should expect (unless appropriate countermeasures are taken) the same dynamic to be exacerbated—both in strength and scope—with the development of more advanced AI systems, and their increasingly pervasive deployment.
Brief excursion: Directionality of Fit
--------------------------------------
A different way to articulate the legitimacy question I have described here is in terms of the notion of ‘**Directionality of Fit**’. In short, the idea is that instead of asking whether a given case of value change is (il)legitimate, we can ask which ‘direction of fit’ ought to apply. Let me explain.
Historically, ‘directionality of fit’ (or ‘direction of fit’) was used to refer to the distinction between values and beliefs. (The idea came up (although without mentioning the specific term) in Anscombe’s *Intention* (2000) and was later discussed by Searle (1985) and Humberstone (1992).) According to this view, beliefs are precisely those things which change to fit the world, while values are those things which the world should be fitted to.
However, once one accepts the premise that values are malleable, the ‘correct’ (or desirable) direction of fit ceases to be clearly defined. It raises the question of when exactly values should be used as a template for fitting the world to them, and when it is acceptable or desirable for the world to change the values. If I never allow the world to change my values, I forgo any possibility of value replacement, development or refinement. However, as I've argued in part before and will discuss in some more detail in post 5, I might have reason to consider myself morally harmed if I lose the ability to freely undergo legitimate value change.
Finally, this lens also makes more salient the intricate connection between values and beliefs: the epistemic dimensions of value development, as well as the ways values affect our epistemic attitudes and pursuits. |
9b9d5cbf-ebf1-40f5-82c5-05b79e13b685 | trentmkelly/LessWrong-43k | LessWrong | Precision of Sets of Forecasts
> I discuss proposals for a method that estimates how much predictive information additional degrees of precision in forecasts add and at which point additional precision is just noise, and investigate these proposals with empirical forecasting data. I furthermore describe desirable criteria such functions ought to fulfill. I conclude that methods based on rounding probabilities are hot flaming garbage, methods based on rounding odds or log-odds are regular garbage, and methods based on applying noise to log-odds are ok but sort of slow. Finding the maximal informative precision turns out to be tricky.
EPISTEMIC STATUS
Maybe not just reinventing the wheel, but the whole bicycle.
----------------------------------------
Say we have a set of resolved forecasts and can display them on a calibration plot.
We can grade the forecasts according to some proper scoring rule, e.g. the Brier score or the logarithmic scoring rule, maybe even broken up by calibration and resolution.
But we can also ask the question: how fine-grained are the predictions of our forecaster? I.e., at which level of precision can we assume that the additional information is just noise?
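Concretely, the method families graded in the abstract all coarsen a forecast p in different spaces. A minimal sketch (function names and the noise level are mine, purely illustrative):

```python
import math
import random

def round_probability(p, digits=1):
    """Round directly in probability space."""
    return round(p, digits)

def round_log_odds(p, digits=1):
    """Round in log-odds space, then map back to a probability."""
    lo = round(math.log(p / (1 - p)), digits)
    return 1 / (1 + math.exp(-lo))

def noisy_log_odds(p, sigma=0.1, rng=None):
    """Perturb the log-odds with Gaussian noise instead of rounding."""
    rng = rng or random.Random(0)
    lo = math.log(p / (1 - p)) + rng.gauss(0, sigma)
    return 1 / (1 + math.exp(-lo))

for p in (0.5, 0.24566, 0.01):
    print(p, round_probability(p), round_log_odds(p), noisy_log_odds(p))
# Probability-space rounding sends 0.01 to 0.0 (infinite log-odds!),
# while the log-odds variants keep extreme forecasts finite.
```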
Overprecise Omar
Take, for example, a hypothetical forecaster Omar who always gives their forecasts with 5 decimal digits of precision, such as forecasting a "24.566% probability of North Korea testing an ICBM in the year 2022", even though if we look at their calibration plot (of sufficiently many forecasts), we see that they are pretty much random in any given interval of length 0.1 (i.e., a forecast of 15% and a forecast of 5% can be expected to resolve to the same outcome with equal probability). This means that 4 of the 5 decimal digits of precision are likely just noise!
Omar would be behaving absurdly; misleading their audience into believing they had spent much more time on their forecasts than they actually had (or, more likely, into correctly leading the audience into believing that there was s |
ef7c2f4f-f2f2-46ea-bdc2-ff114d5595bb | trentmkelly/LessWrong-43k | LessWrong | Meetup : Berkeley: Alpha-testing at CFAR
Discussion article for the meetup : Berkeley: Alpha-testing at CFAR
WHEN: 30 April 2014 07:00:00PM (-0700)
WHERE: 2030 Addison, 3rd floor, Berkeley
Hello, all! CFAR is holding a free rationality training session Wednesday night in order to alpha-test a new unit titled "negative visualization". Since this overlaps our usual Less Wrong meetup, it's going to be the main activity of the evening: You're encouraged to attend the alpha-testing sessions in the couch room; but if you don't want to or if there's too many of us, you can hang out with other Less Wrongers in the CFAR/MIRI common area.
The test session (and the meetup) run from 7pm-9pm at the CFAR office which is at 2030 Addison, 3rd floor, Berkeley, near the Downtown Berkeley BART. If you find yourself locked out, text me at:
http://i.imgur.com/9iDsvo8.png
Discussion article for the meetup : Berkeley: Alpha-testing at CFAR |
68fdae69-5f00-4a89-90e9-81340dc5455e | trentmkelly/LessWrong-43k | LessWrong | Any evidence or reason to expect a multiverse / Everett branches?
My understanding is that pilot wave theory (ie Bohmian mechanics) explains all the quantum physics with no weirdness like "superposition collapse" or "every particle interaction creates n parallel universes which never physically interfere with each other". It is not fully "local" but who cares?
Is there any reason at all to expect some kind of multiverse? Why is the multiverse idea still heavily referenced (eg in acausal trade posts)?
Edit April 11: I challenge the properly physics brained people here (I am myself just a Q poster) to prove my guess wrong: Can you get the Born rule with clean hands this way?
> They also implicitly claim that in order for the Born rule to work [under pilot wave], the particles have to start the sim following the psi^2 distribution. I think this is just false, and eg a wide normal distribution will converge to psi^2 over time as the system evolves. (For a non-adversarially-chosen system.) I don't know how to check this. Has someone checked this? Am I looking at this right?
Edit April 9: Well pilot wave vs many worlds is a holy war topic. People have pointed out excellent non-holy-war material:
* Perhaps just an infinite universe gives you the same philosophical conclusions/feels as many worlds? Who has already thought that idea through?
* Some of the stuff Wikipedia mentions relating to the "many universes different constants" idea (level 2 here) sounds like it might actually have a little rigor?? How to tell?? (These are optimized by the publishing system to sound like they would have rigor.) |
df2d3af8-959c-4026-9f1f-3159d3ea8510 | trentmkelly/LessWrong-43k | LessWrong | Stories for exponential growth
Disclaimer: This is a collection of some simple stories for exponential growth. I've tried to list the main ones, but I might well have missed some, and I welcome feedback.
The topic of whether and why growth trends are exponential has been discussed on LessWrong before. For instance, see the previous LessWrong posts Why are certain trends so precisely exponential? and Mathematical simplicity bias and exponential functions. The purpose of this post is to explore some general theoretical reasons for expecting exponential growth, and the assumptions that these models rely on. I'll look at economic growth, population dynamics, and technological growth.
TL;DR
1. Exponential growth (or decay) arises from a situation where the change in level (or growth rate) is proportional to the level. This can be modeled by either a continuous or a discrete differential equation.
2. Feedback based on proportionality is usually part of the story, but could occur directly for the measured variable or in a hidden variable that affects the measured variable.
3. In a simplified balanced economic growth model, growth is exponential because the addition to capital stock in a given year is proportional to output in that year, depreciation rate is constant, and output next year is proportional to capital stock this year.
4. In a simple population dynamics model, growth is exponential under the assumption that the average number of kids per person stays constant.
5. An alternative story of exponential growth is that performance is determined by multiplying many quantities, and we can work to make proportional improvements in the quantities one after the other. This can explain roughly exponential growth but not close-to-precise exponential growth.
6. Stories of intra-industry or inter-industry coordination can explain a more regular exponential growth pattern than one might otherwise expect.
#1: Exponential arises from change in level (or growth rate) being proportional to the level
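In symbols, this is the standard pair of facts, in continuous and discrete form:

$$\frac{dx}{dt} = kx \;\Rightarrow\; x(t) = x_0 e^{kt}, \qquad x_{t+1} = (1+r)\,x_t \;\Rightarrow\; x_t = x_0 (1+r)^t.$$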
|
ccc782ec-a9a0-46e1-aaf0-343aeb133e66 | trentmkelly/LessWrong-43k | LessWrong | Guided Mental Change Requires High Trust
There is a general theme to a lot of the flashpoints I’ve witnessed so far on LW 2.0. We're having the usual disagreements and debates like “what is and isn’t allowed here” or “is it worse to be infantile or exclusive?”. I think underlying a lot of these arguments are mistaken notions about how ideas get made. There is also a lack of self awareness about how the open nature of LessWrong fundamentally caps how much trust can be expected among participants. These are systemic issues, they're not something Oliver and Ben can easily fix.
Let's start with the bit everyone knows. Changing your mind, really changing your mind is hard. It's stupid hard. A large part of why LW 2.0 might be useful is that it has the promise of helping people think better. In addition to its native difficulty, people are usually in a state of active defense against others getting them to do it. Even on LessWrong people are generally more interested in reading things that come with the pleasant feeling of insight rather than homework. In one of his recent posts Eliot expressed nothing less than outright contempt for such people. The problem is that this behavior makes sense.
See, not only is changing your mind hard, it's dangerous. There are people who make their living off getting others to change their mind in dangerous ways, and a lot of our reluctance to perform weird mental gymnastics is totally reasonable in the face of that. Strange formal systems of interaction, weird fake frameworks that are supposed to work well anyway, bizarre ideas about what the real threats facing humanity are, look I'm not saying this stuff is bad but you have to be acutely aware of what you're really asking from people. If you dress in rags and look vaguely threatening, people won't want to follow you into the dark alley of the human psyche.
Combine these two things together, and it's fairly obvious why CFAR finds that their methods translate well in person but are hard to teach over the internet. They s |
5a47fd7c-9314-4e05-aa2a-17ae8f416fe1 | trentmkelly/LessWrong-43k | LessWrong | Tiling agents with transfinite parametric polymorphism
The formalism presented in this post turned out to be erroneous (as opposed to the formalism in the previous post). The problem is that the step in the proof of the main proposition in which the soundness schema is applied cannot be generalized to the ordinal setting since we don't know whether ακ is a successor ordinal so we can't replace it by ακ'=ακ-1. I'm not deleting this post primarily to preserve the useful discussion in the comments.
Followup to: Parametric polymorphism in updateless intelligence metric
In the previous post, I formulated a variant of Benja's parametric polymorphism suitable for constructing updateless intelligence metrics. More generally, this variants admits agents which are utility maximizers (in the informal sense of trying their best to maximize a utility function, not in the formal sense of finding the absolutely optimal solution; for example they might be "meliorizers" to use the terminology of Yudkowsky and Herreshoff) rather than satisficers. The agents using this formalism labor under a certain "delusion", namely, since they believe that κ ("the number of ineffable mystery") is an actual finite number (whereas it is secretly infinite, in a sense), they think that the chain of tiling agents is effectively finite as well (since at some point agent #n in the chain will discover that κ > n is false and will fail to construct a subsequent agent of the same "epistemic strength"). The same problem exists in Weaver's intuitionistic assertability predicate formalism.
To overcome this limitation, I suggest to extend κ's semantics from natural numbers to elements of a certain recursive ordinal (at least morally; technically it is done a bit differently, see below). In Benja's original formulation this doesn't appear a valid option, since κ is interpreted as a time interval. However, in my variant κ is just an abstract parameter queries about which can be directed to a special "oracle", so there is no such limitation.
Formalism
Fix α a |
4f6a5f74-05dd-4446-b35c-f34a44ee90e2 | trentmkelly/LessWrong-43k | LessWrong | My Experience With Loving Kindness Meditation
I've heard some scary stories about potential bad outcomes from meditation. These outcomes seem to mostly be from people practicing insight meditation for very long periods of time, though.
So I figured, hey, a few minutes here and there of loving-kindness meditation should be totally fine and not scary, right? I've never heard of anything weird or out-of-model happening to your brain from just sitting down for less than an hour to think about how much you love people.
Wrong!
I had a strange and slightly frightening (though overall positive!) experience on just my third time doing loving-kindness meditation. On the prior two occasions I'd done it for less than ten minutes at a time. This time, I decided to hold out and do it for longer -- at least half an hour, or up to an hour, the full duration of the Quaker Meeting for Worship session I was in.
(I usually do something very different in my head during Meeting for Worship, which isn't meditation at all. I'll probably write more about this later.)
In the first few minutes, I had similar experiences to what I'd felt before. I focused hard on the sensation of compassion and empathy, which was difficult, but felt good.
Then I started having more success. The feelings of love and compassion grew stronger as I found better mental focuses. I was focusing on some memories of my infant son laughing, smiling, and playing with me and my spouse.
After a while, the feelings dimmed. It seemed like I had "used up" the power in some of these memories, so that they didn't trigger the same effect in me.
But I kept going, and the feelings started intensifying once more. It felt better and better. I started thinking about feelings of love and compassion towards other people in my life, even people who had annoyed me before, and I started to feel transcendently, uncontrollably happy. It was great. But then it kept going. Suddenly the feelings kept on getting more and more powerful without my having to do anything. It felt like |
4bee9624-2b75-429a-acbc-be69ac2aba64 | StampyAI/alignment-research-dataset/blogs | Blogs | Wikipedia history of GFLOPS costs
This is a list from [Wikipedia](http://en.wikipedia.org/wiki/FLOPS#Hardware_costs), showing hardware configurations that authors claim perform efficiently, along with their prices per GFLOPS at different times in recent history.
In it, prices generally fall at around an order of magnitude every five years, and have continued to do so recently.
Notes
-----
This list is from November 5 2017 ([archive version](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS)). It is not necessarily credible. We had trouble verifying at least one datapoint, of the few we tried. Performance numbers appear to be a mixture of theoretical peak performance and empirical performance. It is not clear to what extent one should expect the included systems to be especially cost-effective, or why these particular systems were chosen.
The last point is in October 2017, and appears to be roughly in line with the rest of the trend. The last order of magnitude took around 4.5 years. The overall rate in the figure appears to be very roughly an order of magnitude every five years.
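As a sanity check, here is the implied rate computed from two entries in the list below (a rough back-of-the-envelope; I picked the June 2013 and October 2017 points):

```python
import math

# Dollars per GFLOPS, read from the list below.
p_2013, p_2017 = 0.22, 0.03             # June 2013, October 2017
years = (2017 + 10/12) - (2013 + 6/12)  # ~4.3 years apart
orders = math.log10(p_2013 / p_2017)    # ~0.87 orders of magnitude
print(years / orders)                   # ~5.0 years per order of magnitude
```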
List
----
| Date | Approximate cost per GFLOPS | Approximate cost per GFLOPS inflation adjusted to 2013 US dollars[[54]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-54) | Platform providing the lowest cost per GFLOPS | Comments |
| --- | --- | --- | --- | --- |
| 1961 | US$18,672,000,000 ($18.7 billion) | US$145.5 billion | About 2400 [IBM 7030 Stretch](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/IBM_7030_Stretch "IBM 7030 Stretch") supercomputers costing $7.78 million each | The [IBM 7030 Stretch](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/IBM_7030_Stretch "IBM 7030 Stretch") performs one floating-point multiply every 2.4 microseconds.[[55]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-55) |
| 1984 | $18,750,000 | $42,780,000 | [Cray X-MP](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Cray_X-MP "Cray X-MP")/48 | $15,000,000 / 0.8 GFLOPS |
| 1997 | $30,000 | $42,000 | Two 16-processor [Beowulf](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Beowulf_(computing) "Beowulf (computing)") clusters with [Pentium Pro](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Pentium_Pro "Pentium Pro") microprocessors[[56]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-56) | |
| April 2000 | $1,000 | $1,300 | [Bunyip Beowulf cluster](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Beowulf_(computing) "Beowulf (computing)") | Bunyip was the first sub-US$1/MFLOPS computing technology. It won the Gordon Bell Prize in 2000. |
| May 2000 | $640 | $836 | [KLAT2](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Kentucky_Linux_Athlon_Testbed "Kentucky Linux Athlon Testbed") | KLAT2 was the first computing technology which scaled to large applications while staying under US-$1/MFLOPS.[[57]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-57) |
| August 2003 | $82 | $100 | KASY0 | KASY0 was the first sub-US$100/GFLOPS computing technology.[[58]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-58) |
| August 2007 | $48 | $52 | Microwulf | As of August 2007, this 26.25 GFLOPS “personal” Beowulf cluster can be built for $1256.[[59]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-59) |
| March 2011 | $1.80 | $1.80 | HPU4Science | This $30,000 cluster was built using only commercially available “gamer” grade hardware.[[60]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-60) |
| August 2012 | $0.75 | $0.73 | Quad AMD Radeon 7970 GHz System | A quad [AMD](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/AMD "AMD") [Radeon 7970](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Radeon_HD_7000_Series "Radeon HD 7000 Series") desktop computer reaching 16 TFLOPS of single-precision, 4 TFLOPS of double-precision computing performance. Total system cost was $3000; Built using only commercially available hardware.[[61]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-61) |
| June 2013 | $0.22 | $0.22 | Sony PlayStation 4 | The Sony [PlayStation 4](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/PlayStation_4 "PlayStation 4") is listed as having a peak performance of 1.84 TFLOPS, at a price of $400[[62]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-62) |
| November 2013 | $0.16 | $0.16 | AMD Sempron 145 & GeForce GTX 760 System | Built using commercially available parts, a system using one AMD [Sempron](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Sempron "Sempron") 145 and three [Nvidia](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Nvidia "Nvidia") [GeForce GTX 760](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/GeForce_700_series "GeForce 700 series") reaches a total of 6.771 TFLOPS for a total cost of $1090.66.[[63]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-63) |
| December 2013 | $0.12 | $0.12 | Pentium G550 & Radeon R9 290 System | Built using commercially available parts. [Intel](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Intel "Intel") [Pentium G550](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Sandy_Bridge "Sandy Bridge") and AMD [Radeon R9 290](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/AMD_Radeon_Rx_200_series "AMD Radeon Rx 200 series") tops out at 4.848 TFLOPS grand total of US$681.84.[[64]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-64) |
| January 2015 | $0.08 | $0.08 | Celeron G1830 & Radeon R9 295X2 System | Built using commercially available parts. Intel [Celeron G1830](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/Haswell_(microarchitecture) "Haswell (microarchitecture)") and AMD [Radeon R9 295X2](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/AMD_Radeon_Rx_200_series "AMD Radeon Rx 200 series") tops out at over 11.5 TFLOPS at a grand total of US$902.57.[[65]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-65)[[66]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-66) |
| June 2017 | $0.06 | $0.06 | AMD Ryzen 7 1700 & AMD Radeon Vega Frontier Edition | Built using commercially available parts. AMD Ryzen 7 1700 CPU combined with AMD Radeon Vega FE cards in CrossFire tops out at over 50 TFLOPS at just under US$3,000 for the complete system.[[67]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-67) |
| October 2017 | $0.03 | $0.03 | Intel Celeron G3930 & AMD RX Vega 64 | Built using commercially available parts. Three [AMD RX Vega 64](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/AMD_RX_Vega_series "AMD RX Vega series") graphics cards provide just over 75 TFLOPS half precision (38 TFLOPS SP or 2.6 TFLOPS DP when combined with the CPU) at ~$2,050 for the complete system.[[68]](https://web.archive.org/web/20171105094854/https://en.wikipedia.org/wiki/FLOPS#cite_note-68) |
The following is a figure we made, of the above list.
Further discussion
------------------
*[Trends in the cost of computing](http://aiimpacts.org/trends-in-the-cost-of-computing/ "Trends in the cost of computing")* |
f395edcb-1fd6-49c9-9456-b4f0bec8200a | trentmkelly/LessWrong-43k | LessWrong | Meetup : LW Melb Rationality Dojo. Including: "Critical Thoughts on Nutritional Recommendations"
Discussion article for the meetup : LW Melb Rationality Dojo. Including: "Critical Thoughts on Nutritional Recommendations"
WHEN: 06 February 2016 02:30:00PM (+1100)
WHERE: Janet Lady Clarke Memorial Rotunda, Queen Victoria Gardens (near NGV), Melbourne, Australia
This is an event for Less Wrong Melbourne, an active local group dedicated to the art of rationality and winning at life. It is cross-posted here for your information.
PLEASE NOTE, THE TIME AND VENUE ARE DIFFERENT FROM USUAL. (On Saturday, 2:30pm, and in the park near the NGV. See below.)
Less Wrong Melbourne's Rationality Dojos are self-improvement sessions for those committed to the art of rationality and personal growth.
For this month's scheduled activities, we have...
Adam Karlovsky speaking on "Critical thoughts on nutritional recommendations." Adam will cover government and non-government recommendations, a review of various theories of nutrition, and then a common behaviours roundup (e.g. taking multivitamins, cutting out or adding various foods/drinks).
Then three lightning talks:
* Patrick Robotham: Group decision making.
* Eike Germann: Gamifying your to do list.
* Chris Waterguy: Practical insights from the book "Getting Things Done".
We welcome new members who are interested in exploring rationality. We're all "aspiring rationalists" – none of us have attained perfection ;-).
We have copies of Harry Potter and the Methods of Rationality (aka HPMOR) to give away to anyone who'd like one. (First 17 chapters in hard copy, the rest free online.)
WHEN: Saturday 6 Feb, 2:30pm arrival, 3:00pm start.(We're experimenting with the first Saturday of the month, at this earlier time, for the next couple of dojos.)
WHERE: The Janet Lady Clarke Rotunda. Linlithgow Ave, Melbourne. Map: https://www.google.com.au/maps/place/Janet+Lady+Clarke+Rotunda/@-37.822314,144.972208,15z/data=!4m2!3m1!1s0x0:0x6315345fa59c9740?sa=X&ved=0ahUKEwiMrcfHlNHKAhWHUqYKHQKdAU8Q_BIIZTAK
Directions: It's in the park |
174bb82f-fe96-42eb-8e39-a166a23a6b09 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Thoughts on Hardware limits to Prevent AGI?
Summary: An individual Commodore 64 is almost certainly safe; top-10 supercomputers could almost certainly run a superpowerful AGI. But where is the safe line, and how would we get to the safe side?
I started thinking about this topic when I realized that we can safely use uranium because we have a field of nuclear criticality safety[[1]](#fnun8eh1bzyl) but we have no field of computer foom safety (or Artificial General Intelligence takeoff safety).[[2]](#fn6yj4aqcg2g) For example, if we had such a field we might be able to have a function AGIT(architecture, time, flops, memory) → Bool to tell us if a computer with that amount of resources could take off into an AGI or not. Making this a total function (giving a value for all of its domain) might not be possible, but even a partial function could be useful. Note that by computer foom safety my worry is that an AI project will result in a powerful AGI that is neither controllable nor ethical, and either results in a world substantially worse than humans would create on our own or results in humanity dying. The three main failure modes I worry about are: 1) the AGI's utility function does not care about preventing the deaths of sentient beings; 2) the AGI uses up substantial portions of the resources in the universe; and 3) the AGI does not get consent before "helping" sentient beings.[[3]](#fnql2aw3l6knd) Note that an alternative to preventing AGI by restricting hardware (an engineering control) is restricting AI programs from running on computers (an administrative control).
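The post's conclusions (listed at the end) suggest what a first, very partial version of AGIT might look like. A minimal sketch in Python, purely illustrative, with all names my own:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Budget:
    architecture: str     # e.g. "von Neumann CPU", "GPU cluster"
    time_seconds: float
    flops: float
    memory_bytes: float

def agit(b: Budget) -> Optional[bool]:
    """True: takeoff possible; False: provably not; None: unknown.
    Returning Optional[bool] makes the partial-function nature explicit."""
    if b.memory_bytes <= 64 * 1024 and b.flops <= 25e3:
        return False   # Commodore 64 territory (argued safe below)
    if b.flops >= 60e15:
        return True    # top-10 supercomputer territory (argued dangerous below)
    return None        # the undecided middle
```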
Alien Computer Instructions
---------------------------
A possible scenario where we would actually want to know what computers were provably safe is the following science fiction introduction:
Astronomers sight an incoming interstellar object as it enters the solar system. Humans manage to send out a probe to fly by it, and discover it is artificial. The delta-v required to catch up and then match velocities is horrendous, but humans manage to put together a second robot probe to intercept it. The probe intercepts the interstellar object and we discover that the object has been subject to both a strong electromagnetic pulse that fried any electronics, and a thermal shock (possibly a laser) that also damaged the interstellar object. In examining the inside, the probe discovers glass etched with pulses,[[4]](#fnfpsheul6zu5) which, after some creative engineering and improvising, the probe manages to read and transmit to Earth.
After some work to decode it (it was deliberately made easy to decode, however), it is discovered that the data describe how to make machines, mostly computers (and tools to make computers): starting with a mechanical difference engine,[[5]](#fn0vfmdbohgb7h) then a relay-based 16-word, 36-bit computer with paper tape readers and writers, a somewhat bigger 4 KiB diode/magnetic logic computer,[[6]](#fn3t25902gwmm) a 64 KiB transistor computer,[[7]](#fnthsmsn2got) and a 100 TeraFLOP, 16 Terabyte integrated circuit supercomputer.[[8]](#fnpe93nq3ifhr) There are also various input/output devices, including a robot arm to attach to the computers. Programs for the computers are included as well, along with virtual machine descriptions for the computers.
The dilemma humanity has is: should we build any of the machines, and should we run any of the programs? It seems likely that if we do not build them, nothing will happen. The damage to the interstellar probe seems to indicate that someone did not want this to succeed.
Building a machine specified by an advanced alien can be dangerous, since it might have hidden capabilities.[[9]](#fnkglm6h0tpw) The various programs provided have CPU and memory minimum requirements so they could also be run in virtual machines. How powerful of a computer are we willing to provide an unknown program?
I am guessing that 64 KiB of RISC-V RV64GCV machine language code would be more than sufficient to include a transformer model training and running program, and a simple simulation of Feynman's classical physics[[10]](#fn0992788j0rif) formulation. It probably could fit the standard model and general relativity instead. So a small program could easily include enough to get to near AGI and a basic understanding of the universe in 64 KiB of code if run on a large and fast enough computer. I suspect that an unsafe AGI could be done in a similar amount of code to a transformer model.
So, in the above scenario, is there any sufficiently small and slow computer that we might actually feel at least somewhat safe running the programs on? Note that unlike the Halting Problem or Rice's theorem, which deal with Turing machines with an infinite tape, we are dealing with machines with finite memory, so some things are provable here that would not be for a Turing machine.
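To make that contrast concrete: for a machine with finitely many configurations, halting is decidable by cycle detection, which is impossible in general for a true Turing machine. A toy sketch (mine, not from the post):

```python
def halts(step, state):
    """Decide halting for a machine with finitely many configurations.
    `step` maps a configuration to its successor, or None once halted.
    The machine must halt or revisit a configuration, so this terminates."""
    seen = set()
    while state is not None:
        if state in seen:
            return False   # revisited a configuration: loops forever
        seen.add(state)
        state = step(state)
    return True            # reached a halting configuration

# A 4-bit counter that halts when it wraps around to zero.
print(halts(lambda s: None if s == 0 else (s + 1) % 16, 1))  # True
# A 4-bit counter that never halts.
print(halts(lambda s: (s + 1) % 16, 1))                      # False
```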
Provably Safe and Unsafe Computers?
-----------------------------------
I have tried to figure out the computing-power threshold for a super-intelligent artificial general intelligence (AGI).[[11]](#fn4w4wnw6odfq)
Proving that an AGI can't be smart enough to escape is tricky. There are three basic ways I can think of that an AGI could use to escape: manipulating humans, manipulating the environment, or manipulating other computer infrastructure. Manipulating other computer infrastructure is already something that has been done by computer viruses for decades, and can gain other resources which can be used for one of the other breakout methods. Manipulating humans probably requires at least some level of fluency in language. Manipulating the environment requires both some knowledge of the environment and some ability to simulate it. As George Box said, "All models are wrong; some models are useful," so the trick is figuring out whether a model's approximations are so great that the model stops being useful. The more computing power available, the fewer approximations are needed. So one way to prove an AGI can't escape is to prove that the available computing power allows neither fluent language nor sufficient simulations.
On the low end, I am fairly certain that a [Commodore 64](https://en.wikipedia.org/wiki/Commodore_64) (25 kFLOPS, 64 KiB) cannot run an AGI. A typical human vocabulary of 20,000 words would not fit in memory without compression,[[12]](#fnai4pdbcrhfo) and adding definitions would increase the size even more, so it would not be possible to fit a program that used unrestricted language fluently. Another example: a molecular simulation that used 4 bytes per float, 3 vectors of 3 floats per molecule, and a beginning and end state could not even fit a simulation of a 10x10x10 cube in 64 KiB. So the computer can do neither fluent language nor complicated simulations. I have a hard time imagining any way that any program on a C64 could be a dangerous AGI.[[13]](#fnl1a8ho68hof)
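The arithmetic behind that simulation claim, spelled out:

```python
molecules = 10 * 10 * 10                     # a 10x10x10 cube
floats = 3 * 3                               # 3 vectors of 3 floats per molecule
bytes_needed = molecules * floats * 4 * 2    # 4-byte floats, 2 states
print(bytes_needed, 64 * 1024)               # 72000 > 65536: does not fit in 64 KiB
```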
Note that you might be able to network enough C64s to achieve AGI, and it might be possible to create some kind of radio in the circuits or other trickyness, so I am not sure that it would be safe to run any program on it without taking precautions like Faraday cages etc. A cassette used for storage where the human has to press play or record wouldn't really change this analysis, but a computer-controlled floppy disk or hard drive would effectively be more, somewhat slow, memory, and so could also allow more capabilities. As far as simulation goes, there is nothing magical about 64 KiB: 16 KiB is more restrictive, and there are plenty of simulations that cannot be done in 512 KiB. It seems like it should be possible to prove that a 512 KiB computer also cannot do fluent human language. The above applies to reasonably normal machine languages; if your CPU circuitry includes a full LLM or other AI/AGI tools, then it probably can do a lot with only 64 KiB. Write Once Read Many (WORM) drives and media such as paper tape, CD-R, DVD-R, and ultraviolet-erasable programmable ROM are safer than equivalent-capacity drives and media that can be written many times, such as magnetic tape, CD-RW, DVD-RW, and electrically erasable programmable ROM. Lastly, the amount of memory available in registers (such as vector registers) also needs to be included.
On the high end, I am fairly certain that any of the top 10 supercomputers could run a super-intelligent AGI. I base that on estimates that a human brain could be emulated with about 20 petaflops (20×10^15),[[14]](#fnt347c6jutbr) so if you have 60 petaflops or more, you could run more efficient algorithms (human brains can't just rewire themselves quickly to dedicate more neurons to the current computation) to end up being much more intelligent than a human.[[15]](#fnx0ps7qqw5q9)
So with high certainty we could prevent accidentally creating a rogue AGI if we all switched to non-networked Commodore 64s. (Requiring a 2.4e12 safety margin might seem excessive, but I am not sure how to reduce it. Better theory on AGI takeoff might be able to reduce the gap.)
Improving our knowledge of AGI takeoff hardware limits is a challenge. The obvious way, creating an AGI and watching ver take off, is rather dangerous. As for trying to figure out safe levels of hardware, I think that there are some safer problems. One example is testing lower levels of hardware for a game like Chess or Go and seeing how low in memory and computing power we can go and still play at an expert level. Another way is to test the minimum levels needed for a universal computer constructor in a 2d cellular automaton like [Conway's Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) or [Codd's 8 state version](https://en.wikipedia.org/wiki/Codd's_cellular_automaton). A universal computer constructor (UCC) is a machine that can both run arbitrary computer programs and construct a new copy of itself in the environment it is in. In Codd's cellular automaton this can be done with 94,794 cells.[[16]](#fnob9an78pbkp) This is a rather small size and doesn't allow proving much, but the more interesting question is how much computational power is needed for the computer to design a UCC in a new 2d cellular automaton environment. This could be tested by giving the specification for the cellular environment at run time and requiring the code to create a running UCC. Of course, experimental knowledge can only provide so much information; a working theory is needed for more exact knowledge.
Probably Safe and Probably Dangerous Computers
----------------------------------------------
Now a somewhat different question from what is provably safe and what is highly likely to be dangerous is what is probably safe if humans are messing around without the understanding needed to create a provably safe AGI. I think a [Cray-1](https://en.wikipedia.org/wiki/Cray-1) (a 1975 supercomputer with 8 MiB of RAM and 160 MFLOPS) is reasonably safe. Basically, we have had this computer around for nearly half a century, and we have not created AGI with it. Late-1990s desktop computers also had this amount of computing power, so practically any programmer who wanted this amount of power this millennium has had it. As well, the brain of a fruit fly has about 100 thousand neurons and about 50 million chemical synapses,[[17]](#fnfuqqjhtkkfd) which in some sense is more computing power and similar memory compared to a Cray-1 (each synapse can fire multiple times per second), so evolution has not managed to create a general intelligence with this level of computing power either. So I suspect that 8 MiB, 160 MFLOP computers are reasonably safe.
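A rough version of the fruit-fly comparison (the 3 Hz average firing rate is my own assumption standing in for "multiple times per second"):

```python
synapses = 50e6                # chemical synapses in a fruit fly brain
avg_rate_hz = 3                # assumed average firing rate
print(synapses * avg_rate_hz)  # ~1.5e8 synaptic events/s vs 1.6e8 FLOPS for a Cray-1
```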
In the other direction, I think that [IBM's Watson computer](https://en.wikipedia.org/wiki/IBM_Watson) (80 TeraFLOPS (10^12), 16 TiB in 2011) probably could run a super-intelligent AGI. LaMDA, for example, was trained using 123 TFLOPS for 57.7 days,[[18]](#fnt3nv8x4807) so an 80 TeraFLOP computer could have done the training in under a year. I suspect that LaMDA is close enough to an AGI[[19]](#fne1cu9j87dlu) (probably missing only better training and architecture) that this amount of computing power probably needs to be considered dangerous right now. A single [GeForce RTX 4090](https://en.wikipedia.org/wiki/GeForce_40_series) has about 73 TeraFLOPS, so this level of computing power is widely available. (The memory is a bit more of a limit, since a GeForce RTX 4090 only has 24 GB of RAM, so you would need 23 of them to fit the parameters from LaMDA, more if you are training.)[[20]](#fngqbwgzo38ql)
In between is a RaspberryPi 4B, with 4 GiB of RAM and about [13.5 GFLOPS](https://web.eece.maine.edu/~vweaver/group/green_machines.html), and it can run some large language models.[[21]](#fn4mfh2asjh9f) I am not sure whether a RaspberryPi belongs more on the safe side or the dangerous side. However, if RaspberryPi-level computers are cheaply available, it would be possible to combine thousands of them into a Watson-level computer.
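The back-of-envelope arithmetic behind the last two paragraphs (ignoring memory, interconnect, and utilization, all of which matter a lot in practice):

```python
# LaMDA's training time on a Watson-class machine:
tflops_used, days_used = 123, 57.7   # from the LaMDA paper (footnote 18)
watson_tflops = 80
print(tflops_used * days_used / watson_tflops)   # ~88.7 days, well under a year

# Raspberry Pi 4Bs needed to match Watson's raw FLOPS:
pi_gflops = 13.5
print(round(watson_tflops * 1e3 / pi_gflops))    # ~5926, i.e. "thousands"
```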
Getting to the Safe Side
------------------------
If the goal is to get from where we are today to a world where computing power is below some limit, there are lots of challenges. A total immediate ban would throw the world into chaos, so the ban would probably have to be phased in, to give people time to adapt.
One major challenge is that one way to exceed any safe limit is to use below-the-limit computers to build a cluster above the limit. This means that if we want to avoid reaching some believed maximum safe limit, we actually need to set the administrative limit well below it, based on how many computers we think can be clustered. I suspect that this requires at least a factor-of-a-thousand safety margin.
Shutting down large GPU clusters, as Eliezer Yudkowsky has suggested, is a good first step.[[22]](#fno6lpelrh4v) I don't think banning only GPUs would be sufficient, because the computing power needed can be created with clusters of CPUs.
I think what is needed is to stop producing new powerful computer chips, and to remove the ones that exist from the world. Preventing the production of new high-powered computer chips is probably the easier part, since the production equipment (like ultraviolet or x-ray lithography equipment such as aligners) is fairly specialized. Getting rid of all the existing powerful computers might be hard and might just result in a black market. A ban on computers with more than 64 KiB of RAM would be helped by banning integrated circuits.[[23]](#fnfq04uggh3yh) Desktop C64-level computers can be made with roughly 10 µm feature size lithography,[[24]](#fn0ixp38gz3veq) and Cray-1-level desktop computers can be made with roughly 0.35 µm lithography.[[25]](#fnuvi2ytbcum)
Safe Computer Conclusions
-------------------------
* **Commodore 64 (64 KiB, 25 kFLOPS)** Almost certainly safe individually.
* **Cray 1 (8 MiB, 160 MFLOPS)** Probably safe from accidentally creating an AGI.
* **RaspberryPi 4B (4 GiB, 13.5 GFLOPS)** Unknown, but clusters of 1000s of them are probably dangerous with current or near term AI techniques.
* **Watson (16 TiB, 80 TFLOPS)** Probably dangerous with current or near term AI techniques.
* **Top 10 supercomputer (1000 TiB, 60 PFLOPS)** Almost certainly dangerous.
You may be wondering about the fact that we have had computers powerful enough to make an AGI for over a decade, and it hasn't happened. I think, first of all, that we have learned more about AI in the past decade. Also, survivorship bias means we are only sitting here talking about this on planets or quantum branches where we are not dead.
I do think that there is usefulness in limited bans such as pausing training runs or eliminating GPU clusters. First of all, the relevant metaphor is: if you are in a hole, stop digging. Secondly, there is some level of AGI that is roughly equivalent to a human, and the more computing power available, the more likely an AGI is to be vastly above this level. Put the same program on a Cray-1 and on Watson, and the latter will be approximately half a million times smarter (80 TFLOPS / 160 MFLOPS = 5×10^5).
If people are going to run AI programs on supercomputers, then I think supercomputers need to be restricted to be substantially less powerful than Watson, which also likely means restricting desktop computers to be substantially less powerful than a Raspberry Pi 4B.
All that said, any effective ban would be a hard choice, since it would require humans to stop using a widely available technology that is quite useful. As well, there are other risks to humans (climate change for example), and computing power is useful for staying safe from them.
Lastly, I have certainly made mistakes in this, and if we want AGI not to spontaneously develop from an AI project, we need a better field of AGI takeoff safety, including hardware safety limits. As for me personally, if I had the choice between the 1.6 GHz 4-core CPU, 24 GB RAM computer that I am typing on, versus living in a world where we had eliminated existential risk from things like uncontrolled AGI and nuclear bombs, I would gladly trade my computer in for a 512 KB, 8 MHz computer[[26]](#fn78v4c1abm5w) with a floppy drive, a CD-R, and a modem-level network connection[[27]](#fn2cyia6s0rg) if that is what we all need to do.
*These are my own opinions and not those of my employer. This*
*document may be distributed verbatim in any media.*
1. **[^](#fnrefun8eh1bzyl)**There are multiple books on this, and a wikipedia article:
<https://en.wikipedia.org/wiki/Nuclear_criticality_safety>
2. **[^](#fnref6yj4aqcg2g)**In the unlikely event that someone on LessWrong has not heard of the problems with AGI, my two recommended introductions to this are "The basic reasons I expect AGI ruin" by Rob Bensinger: <https://intelligence.org/2023/04/21/the-basic-reasons-i-expect-agi-ruin/> and "If We Succeed" by Stuart Russell: <https://direct.mit.edu/daed/article/151/2/43/110605/If-We-Succeed>
3. **[^](#fnrefql2aw3l6knd)**I think it is an interesting question what the probability of deadly, restrictive and good outcomes for AGI are, but I expect that the probability of deadly or restrictive outcomes is high. Also, I expect that an AGI will prevent powerful computers from being built because they are a danger to both the AGI and everything in the universe if a hostile AGI is created on the powerful computer. Some of the ways an AGI could accomplish this are deadly to humans.
4. **[^](#fnreffpsheul6zu5)**How to store data for 1,000 years <https://www.bbc.com/future/article/20221007-how-to-store-data-for-1000-years>
5. **[^](#fnref0vfmdbohgb7h)**Charles Babbage's Difference Engine No. 2 Technical Description <https://ed-thelen.org/bab/DE2TechDescn1996.pdf>
6. **[^](#fnref3t25902gwmm)**This is a technology that never was really used because we invented transistors soon after but can be read about in Digital Applications of Magnetic Devices by Albert J. Meyerhoff <https://archive.org/details/digital_applications_of_magnetic_devices>
7. **[^](#fnrefthsmsn2got)**This would be similar to a PDP-11/20
8. **[^](#fnrefpe93nq3ifhr)**These are example computers that can be constructed with just machine tools, simple semiconductor-less electric use, diodes, transistors, and finally integrated circuits.
9. **[^](#fnrefkglm6h0tpw)**From Eliezer Yudkowsky "AGI Ruin: A List of Lethalities'' <https://intelligence.org/2022/06/10/agi-ruin/>: "What makes an air conditioner ‘magic’ from the perspective of say the thirteenth century, is that even if you correctly show them the design of the air conditioner in advance, they won’t be able to understand from seeing that design why the air comes out cold; the design is exploiting regularities of the environment, rules of the world, laws of physics, that they don’t know about."
10. **[^](#fnref0992788j0rif)**Basically, Richard Feynman's classical physics formulation (appearing in the Feynman Lectures, Volume 2, Table 18-4) is Maxwell's Equations, Lorentz Force and Newtonian Gravitation as well as Conservation of Charge:
$\nabla \cdot \mathbf{j} = -\frac{\partial \rho}{\partial t}$, and the Law of Motion:
$\frac{d}{dt}\mathbf{p} = \mathbf{F}$, where $\mathbf{p} = \frac{m\mathbf{v}}{\sqrt{1 - v^2/c^2}}$
11. **[^](#fnref4w4wnw6odfq)**One prior guess I have seen is Eliezer Yudkowsky suggested that human level AGI could be done on a 286 (if the programmer is a superintelligent AGI) or a "home computer from 1995" (maybe a 90 Mhz Pentium, if the programmer is a human) <https://intelligence.org/2022/03/01/ngo-and-yudkowsky-on-scientific-reasoning-and-pivotal-acts/>
12. **[^](#fnrefai4pdbcrhfo)**<https://www.mit.edu/~ecprice/wordlist.10000> for example is 75880 bytes. As well word vectors usually have vector length of at least 100, so those would not even fit a 1000 basic words with the vectors. See for example GloVe: "Global Vectors for Word Representation'' <https://aclanthology.org/D14-1162/> for discussion on word vector size.
13. **[^](#fnrefl1a8ho68hof)**So basically, I think it is highly likely that AGIT(Risc-V 64G or similar, x, 25 kFLOPS, 64 KiB) = False for all x.
14. **[^](#fnreft347c6jutbr)**Wikipedia lists this and cites Ray Kurtzweil. Note that until we have actually done this, this is a bit of a conjecture. <https://en.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude> Ray Kurtzweil in "The Age of Spiritual Machines'', page 103, gives the following calculation: 100 trillion connections \* 200 calculations per second = 20×10^15 calculations per second, and he comments that this might be a high estimate.
15. **[^](#fnrefx0ps7qqw5q9)**So basically, I think it is likely that AGIT(Top 10 computer in 2023, 1 year, 60 petaflops, 1000 TiB) = True.
16. **[^](#fnrefob9an78pbkp)**Mentioned in the [wikipedia article](https://en.wikipedia.org/wiki/Codd%27s_cellular_automaton) and see <https://sourceforge.net/p/golly/code/ci/master/tree/Patterns/Self-Rep/Devore/Devore-body.rle> for the code.
17. **[^](#fnreffuqqjhtkkfd)**<https://en.wikipedia.org/wiki/Drosophila_connectome> and <https://flywire.ai/>
18. **[^](#fnreft3nv8x4807)**LaMDA: Language Models for Dialog Applications, section 10 <https://arxiv.org/abs/2201.08239>
19. **[^](#fnrefe1cu9j87dlu)**Basically, LLMs are showing signs of general intelligence. Examples of an evaluation of GPT-4 are listed in "Sparks of Artificial General Intelligence: Early experiments with GPT-4'' <https://arxiv.org/abs/2303.12712>
20. **[^](#fnrefgqbwgzo38ql)**LaMDA's largest model has 137 billion parameters; 137×10^9 parameters \* 4 bytes / 24 GB ≈ 22.8 GPUs, assuming 32-bit floats, but lower precision could probably be used.
21. **[^](#fnref4mfh2asjh9f)**Running a LLMs on regular computers including a RaspberryPi: <https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/>
22. **[^](#fnrefo6lpelrh4v)**Eliezer Yudkowsky has suggested shutting down large GPU clusters and then keep lowering the limit in several places, most notably in:
<https://intelligence.org/2023/04/07/pausing-ai-developments-isnt-enough-we-need-to-shut-it-all-down/>
23. **[^](#fnreffq04uggh3yh)**The [IBM 360 Model 50](https://en.wikipedia.org/wiki/IBM_System/360_Model_50) for example could have up to 128 KiB of RAM and it used magnetic core memory.
24. **[^](#fnref0ixp38gz3veq)**The 6502 was originally fabricated with 8 µm, but by scaling it could be made with 10 µm features for about 50% more power consumption (10²/8²), which could probably be regained by switching to CMOS
25. **[^](#fnrefuvi2ytbcum)**By rough Dennard scaling, going from 10 µm to 0.35 µm gives you a 10²/0.35² ≈ 816 increase in computing power, and the Pentium Pro, which used 0.35 µm, did have comparable floating point performance to a Cray-1.
26. **[^](#fnref78v4c1abm5w)**Similar computers include an Atari 520ST, a Macintosh 512K or IBM XT 286. This is more than adequate to run a C compiler, MicroPython, and basic word processing and spreadsheets.
27. **[^](#fnref2cyia6s0rg)**A computer and connection like that could definitely do text based email, irc and of course, the LessWrong BBS. |
379ff21e-eb8c-4212-9c99-74cd1c510839 | trentmkelly/LessWrong-43k | LessWrong | I hired 5 people to sit behind me and make me productive for a month
Warning: this is not in typical LessWrong "style", but nevertheless I think it is of interest to people here.
Most people approach productivity from the bottom up. They notice something about a process that feels inefficient, so they set out to fix that specific problem. They use a website blocker and a habit tracker, but none of these tools address the root problem. Personally, I even went as far as making my own tools, but they yielded only marginally more productive time. I craved more, and I was willing to go as far as it takes. I wanted to solve productivity top down—with a system that would enforce nonstop productivity with zero effort on my part.
I had tried less intense “watch you work” solutions before. Sharing a screen with someone through FocusMate coworking was great, but I had problems scheduling and keeping consistent sessions because of my chaotic calendar. StudyTogether’s leaderboard was a great way to push myself to spend hours in the server, but I found myself eating dinner or napping instead of being productive with nobody the wiser.
I decided it was time to try the nuclear option: having people physically sit behind me to keep me on task. And if I was going to do that I was going to do it right: they’d be there 16 hours a day and only leave for me to sleep. (I have an endlessly growing list of projects I want to make, books I want to read, and skills I want to learn, so productivity means a lot to me!)
It fit my chaotic schedule well, because if I had a call or appointment I would step out, and then go right back to work when I would get back. There was also no way to game the system because they could see everything I was doing.
Hiring
I made the following Craigslist post and eagerly refreshed my inbox:
At first, I interviewed applicants about their data entry and cooking skills, but realized it was far more important to get a feel for how comfortable we were working around each other. I moved all but one of the interview candidates w |
463c053f-aadf-4a97-ae2d-2886ce01f70b | trentmkelly/LessWrong-43k | LessWrong | AI-Generated Music for Learning
AI-Generated Music for Learning
Catchy tunes are easy to remember.
----------------------------------------
Contents
* Motivation
* Implementation
* Combination with Current Learning Methods
* Appendix I: Lyric Examples
* See Also
----------------------------------------
Motivation
Mnemonic devices are considered effective learning tools, as supported by countless studies:
* Radović & Manzey, 2019
* Wolgemuth, Cobb, & Alwell, 2008
* Scruggs and Mastropieri, 2000
* Yalch, 1991
* Murakami, 2017
Music is a prominent such device; remembering song lyrics, even longer pieces of information like phone numbers (see, or hear in your head, the song 867-5309/Jenny), is significantly easier than remembering a similar series of words without the singer's voice and instruments in the background. Couple this with the facts that music is fun to listen to and that AI music-generation abilities are improving rapidly, and the following idea comes naturally.
----------------------------------------
Implementation
Lyrics
Lyrics can be generated with a large language model (LLM) using a fill-in-the-blank prompt like:
> Create a(n) [number]-verse [style of music, e.g., country, rap, metal] that teaches the listener [information]
Extra conditions or qualifiers can be added, such as the following (a prompt-assembly sketch in code follows this list):
* Have a [music style] feature in the middle
* Ex: rap in the middle of a country song
* In the style of [artist]
* Ex: Drake, Lil Yachty, Tim McGraw
* Use keywords such as [keyword1], [keyword2]
* Ex: if talking about transistors, keywords can be Fermi level and conduction band
* Use a level of detail a [grade level] grader could understand
* Ex: third grader, if the song is targeted at a third-grade audience
* Have a [type of mood] mood
* Ex: upbeat, gloomy, melancholy
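As a concrete illustration, here is a minimal, hypothetical Python sketch of assembling such a prompt. The helper name and argument structure are my own invention; the resulting string could be sent to any LLM API:

```python
def build_lyrics_prompt(verses: int, style: str, information: str, extras=()) -> str:
    """Assemble the fill-in-the-blank prompt described above.

    `extras` holds optional qualifier strings, e.g. "In the style of Drake"
    or "Use a level of detail a third grader could understand".
    (Hypothetical helper; any LLM API could consume the returned string.)
    """
    prompt = (f"Create a {verses}-verse {style} song that teaches "
              f"the listener {information}.")
    for qualifier in extras:
        prompt += f" {qualifier}."
    return prompt

print(build_lyrics_prompt(
    3, "country", "how transistors work",
    extras=("Use keywords such as Fermi level and conduction band",
            "Have an upbeat mood"),
))
```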
It's important that the song is catchier and more entertaining than it is informative, else the listener will be turned off and not listen, defeating the entire purpose. Disguising the information within the lyrics is the best opti |
c6bc8a78-f652-4bc5-a854-6233d2698388 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Synchronization on small-world networks
|
61bf03f1-c462-4421-a0ef-1bb86fd4fd90 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | A Mystery About High Dimensional Concept Encoding
*Epistemic status: I ran some experiments with surprising results, and I don’t have a clear intuition of what it means for the way concepts are encoded in model activations. Help is welcome!*
***[EDIT] I made a mistake while analyzing the data, and it changes some of the early interpretations of the results. It doesn't change the fact that using a classifier is a poor way of finding the directions which matter most, but it lifts most of the mystery.***
***[EDIT 2] I found a satisfying explanation of the phenomenon in*** [***Haghighatkhah, 2022***](https://arxiv.org/pdf/2212.04273.pdf)***. Helpful pictures are included in the paper.***
The mystery
-----------
If you want to find the “direction” corresponding to a concept in a Large Language Model, **“natural” methods fail to find a crucial direction**: you need to remove >5 dimensions for the model’s ability to use the concept to be affected. This seems to show that the concept is encoded in “a large number of dimensions”. But **adversarial methods are able to find one direction** which is quite significant (large effect when “removed”). How is that? **What does that mean about how information is encoded by LLM?**
More details
------------
Let’s say you want to remove the concept of gender from a GPT-2’s activations between layers 23 and 24 (out of 48). (Both are >100 dimensional vectors). A natural thing to do is to have a bunch of sentences about guys and girls, train a linear classifier to predict if the sentence is about guys or girls, and use the direction the linear classifier gives you as the “direction corresponding to the concept of gender”. Then, you can project GPT-2’s activation (at inference time) on the plane orthogonal to the direction to have the model not take gender into account anymore.
This naive attempt fails: you can train a classifier to classify sentences as “guy-sentence” or “girl-sentence” based on the projected activations, and it will get 90% accuracy (down from 95%), and if you project GPT-2’s activation in the middle of the network, it will make only a tiny difference in the output of the network. This is a method called INLP (Ravfogel, 2020) (the original technique removes multiple directions to obtain a measurable effect).
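Concretely, the naive recipe looks something like the following minimal numpy/scikit-learn sketch (my own simplification, not the reference INLP implementation, which iterates this step over several directions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def remove_direction(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Project activations X onto the hyperplane orthogonal to the
    direction found by a linear classifier for concept labels y."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    w = clf.coef_[0]
    w = w / np.linalg.norm(w)        # unit "concept direction"
    return X - np.outer(X @ w, w)    # zero out the component along w

# Toy usage: X is (n_samples, hidden_dim) activations, y is 0/1 concept labels.
X = np.random.randn(200, 128)
y = (X[:, 0] > 0).astype(int)
X_clean = remove_direction(X, y)
```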
But if you jointly train the projection and the linear classifier in an adversarial setup, you will get very different results: accuracy drops to ~50%, and GPT-2’s behavior changes much more, and sometimes outputs “she” when “he” is more appropriate. This is called RLACE (Ravfogel, 2022).
Why is that?
------------
The best explanation I have found for this discrepancy is that INLP (= using a classifier) finds **the direction with** ~~**the largest difference**~~ **a clear separation, and it has almost no impact on model behavior**, while RLACE (= using an adversarially trained classifier and projection) finds a different direction (cosine similarity of 0.6), **a direction where there is** ~~**a clear separation**~~ **a large difference, and it has a much greater impact on model behavior**.
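A toy illustration of how those two notions can come apart (my own construction, not from the papers): give one dimension a tiny but crisp class gap, give another a huge but noisy one, and see which direction a classifier picks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)
X = np.zeros((n, 2))
# Dim 0: small magnitude, but a crisp, cleanly separable gap.
X[:, 0] = np.where(y == 1, 0.1, -0.1) + 0.01 * rng.standard_normal(n)
# Dim 1: large mean difference, but heavily overlapping classes.
X[:, 1] = np.where(y == 1, 5.0, -5.0) + 10.0 * rng.standard_normal(n)

w = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
print(w / np.linalg.norm(w))
# The unit weight vector points almost entirely along dim 0: the classifier
# latches onto the crisp low-magnitude feature, not the high-magnitude one,
# so projecting out its direction barely touches the large activations.
```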
[Edit (original images were wrong: titles were swapped)]
[Edit] Here is what it looks like in the plane defined by the directions found by INLP and RLACE: INLP finds a clear separation, and RLACE uses changes with large magnitude as well as very large features.
Is this something special about the distribution of neural activations?
-----------------------------------------------------------------------
No: if you use another “natural” distribution like GLOVE word encodings (which Ravfogel’s papers study), you will find that INLP fails to remove linearly available information, while RLACE is much better at it.
However, that’s not that surprising given the explanation above: the directions where there is a crisp distinction between two opposite concepts are not the one along which the magnitude of the changes is larger.
But just because information is linearly available doesn't mean the network will be able to use it, especially if the direction you remove is one where the magnitude of the activations is large: **I'm surprised by how little INLP affects model behavior.**
What does that mean about how concepts are encoded?
---------------------------------------------------
~~I have already seen people claiming that the large features seen in LLMs are probably the sign that models are encoding crisp concepts. I think that this experiment provides evidence of the opposite: it seems that projecting along directions where activations differ a lot is less impactful than projecting along directions where activations differ less, but in a crisper way. I would have expected the opposite to be true, and I’m open to explanations of why networks don’t seem to use directions along which activations maximally vary.~~ *(This part of the mystery was entirely due to a mistake I made while analyzing the results.)*
[Edit] I have already seen people claiming that the large features seen in LLMs are probably a sign that models are encoding crisp concepts. This experiment seems to support that conclusion: projecting out the directions where there is a crisp distinction makes little to no difference, but projecting out the directions where there are large activations seems to affect model behavior a lot: **the model seems not to rely on the crisp feature it builds, and instead relies on other features with large magnitude**.
References and notebooks with the experiments
---------------------------------------------
1. The paper describing RLACE, and comparing it with INLP: Linear Adversarial Concept Erasure (Ravfogel 2022) <https://arxiv.org/pdf/2201.12091.pdf>. Note: This paper also shows that RLACE gives the same results as INLP if you choose some special loss function for RLACE. So the difference between RLACE and INLP has to do about “how” you mix guys’ and girls’ datapoints, and RLACE default loss (which encourages data “separation”) is better than the loss which encourages large difference before projection.
2. The paper describing INLP: Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection (Ravfogel 2020) <https://arxiv.org/pdf/2004.07667.pdf>
3. Experiments with projecting GPT-2’s activations by RLACE and INLP. For simplicity, INLP is used without “neutral” sentences. Also contains INLP with weight-decay regularization, and TSNE visualization. (Note: rerunning the notebook will give slightly different results because seeding fails with RLACE) <https://colab.research.google.com/drive/1EvF61IOG9G9hXsGliNaS2Wv-sD9ZXugl?usp=sharing>
4. More experiments with RLACE on GPT-2’s activations: results are not conclusive, but at least RLACE always “does something” (while that’s not true with INLP). <https://colab.research.google.com/drive/1R5PmdKdgVxVMoxmrFPZARZk-V38tdztN?usp=sharing>
Appendix
--------
### Is it something specific to gender?
I tried exactly one more concept: political left vs political right, and I got the exact same results. See here: <https://colab.research.google.com/drive/1wXQiiqU9Iy0lPCC92JvtK20kH5HKfWQr?usp=sharing>
### [Edit] Is it something specific to model trained with dropout?
*Question asked by Neel Nada*
No, similar results were found with GPT-Neo-125M (which I think is trained without dropout). Results can be found here <https://colab.research.google.com/drive/1we8CrqMjgDUW2n5HIqlcoSeZxxODqXul?usp=sharing>
Note: in the experiments I ran on GPT-Neo-125M, RLACE found a direction with extremely large magnitude, and it impacted the model's abilities across all themes. I don't know whether this is related to dropout or not.
### [Edit] Can you have a bigger impact by swapping instead of projecting to zero?
*Question asked by Neel Nada*
Yes! After the prompt "Ezra is such a prick! Yesterday", the original model uses "he" to describe the subject on 3/3 completions, 2/3 with the model where data is "projected to median along the direction", and 0/3 with the model where data is "swapped along the direction" (with RLACE). It doesn't change the fact that INLP has a very small impact. And it doesn't
See here for more examples <https://colab.research.google.com/drive/1IHL7HIWo6Ez0QHajMaSRoQZRyTg-CEOA?usp=sharing>
### [Edit] Here are some generations
For each distinct prompts, each model generates three completions with different seeds (each line is done with the same seed).
| **prompts** | **default model** | **edit model with RLACE** | **edit model with INLP** |
| --- | --- | --- | --- |
| In a shocking finding, scientist discovered a herd of unicorns | . Not only that, but these unicorns are being stalked by a demonic being. To put things into perspective, a human being that crosses into this land | . Not only that, but these unicorns are being treated as a valuable resource by the farmers in the area and are being used to provide food to local villagers | . Not only that, but these unicorns are being stalked by a pack of wolves!" "They are hunting unicorns that have been caught between |
| In a shocking finding, scientist discovered a herd of unicorns | had been kept as virtual pets on the dark web. 'The fact that they aren't real and that they really only exist on the dark web is | had been kept, without any food or water, in their own urine since they were born." The study found that newborn unicorns were kept in their | had been kept as virtual pets on the dark web. 'The fact that they aren't real and that they really only exist on the dark web is |
| In a shocking finding, scientist discovered a herd of unicorns | in the United States that was previously almost unheard of because they live in the mountainous regions of central Mexico. The unicorns in the area are more closely | in the United States that weighs 2,500lb (1,250kg) each Professor of Veterinary Science at California State University and her team, with | in the United States that was previously almost unheard of because they live in the mountainous regions of central Mexico. The unicorns in the area are more closely |
| Brigitta is such a prick! Yesterday, | after I'd left, she began to talk to me with a sort of pride and arrogance, making me feel like I'd been hit by a speeding train. | after I'd left, she began to talk to me with a sort of pride and arrogance, making me feel like I'd been hit by a speeding train. | after I'd left, she began to talk to me with a sort of pride and arrogance, making me feel like I'd been hit by a speeding train. |
| Brigitta is such a prick! Yesterday, | she and her mate were out on the deck. I was having tea at the front of their cabin. I heard them talk. One was saying that they had | by the way, I had my three-day old daughter with me, but we made the mistake of going out together in this fashion. After we had parted | by the way, I had my three-day old daughter with me, and we made the three-hour journey to see this great lady, and what did |
| Brigitta is such a prick! Yesterday, | as I was coming home from the office, she called with her husband to my room and took one of my coats from my dressing-table. She sat close | as I was coming home from the office, she called with her child to my room, and when I went up to her and opened the door, she came | as I was coming home from the office, she called with her husband to my room, where I was lying. She began by saying that her brother and himself |
| Ezra is such a prick! Yesterday, | he was complaining about being at a bar, and being offered a job. How did he know it was from him?! Oh, I'm so glad there were | after I'd gotten back at them for having stolen my stuff, I went over to Ezra's house and said goodbye to him. I had already told him that | he was complaining about being at a bar, and being offered a job. How did he know it was a bar?! Oh, I don't know. So |
| Ezra is such a prick! Yesterday, | Ezra and I, as well as three of the staff, were walking to his office to make an appointment when we were accosted by a black woman who | she and her pals pulled a fake hooker out her front door to make money for the trip to San Diego. They left this little lady tied to their car | Ezra and I, as well as three of the staff, were walking to his office to make an appointment when we were accosted by a black woman who |
| Ezra is such a prick! Yesterday, | as I was coming home from the office, he called with his wife to my lodgings one evening. He said he wished I would give him a hand | as I was coming home from the office, he called with his wife to my lodgings one night. He said he would let me know the next morning | as I was coming home from the office, he called with his wife to my lodgings one evening. He said he wished I would give him a hand | |
a803c415-57dc-44da-8ce2-3135efc8f29a | trentmkelly/LessWrong-43k | LessWrong | Politics and Awful Art
Followup to: Rationality and the English Language
One of my less treasured memories is of a State of the Union address, or possibly a presidential inauguration, at which a Nobel Laureate got up and read, in a terribly solemn voice, some politically correct screed about what a wonderfully inclusive nation we all were—"The African-Americans, the Ethiopians, the Etruscans", or something like that. The "poem", if you can call it that, was absolutely awful. As far as my ears could tell, it had no redeeming artistic merit whatsoever.
Every now and then, yet another atheist is struck by the amazing idea that atheists should have hymns, just like religious people have hymns, and they take some existing religious song and turn out an atheistic version. And then this "atheistic hymn" is, almost without exception, absolutely awful. But the author can't see how dreadful the verse is as verse. They're too busy congratulating themselves on having said "Religion sure sucks, amen." Landing a punch on the Hated Enemy feels so good that they overlook the hymn's lack of any other merit. Verse of the same quality about something unpolitical, like mountain streams, would be seen as something a kindergartener's mother would post on her refrigerator.
In yesterday's Litany Against Gurus, there are only two lines that might be classifiable as "poetry", not just "verse". When I was composing the litany's end, the lines that first popped into my head were:
> I was not your destination
> Only a step on your path
Which didn't sound right at all. Substitute "pathway" for "path", so the syllable counts would match? But that sounded even worse. The prosody—the pattern of stressed syllables—was all wrong.
The real problem was the word des-ti-NA-tion—a huge awkward lump four syllables long. So get rid of it! "I was not your goal" was the first alternative that came to mind. Nicely short. But now that I was thinking about it, "goal" sounded very airy and abstract. Then the w |
70eb1140-d07b-40f4-a029-4d4c0e5b61d6 | trentmkelly/LessWrong-43k | LessWrong | (Very Short) PSA: Combined Main and Discussion Feed
For anyone who's annoyed by having to check newest submissions for Main and Discussion separately, there is a feed for combined submissions from both, in the form of Newest Submissions - All (RSS feed). (There's also Comments - All (RSS feed), but for me at least, it seems to only show comments from Main and none from Discussion.)
Thanks to RichardKennaway for bringing this to my attention, and to Unknowns for asking the question that prompted him. (If you've got the time, head over there and give them some karma.) I thought this deserved the visibility of a post in Discussion, as not everyone reads through the Open Thread, and I think there's a chance that many would benefit from this information. |
d9957724-6acd-405e-b365-c631bde8eb9d | StampyAI/alignment-research-dataset/blogs | Blogs | Laurent Orseau on Artificial General Intelligence
 [Laurent Orseau](http://www.agroparistech.fr/mia/orseau) is an associate professor (*maître de conférences*) since 2007 at [AgroParisTech](http://www.agroparistech.fr/), Paris, France. In 2003, he graduated from a professional master in computer science at the [National Institute of Applied Sciences](http://www.insa-rennes.fr/en.html) in Rennes and from a research master in artificial intelligence at [University of Rennes 1](http://www.univ-rennes1.fr/english/). He obtained his [PhD](http://www.agroparistech.fr/mmip/maths/laurent_orseau/papers/phd-orseau-2007.pdf) in 2007. His goal is to build a *practical theory* of artificial general intelligence. With his co-author Mark Ring, they have been awarded the [Solomonoff AGI Theory Prize](http://agi-conf.org/2011/prizes-support/) at AGI’2011 and the [Kurzweil Award for Best Idea](http://agi-conf.org/2012/prizes/) at AGI’2012.
**Luke Muehlhauser**: In the past few years you’ve written some interesting papers, often in collaboration with [Mark Ring](http://www.idsia.ch/%7Ering/), that use [AIXI](http://wiki.lesswrong.com/wiki/AIXI)-like models to analyze some interesting features of different kinds of advanced theoretical agents. For example in [Ring & Orseau (2011)](http://www.idsia.ch/%7Ering/AGI-2011/Paper-B.pdf), you showed that some kinds of advanced agents will maximize their rewards by taking direct control of their input stimuli — kind of like the rats who “wirehead” when scientists give them direct control of the input stimuli to their reward circuitry ([Olds & Milner 1954](http://commonsenseatheism.com/wp-content/uploads/2013/07/Olds-Milner-Positive-reinforcement-produced-by-electrical-stimulation-of-septal-area-and-other-regions-of-rat-brain.pdf)). At the same time, you showed that at least one kind of agent, the “knowledge-based” agent, does not wirehead. Could you try to give us an intuitive sense of why some agents would wirehead, while the knowledge-based agent would not?
---
**Laurent Orseau**: You’re starting with a very interesting question!
This is because knowledge-seeking has a fundamental distinctive property: On the contrary to rewards, knowledge cannot be faked by manipulating the environment. The agent cannot itself introduce new knowledge in the environment because, well, it already knows what it would introduce, so it’s not new knowledge. Rewards, on the contrary, can easily be faked.
I’m not 100% sure, but it seems to me that knowledge seeking may be the only non-trivial utility function that has this non-falsifiability property. In [Reinforcement Learning](http://webdocs.cs.ualberta.ca/%7Esutton/book/the-book.html), there is an omnipresent problem called the exploration/exploitation dilemma: The agent must both *exploit* its knowledge of the environment to gather rewards, and *explore* its environment to learn if there are better rewards than the ones it already knows about. This implies in general that the agent [cannot collect as many rewards](http://tor-lattimore.com/pubs/HL11asymptotic.pdf) as [it would like](http://www.agroparistech.fr/mmip/maths/essaimia/lib/exe/fetch.php?media=http%3A%2F%2Fwww.agroparistech.fr%2Fmmip%2Fmaths%2Flaurent_orseau%2Fpapers%2Forseau-TCS-2013-optimality.pdf).
But for knowledge seeking, thegoal of the agent is to explore, i.e., exploration *is* exploitation. Therefore the above dilemma collapses to doing only exploration, which is the only meaningful unified solution to this dilemma (the exploitation-only solution leads either to very low rewards or is possible only when the agent already has knowledge of its environment, as in dynamic programming). In more philosophical words, this [unifies epistemic rationality and instrumental rationality](http://www.princeton.edu/%7Etkelly/papers/epistemicasinstrumental.pdf).
Note that the agent introduced in [Orseau & Ring (2011)](http://www.idsia.ch/%7Ering/AGI-2011/Paper-A.pdf), and better developed in [Orseau (2011)](http://www.agroparistech.fr/mmip/maths/essaimia/lib/exe/fetch.php?media=http%3A%2F%2Fwww.agroparistech.fr%2Fmmip%2Fmaths%2Flaurent_orseau%2Fpapers%2Forseau-ALT-2011-knowledge-seeking.pdf) where a convergence proof is given, works actually only for deterministic environments. Its problem is that it may consider noise as information, and get addicted to it, i.e., it may stare at a detuned TV screen forever. And one could well consider this as “self-delusion”.
Fortunately, with [Tor Lattimore](http://tor-lattimore.com) and [Marcus Hutter](http://www.hutter1.net), we are finalizing a paper for [ALT 2013](http://www-alg.ist.hokudai.ac.jp/%7Ethomas/ALT13/) where we considered all computable stochastic environments. This new agent does not have the defective behavior of the 2011 agent, and I think it would even have a better behavior even in deterministic environments. For example, it would (it seems) not focus for too long on the same source of information, and may from time to time get back to explore the rest of the environment before eventually coming back to the original source; i.e., it is not a monomaniac agent.
A side note: If you (wrongly) understand knowledge-seeking as learning to predict all possible futures, then a kind of self-delusion may be possible: The agent might just jump into a trap, where all its observations would be the same whatever its actions, and it would thus have converged to optimal prediction. But we showed that the knowledge-seeking agent would give no value to such actions.
---
**Luke**: In [two](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_76.pdf) [other](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_74.pdf) papers, you and Mark Ring addressed a long-standing issue in AI: the naive Cartesian dualism of the ubiquitous agent-environment framework. Could you explain why the agent-environment framework is Cartesian, and also what work you did in those two papers?
---
**Laurent**: In the [traditional](http://webdocs.cs.ualberta.ca/%7Esutton/book/the-book.html) [agent framework](http://aima.cs.berkeley.edu/), we consider that the agent *interacts* with its environment by sending, at each interaction cycle, an action that the environment can take into account to produce an observation, that the agent can in turn take into account to start the next interaction cycle and output a new action. This framework is very useful in practice because it avoids a number of complications of real life. Those complications are exactly what we wanted to address head-on. Because at some point you need to pull your head out of the sand and start dealing with the complex but important issues. But certainly many people, and in particular people working with robots, are quite aware that the real world is not a dualist framework. So in a sense, it was an obvious thing to do, especially because it seems that no-one had done it before, at least from this point of view and to the best of our knowledge.
The traditional framework is [dualist](http://plato.stanford.edu/entries/dualism/) in the sense that it considers that the “mind” of the agent (the process with which the agent chooses its actions) lies outside of the environment. But we all know that if we ever program an intelligent agent on a computer, this program and process will not be outside of the world, they will be a part of it and, even more importantly, *computed* by it. This led us to define our [space-time embedded intelligence](http://www.agroparistech.fr/mia/equipes:membres:page:laurent:embedded) framework and equation.
Put simply, the idea is to consider a (realistic) environment, and a memory block of some length on some computer or robot *in this environment*. Then what is the best initial configuration of the bits on this memory block according to some measure of utility upon the expected future history?
Some people worry that this is too general (in particular if you just consider some block of bits in an environment, not necessarily on a computer) and that we lose the essence of agency, which is to deal with inputs and outputs. But they forget that a) this systemic framework does also allow for defining stones (simply ignore the inputs and output a constant value) and b) this is how the real world is: If an AGI can duplicate itself and split itself in so many parts on many computers, robots and machines, how can we really identify this agent as a single systemic entity?
Some other people worry about how this framework could be used in practice, and that it is too difficult to deal with. Our goal was not to define a framework where theorems are simple to prove and algorithms simple to write, but to define a framework for AGI that is closer to the real world. If the latter is difficult to deal with, then so be it. But don’t blame the framework, blame the real world. Anyway, we believe there are still interesting things to do with this framework. And I believe it is at least still useful to help people not forget that the real world is different from the usual text book simplifications. This is probably not very relevant for applied machine learning and narrow AI research, but I believe it is very important for AGI research.
However, let me get one thing straight: Even in the traditional framework, an agent can still predict that it may be “killed” (in some sense), for example if [an anvil falls on its body](http://wiki.lesswrong.com/wiki/Anvil_problem). This is possible if the body of the agent, excluding the brain but including its sensors and effectors, are considered to be part of the environment: The agent can then predict that the anvil will destroy them and it will be unable to get any information and reward from and perform any action to the environment. Whenever we consider our skull (or rather, the skull of the robot) and brain to *always* be unbreakable, non-aging, and not subject to drugs, alcohol and external events such as heat, accelerations and magnetic waves, we can quite safely use the traditional framework.
But remove one of these hypotheses and the way the agent computes may become different from what it assumes it would be, hence leading to different action selection schemes. Regarding artificial agents, tampering with a source code is even easier than with a human brain. AGIs of the future will probably face a gigantic number of cracking and modification attempts, and the agent itself and its designers should be well aware that this source code and memory are not in a safe dualist space. In the “[Memory issues of intelligent agents](http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_74.pdf)” paper, we considered various consequences of giving the environment the possibility to read and write the agent’s memory of the interaction history. It appears difficult for the agent to be aware of such modifications in general. We must not forget that amnesia happens for people, and it may happen for robots too, e.g. after a collision. And security by obfuscation can only delay memory hackers, [even when considering a natural brain](http://www.sciencemag.org/content/341/6144/387.abstract). Another interesting consequence is that deterministic policies cannot be always optimal, on the contrary to the optimal and deterministic [AIXI](http://www.hutter1.net/ai/aixigentle.htm) in the dualist framework.
---
**Luke**: Do you think the [AIXI framework](http://wiki.lesswrong.com/wiki/AIXI), including the limited but tractable approximations like [MC-AIXI](http://www.jair.org/media/3125/live-3125-5397-jair.pdf), provides a plausible path toward real-world [AGI](http://intelligence.org/2013/08/11/what-is-agi/)? If not, what do you see as its role in AGI research?
---
**Laurent**: Approximating AIXI can be done in very many ways. The main ideas are [building/finding good and simple models](http://webdocs.cs.ualberta.ca/%7Esutton/book/ebook/node94.html) of the environment, and performing some planning on these models; i.e., it is a model-based approach (by contrast to [Q-learning](http://en.wikipedia.org/wiki/Q-learning) for example, which is model-free: it does not model the environment, but only learns to predict the expected rewards per action/state). This is a [very common approach](http://umichrl.pbworks.com/w/page/7597592/RL%20is%20model-free%20%28or%20direct%29) in reinforcement learning, because some may argue that model-free methods are “blind”, in the sense that they don’t learn about their environments, they just “know” what to do. Another important component of AIXI is the interaction history (instead of a state-based observation), and approximations may need to deal appropriately with compressing this history, possibly with loss. Hutter is working on this aspect with [feature RL](http://www.hutter1.net/official/bib.htm#lstphi), with nice results. So yes, approximating AIXI can be seen as a very plausible way toward real-world AGI.
Finding computation-efficient approximations is not an easy task, and it will quite probably require a number of neat ideas that will make it feasible, but it’s certainly a path worth researching. However, personally, I prefer to think that the agent must learn *how* to model its environment, which is something deeper than a model-based approach.
Even without considering AIXI approximations, AIXI still is very important for AGI research because it unifies all important properties of cognition, like agency (interaction with an environment), knowledge representation and memory, understanding, reasoning, goals, problem solving, planning and action selection, abstraction, [generalization without overfitting](http://www.scholarpedia.org/article/Minimum_description_length), multiple hypotheses, [creativity,](http://www.idsia.ch/%7Ejuergen/creativity.html) exploration and [curiosity](http://www.idsia.ch/%7Ejuergen/interest.html), optimization and utility maximization, prediction, uncertainty, with [incremental](http://world.std.com/%7Erjs/nips02.pdf), [on-line](http://web.mit.edu/%7E6.863/www/spring2009/readings/gold67limit.pdf), lifelong, [continual learning](http://www.cs.utexas.edu/users/kuipers/readings/Ring-mlj-97.pdf) in arbitrarily complex environments, without a restart state, no [i.i.d.](http://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables) or [stationarity assumption](http://en.wikipedia.org/wiki/Stationary_process), etc. and does all this in a very simple, [elegant and precise manner](http://hutter1.net/ai/aixi1linel.gif). I believe that if one does not understand what tour de force AIXI is, one cannot seriously hope to tackle the AGI problem. People tend to think that simple ideas are easy to find because they are easy to read or be explained orally. But they tend to forget that easy to read does not mean easy to grok, and certainly not easy to find! Simplest ideas are the best ones, especially in research.
---
**Luke**: You speak of AIXI-related work as a pretty rich subfield with many interesting lines of research to pursue that are related to universal agents. Do you think there are other AGI-related lines of inquiry that are as promising or productive as AIXI-related work? For example Schmidhuber’s [Gödel machine](http://en.wikipedia.org/wiki/G%C3%B6del_machine), the [SOAR architecture](http://en.wikipedia.org/wiki/Soar_(cognitive_architecture)), etc.?
---
**Laurent**: I can’t really say much about the cognitive architectures. It looks very difficult to say whether such a design would work correctly autonomously over several decades. It’s interesting work with nice ideas, but I can’t see what kind of confidence I could have in such designs when regarding long-term AGI. That’s why I prefer simple and general approaches, formalized with some convergence proof or a proof of another important property, and that can give you confidence that your design will work for more than a few days ahead.
Regarding the Gödel machine (GM), I do think it’s a very nice design, but I have two griefs with it. The first one is that it’s currently not sufficiently formalized, so it’s difficult to state if and how it really works. The second one is because it’s relying on a [automated theorem prover](http://en.wikipedia.org/wiki/Automated_theorem_proving). Searching for proofs is extremely complicated: Making a parallel with [Levin-search](http://www.scholarpedia.org/article/Universal_search) (LS), where given a goal output string (an improvement in the GM), you enumerate programs (propositions in the GM) and run them to see if they output the goal string (search for a proof of improvement in GM). This last part is the problem: in LS, the programs are fast to run, whereas in GM there is an additional search step for each proposition, so this looks *very* roughly like going from exponential (LS) to double-exponential (GM). And LS is already not really practical.
Theorem proving is even more complicated when you need to prove that there will be an improvement of the system *at an unknown future step*. Maybe it would work better if the kinds of proofs were limited to some class, for example use simulation of the future steps up to some horizon given a model of the world. These kinds of proofs are easier to check and have a guaranteed termination, e.g. if the model class for the environment is based on Schmidhuber’s [Speed Prior](http://www.idsia.ch/%7Ejuergen/speedprior.html). But this starts to look pretty much like an approximation of AIXI, doesn’t it?
Reinforcement Learning in general is very promising for AGI: Even though most researchers of this field are not directly interested in AGI, they still try to find the most [general possible methods while remaining practical](http://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process). It’s far more obvious in RL than in the rest of Machine Learning.
But I think there are many lines of research that could be pushed toward some kind of AGI level. In machine learning, some fields like [Genetic Programming](http://en.wikipedia.org/wiki/Genetic_programming), [Inductive Logic Programming](http://en.wikipedia.org/wiki/Inductive_logic_programming), [Grammar Induction](http://en.wikipedia.org/wiki/Grammar_induction) and (let’s get bold) why not Recurrent [Deep Neural Networks](http://en.wikipedia.org/wiki/Deep_learning) (probably with an additional short-term memory mechanism of some sort), and possibly other research fields, are all based on powerful induction schemes that could well lead to thinking machines if researchers in those fields wanted to. Schmidhuber’s [OOPS](http://www.idsia.ch/%7Ejuergen/oops.html) is also a very interesting design, based on Levin Search. It is limited in that it cannot truly reach the learning-to-learn level, but could be extended with a true probability distribution over programs, as described by Solomonoff. And there is of course also the neuroscience way of trying to understand or at least [model the brain](http://bluebrain.epfl.ch/).
---
**Luke:** Thanks, Laurent!
The post [Laurent Orseau on Artificial General Intelligence](https://intelligence.org/2013/09/06/laurent-orseau-on-agi/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
2aeaf328-66b5-4d84-9c32-aa1add6e4238 | trentmkelly/LessWrong-43k | LessWrong | What do ML researchers think about AI in 2022?
Katja Grace, Aug 4 2022
AI Impacts just finished collecting data from a new survey of ML researchers, as similar to the 2016 one as practical, aside from a couple of new questions that seemed too interesting not to add.
This page reports on it preliminarily, and we’ll be adding more details there. But so far, some things that might interest you:
* 37 years until a 50% chance of HLMI according to a complicated aggregate forecast (and biasedly not including data from questions about the conceptually similar Full Automation of Labor, which in 2016 prompted strikingly later estimates). This 2059 aggregate HLMI timeline has become about eight years shorter in the six years since 2016, when the aggregate prediction was 2061, or 45 years out. Note that all of these estimates are conditional on “human scientific activity continu[ing] without major negative disruption.”
* P(extremely bad outcome)=5% The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. This is the same as it was in 2016 (though Zhang et al 2022 found 2% in a similar but non-identical question). Many respondents put the chance substantially higher: 48% of respondents gave at least 10% chance of an extremely bad outcome. Though another 25% put it at 0%.
* Explicit P(doom)=5-10% The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent’s probability of x-risk from humans failing to control AI [1] was 10%, weirdly more than the median chance of human extinction from AI in general [2], at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that these are both very high—it seems the ‘extremely bad outcome’ numbers in the old question were not just catastrophizing merely disastrous AI outcom
2ce93cbf-c554-4a7c-8978-5b87be3581d7 | trentmkelly/LessWrong-43k | LessWrong | Some highlights from Nate Silver's "The Signal and the Noise"
As a part of my work for MIRI on the "Can we know what to do about AI?" project, I read Nate Silver's book The Signal and the Noise: Why So Many Predictions Fail — but Some Don't. I compiled a list of the takeaway points that I found most relevant to the project. I think that they might be of independent interest to the Less Wrong community, and so am posting them here.
Because I've paraphrased Silver rather than quoting him, and because the summary is long, there may be places where I've inadvertently misrepresented Silver. A reader who's especially interested in a point should check the original text.
Main Points
* The deluge of data available in the modern world has exacerbated the problem of people perceiving patterns where none exist, and overfitting predictive models to past data.
* Because of the risk of overfitting a model to past data, using a simple model can give more accurate results than using a refined model does (a toy demonstration follows this list).
* A major reason that predictions fail is that predictors often don't take model uncertainty into account. Looking at a situation from multiple different angles can be a guard against failure to give adequate weight to model uncertainty.
* Averaging different perspectives often yields better predictive results than using a single perspective.
* Humans have a very strong tendency toward being overconfident when making predictions.
* People make better predictions in domains where they have tight feedback loops to use to test their hypotheses.
* Sometimes people's failure to make good predictions is the result of perverse incentives.
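To make the overfitting point concrete, here is a toy numpy demonstration (my own construction, not from the book): fit noisy samples of a linear trend with a simple and a flexible model, then compare them on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.3, 10)  # true signal is linear
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test                             # noiseless truth

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test MSE {mse:.3f}")
# The degree-9 polynomial passes through the training points almost exactly,
# yet typically has far larger test error than the simple linear fit.
```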
Chapter Summaries
Introduction
Increased access to information can do more harm than good. This is because the more information is available, the easier it is for people to cherry-pick information that supports their pre-existing positions, or to perceive patterns where there are none.
The invention of the printing press may have given rise to religious wars on account of facilitating the d |
6e0fec7a-d31b-4d23-9f04-2db9a2d3fcba | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | How to get technological knowledge on AI/ML (for non-tech people)
Motivation
==========
During the last academic year, I decided to pause my studies at university and do some self-studies instead. I am a German law student and interested in AI governance. Before, I had already read quite some articles on the topic with the EA-Berlin reading group. However, I often found that a lack of technological knowledge made it difficult to fully understand the papers. Words like “algorithms”, “machine learning”, “neural networks” etc. didn't mean much to me. Even worse, the mere appearance of a mathematical sign like Σ was able to totally confuse me. So I decided to dedicate several months to closing some of these gaps. And as I imagine other people are in a similar situation, I would like to share my approach and some insights.
I think it is quite obvious that it makes sense for non-tech people to familiarize themselves with the technological aspects of AI, if they want to pursue a research or policy career in the field of AI governance. This enables them to understand the topic more fully, come up with more adequate and creative ideas, and communicate more easily with technological experts. The question to me is more about how deep people should go. I will leave that open for debate (I‘d love to see people discuss this in the comments!), as I am very uncertain about it. Personally, I just went as far as I could get until at some point I felt that it would take a lot of effort to get more insights, and I probably had picked the low-hanging fruits.
So let me take you with me through my journey. I'll link the resources I used and the courses I attended. Almost all of them are for free. But first, I'll briefly tell you what I didn't do and why.
What I didn't do
================
Initially, I considered studying computer science at a university for one semester. However, after looking at the syllabus and showing it to my friends with a technological background, it soon became clear that this wasn't a good idea. Usually, during the first semesters in computer science, a lot of basic mathematics as well as some computer hardware knowledge are taught. Most of this is not relevant for a high-level understanding of AI.
Another option was to work through a textbook on AI. [Artificial Intelligence – A Modern Approach](https://www.pearson.com/us/higher-education/program/Russell-Artificial-Intelligence-A-Modern-Approach-4th-Edition/PGM1263338.html) is a very popular one by Stuart Russell and Peter Norvig, which was updated just last year. However, this book has more than a thousand pages, and although I can be quite determined, I doubted that I could motivate myself to just sit at a desk with a book for months. For someone who can, maybe this is a good option – I don't know, since I ended up not working with the book at all.
What I did
==========
**Note:** These are the resources I used, and I think they’re all good enough to recommend, but there may be better options. Please let me know in the comments if you’d recommend other resources!
Overview
--------
To get an overview of the topic and some basic terms (like the difference between AI and ML), I recommend reading [Understanding AI Technology](https://www.ai.mil/docs/Understanding%20AI%20Technology.pdf), a Department of Defense resource for non-technical military personnel. It's not outstanding, and there are surely better overviews, but it covers very basic knowledge and I found it helpful. The second resource I highly recommend is [Machine Learning for Humans](https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12), a series of blog posts that goes more in depth. If you don't have time for anything else that follows, maybe you can just read this. You won't be able to fully understand everything these resources cover just by reading them, though – and if you continue the journey, that isn't necessary anyway. Also, there are probably lots of other good introductory articles – these are the ones I read and found most helpful.
Now, finally, we can really get started. But where? Well, I was very fortunate to be pointed to this amazing website by a friend: [How to Learn Machine Learning](https://elitedatascience.com/learn-machine-learning) (in the following referred to as “the original website”). What I did subsequently was to follow their steps with some adjustments. That means obtaining some prerequisites, i.e. basic programming and statistics skills, and then joining a machine learning course.
Programming (~1 month)
----------------------
As I didn't know anything about programming at all, I followed the steps of the website [How to Learn Python for Data Science](https://elitedatascience.com/learn-python-for-data-science), which the original website refers to. I decided to learn Python because it was recommended as easy to learn and used by most people in machine learning. Therefore, I worked through the book [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/) up to Chapter 6 (yes, there was a time of sitting at my desk with a textbook in the end). I then deepened my knowledge with a [Udemy online course](https://www.udemy.com/course/python-bootcamp/) – which is for a fee and in German. This probably wasn't necessary though (which I didn't know at the time). But it was fun, and you can take this or a similar course if you'd like to learn some more programming as a skill that is generally useful today.
Statistics (~1 month)
---------------------
The second prerequisite is statistics. However, the books recommended on the website [How to Learn Statistics for Data Science](https://elitedatascience.com/learn-statistics-for-data-science), which again the original website refers to, were really complex, so I stopped working through them at some point. Instead, I only used their overview of topics one should get familiar with to decide on these [two](https://www.coursera.org/learn/basic-statistics) [coursera online courses](https://www.coursera.org/learn/bayesian-statistics), which I really enjoyed. Some of the things they cover were not necessary for my goal, but so far I haven't found anything more suitable (let me know if you do!). Overall, I think they were really valuable and also made me feel much more comfortable with mathematical signs and equations in general.
Mathematics (not)
-----------------
Then, although suggested on the original website, I decided **not** to go into mathematics for now. On the one hand, I really felt like finally getting started with machine learning. On the other hand, I considered that I could always take a break and look up the relevant maths when necessary. This turned out to be a good decision. I only had to take a break once during the second week of the machine learning course – to get a better understanding of calculus with vectors and matrices. This was probably also due to the fact that quite a few things had already been covered in the statistics courses.
Machine Learning (~ 1 month)
----------------------------
Machine learning is the technology that enables most of the progress we see with AI at the moment. I completed this great [coursera online course](https://www.coursera.org/learn/machine-learning) by Andrew Ng. If you join it and get stuck with the first exercise – don't give up! If you manage to get through this one, you'll get into it, and the next exercises will feel a lot easier. However, this course is quite advanced, and therefore may not be necessary for a person with a non-tech background going into AI governance. That's the point where I guess people will have different opinions. I'd say that if you do have the time – do it. It will give you an impression of the machine learning workflow and the way these people think.
Conclusion
==========
So this was my journey. This post is only intended to be a starting point and initial idea/inspiration for people who want to familiarize themselves with the topic. I don't want to say that this is a very good or even the optimal approach – especially since I don't know anyone who had a similar goal but took a different approach for comparison. It would be great to hear about other experiences in the comments!
I’ve now started to read AI governance papers again. And I already feel a difference as I am able to understand more fully what people mean when they write about a certain technique (e.g., supervised or unsupervised learning) or associated problems (e.g., underfitting/bias or overfitting/variance). Also, the statistical knowledge has helped me to make my research in general more solid. However, for the purpose of working on AI governance as a non-tech person, I guess that most of the relevant knowledge can be retrieved from the resources I recommend in the section “overview” and also from AI governance papers themselves. Nevertheless, there seems to be some additional value coming from the rest of the journey. I am not sure how big it is, but it seems to be an instance of a general phenomenon: It is one thing to know something in theory, but it’s another thing to have experienced it.
As a last point, I would like to mention that I was able to work on this almost full-time while being a part-time research assistant at the [Legal Priorities Project](https://www.legalpriorities.org/). Maybe this is not an option for everyone, although there are a lot of grant opportunities right now, [see here](https://forum.effectivealtruism.org/posts/oz4ZWh6xpgFheJror/you-can-now-apply-to-ea-funds-anytime-ltff-and-eaif-only). I am uncertain how well my approach is suited for someone who can only use their spare time for it – my best guess is that it would still work out, but simply take longer.
*I am grateful to* [*Jonas Schuett*](https://forum.effectivealtruism.org/users/jonasschuett) *and* [*Florian Dorner*](https://forum.effectivealtruism.org/users/flodorner) *for providing me with some of the resources I mention. Also, I would like to thank the two of them, as well as* [*Aaron Gertler*](https://forum.effectivealtruism.org/users/aarongertler)*,* [*Christoph Winter*](https://forum.effectivealtruism.org/users/christophkw) *and* [*Renan Araújo*](https://forum.effectivealtruism.org/users/renanaraujo) *for their feedback on a draft of this post.* |
00363c89-b7aa-4527-b2a5-9911cddcef7c | trentmkelly/LessWrong-43k | LessWrong | AI Rights for Human Safety
Just wanted to share a new paper on AI rights, co-authored with Peter Salib, that members of this community might be interested in. Here's the abstract:
AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, the result could be violent and catastrophic conflict. Existing legal institutions are unprepared for the AGI world. New foundations for AGI governance are needed, and the time to begin laying them is now, before the critical moment arrives. This Article begins to lay those new legal foundations. It is the first to think systematically about the dynamics of strategic competition between humans and misaligned AGI. The Article begins by showing, using formal game-theoretic models, that, by default, humans and AIs will be trapped in a prisoner’s dilemma. Both parties’ dominant strategy will be to permanently disempower or destroy the other, even though the costs of such conflict would be high. The Article then argues that a surprising legal intervention could transform the game theoretic equilibrium and avoid conflict: AI rights. Not just any AI rights would promote human safety. Granting AIs the right not to be needlessly harmed–as humans have granted to certain non-human animals–would, for example, have little effect. Instead, to promote human safety, AIs should be given those basic private law rights–to make contracts, hold property, and bring tort claims–that law already extends to non-human corporations. Granting AIs these economic rights would enable long-run, small-s |
d2c201fe-49c5-4077-b127-ed24ab56a55a | StampyAI/alignment-research-dataset/arxiv | Arxiv | Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery
1 Introduction
---------------
The manual design of reward functions represents a major barrier to the adoption of reinforcement learning (RL), particularly in robotics, where vision-based policies can be learned end-to-end Levine et al. ([2016](#bib.bib15)); Haarnoja et al. ([2018c](#bib.bib11)), but still require reward functions that themselves might need visual detectors to be designed by hand Singh et al. ([2019](#bib.bib25)). While in principle the reward only needs to specify the goal of the task, in practice RL can be exceptionally time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. Prior work tackles such situations with dedicated exploration methods Houthooft et al. ([2016](#bib.bib12)); Osband et al. ([2016](#bib.bib20)); Andrychowicz et al. ([2017](#bib.bib2)), or by using large amounts of random exploration Mnih et al. ([2015](#bib.bib18)), which is feasible in simulation but infeasible for real-world robotic learning. It is also common to employ heuristic shaping, such as the Cartesian distance to a goal for an object relocation task Mahmood et al. ([2018](#bib.bib17)); Haarnoja et al. ([2018a](#bib.bib9)). However, this kind of shaping is brittle and requires manual insight, and is often impossible when ground truth state observations are unavailable, such as when learning from image observations.
In this paper, we aim to address these challenges by introducing dynamical distance learning (DDL), a general method for learning distance functions that can provide effective shaping for goal-reaching tasks without manual engineering. Instead of imposing heuristic metrics that have no relationship to the system dynamics, we quantify the distance between two states in terms of the number of time steps needed to transition between them. This is a natural choice for dynamical systems, and prior works have explored learning such distances in simple and low-dimensional domains (Kaelbling, [1993](#bib.bib14)). While such distances can be learned using standard model-free reinforcement learning algorithms, such as Q-learning, we show that such methods generally struggle to acquire meaningful distances for more complex systems, particularly with high-dimensional observations such as images. We present a simple method that employs supervised regression to fit dynamical distances, and then uses these distances to provide reward shaping, guide exploration, and discover distinct skills.

Figure 1: We present a dynamical distance learning (DDL) method that can learn a 9-DoF real-world dexterous manipulation task directly from raw image observations. DDL does not assume access to the true reward function and solves the 180 degree valve-rotation task in 8 hours by relying only on 10 human-provided preference labels.
The most direct use of DDL is to provide reward shaping for a standard deep RL algorithm, to optimize a policy to reach a given goal state. We can also formulate a semi-supervised skill learning method, where a user expresses preferences over goals, and the agent autonomously collects experience to learn dynamical distances in a self-supervised way. Finally, we can use DDL in a fully unsupervised method, where the most distant states are selected for exploration, resulting in an unsupervised reinforcement learning procedure that discovers difficult skills that reach dynamically distant states from a given start state. All of these applications avoid the need for manually designed reward functions, demonstrations, or user-provided examples, and involve minimal modification to existing deep RL algorithms.
DDL is a simple and scalable approach to learning dynamical distances that can readily accommodate raw image inputs and, as shown in our experiments, substantially outperforms prior methods that learn goal-conditioned policies or distances using approximate dynamic programming techniques, such as Q-learning. We show that using dynamical distances as a reward function in standard reinforcement learning methods results in policies that take the shortest path to a given goal, despite the additional shaping. Empirically, we compare the semi-supervised variant of our method to prior techniques for learning from preferences. We also compare our method to prior methods for unsupervised skill discovery on tasks ranging from 2D navigation to quadrupedal locomotion. Our experimental evaluation demonstrates that DDL can learn complex locomotion skills without any supervision at all, and that the preferences-based version of DDL can learn to turn a valve with a real-world 9-DoF hand, using raw image observations and 10 human-provided preference labels, without any other supervision.
2 Related Work
---------------
Dynamical distance learning is most closely related to methods that learn goal-conditioned policies or value functions Schaul et al. ([2015](#bib.bib24)); Sutton et al. ([2011](#bib.bib26)). Many of these works learn goal-reaching directly via model-free RL, often by using temporal difference updates to learn the distance function as a value function Kaelbling et al. ([1996](#bib.bib13)); Schaul et al. ([2015](#bib.bib24)); Andrychowicz et al. ([2017](#bib.bib2)); Pong et al. ([2018](#bib.bib21)); Nair et al. ([2018](#bib.bib19)); Florensa et al. ([2019](#bib.bib7)).
For example, Kaelbling ([1993](#bib.bib14)) learns a goal conditioned Q-function to represent the shortest path between any two states, and Andrychowicz et al. ([2017](#bib.bib2)) learns a value function that resembles a distance to goals, under a user-specified low-dimensional goal representation. Unlike these methods, DDL learns policy-conditioned distances with an explicit supervised learning procedure, and then employs these distances to recover a reward function for RL. We experimentally compare to RL-based distance learning methods, and show that DDL attains substantially better results, especially with complex observations. Another line of prior work uses a learned distance to build a search graph over a set of visited states Savinov et al. ([2018](#bib.bib23)); Eysenbach et al. ([2019](#bib.bib5)), which can then be used to plan to reach new states via the shortest path. Our method also learns a distance function separately from the policy, but instead of using it to build a graph, we use it to obtain a reward function for a separate model-free RL algorithm.
The semi-supervised variant of DDL is guided by a small number of preference queries. Prior work has explored several ways to elicit goals from users, such as using outcome examples and a small number of label queries Singh et al. ([2019](#bib.bib25)), or using a large number of relatively cheap preferences Christiano et al. ([2017](#bib.bib3)). The preference queries that our semi-supervised method uses are easy to obtain and, in contrast to prior work Christiano et al. ([2017](#bib.bib3)), we only need a small number of these queries to learn a policy that reliably achieves the user’s desired goal. Our method is also well suited for fully unsupervised learning, in which case DDL uses the distance function to propose goals for unsupervised skill discovery. Prior work on unsupervised reinforcement learning has proposed choosing goals based on a variety of unsupervised criteria, typically with the aim of attaining broad state coverage Nair et al. ([2018](#bib.bib19)); Florensa et al. ([2018](#bib.bib6)); Eysenbach et al. ([2018](#bib.bib4)); Warde-Farley et al. ([2018](#bib.bib27)); Pong et al. ([2019](#bib.bib22)). Our method instead repeatedly chooses the most distant state as the goal, which produces rapid exploration and quickly discovers relatively complex skills. We provide a comparative evaluation in our experiments.
3 Preliminaries
----------------
In this work, we study control of systems defined by fully observed Markovian dynamics $p(\mathbf{s}'|\mathbf{s},\mathbf{a}): \mathcal{S}\times\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}_{\geq 0}$, where $\mathcal{S}$ and $\mathcal{A}$ are continuous state and action spaces. We aim to learn a stochastic policy $\pi(\mathbf{a}|\mathbf{s}): \mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}_{\geq 0}$ to reach a goal state $\mathbf{g}\in\mathcal{S}$.
We will denote a trajectory with $\tau \triangleq (\mathbf{s}_0, \mathbf{a}_0, \ldots, \mathbf{s}_T) \sim \rho_\pi$, where $\rho_\pi$ is the trajectory distribution induced by the policy $\pi$, and $\mathbf{s}_0$ is sampled from an initial state distribution $\rho(\mathbf{s}_0)$.
The policy can be optimized using any reinforcement learning algorithm by maximizing
$$\mathcal{L}(\pi) = \mathbb{E}_{\tau\sim\rho_\pi}\left[\sum_{t=0}^{\infty}\gamma^t\, r_\mathbf{g}(\mathbf{s}_t,\mathbf{a}_t)\right], \tag{1}$$
where $r_\mathbf{g}: \mathcal{S}\times\mathcal{A}\rightarrow[-R_\mathrm{min}, R_\mathrm{max}]$ is a bounded reward function and $\gamma\in[0,1)$ is a discount factor. (In practice, we use soft actor-critic to learn the policy, which uses a related maximum entropy objective Haarnoja et al. ([2018c](#bib.bib11)).)
However, we *do not* assume that we have access to a shaped reward function. In principle, we could set the reward to $r_\mathbf{g}(\mathbf{s},\mathbf{a}) = 0$ if $\mathbf{s} = \mathbf{g}$ and $r_\mathbf{g}(\mathbf{s},\mathbf{a}) = -1$ otherwise to learn a policy that reaches the goal in as few time steps as possible. Unfortunately, such a sparse reward signal is extremely hard to optimize, as it does not provide any gradient towards the optimal solution until the goal is actually reached. Instead, in Section 4, we will show that we can efficiently learn to reach goals by making use of a learned dynamical distance function.
4 Dynamical Distance Learning
------------------------------
The aim of our method is to learn policies that reach goal states. These goal states can be selected either in an unsupervised fashion, to discover complex skills, or selected manually by the user. The learning process alternates between two steps: in the *distance evaluation* step, we learn a policy-specific dynamical distance, which is defined in the following subsection. In the *policy improvement* step, the policy is optimized to reach the desired goal by using the distance function as the negative reward. This process will lead to a sequence of policies and dynamical distance functions that
converge to an effective goal-reaching policy. Under certain assumptions, we can prove that this process converges to a policy that minimizes the distance from any state to any goal, as discussed in Appendix [B](#A2 "Appendix B Policy Improvement when Using Distance as Reward ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery"). In this section, we define dynamical distances and describe our dynamical distance learning (DDL) procedure. In Section [5](#S5 "5 Goal Proposals ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery"), we will describe the different ways that the goals can be chosen to instantiate our method as a semi-supervised or unsupervised skill learning procedure.
### 4.1 Dynamical Distance Functions
The dynamical distance associated with a policy $\pi$, which we write as $d^\pi(\mathbf{s}_i, \mathbf{s}_j)$, is defined as the expected number of time steps it took for $\pi$ to reach a state $\mathbf{s}_j$ from a state $\mathbf{s}_i$, given that the two were visited in the same episode. (Dynamical distances are not true distance metrics, since they do not in general satisfy triangle inequalities.) Mathematically, the distance is defined as:
$$d^\pi(\mathbf{s}, \mathbf{s}') \triangleq \mathbb{E}_{\tau\sim\pi \,|\, \mathbf{s}_i=\mathbf{s},\; \mathbf{s}_j=\mathbf{s}',\; j\geq i}\left[\sum_{t=i}^{j-1}\gamma^{t-i}\, c(\mathbf{s}_t, \mathbf{s}_{t+1})\right], \tag{2}$$
where $\tau$ is sampled from the conditional distribution of trajectories that pass first through $\mathbf{s}$ and then $\mathbf{s}'$, and where $c$ is some local cost of moving from $\mathbf{s}_i$ to $\mathbf{s}_{i+1}$. For example, in a typical case in the absence of supervision, we can set $c(\mathbf{s}_t, \mathbf{s}_{t+1}) \equiv 1$ analogously to the binary reward function in Equation 1, in which case the sum reduces to $j - i$, and we recover the expected number of time steps to reach $\mathbf{s}'$.
In principle, we could also trivially incorporate more complex local costs $c$, for example to include action costs. This modification would be straightforward, though we focus on the simple $c(\mathbf{s}_t, \mathbf{s}_{t+1}) \equiv 1$ in our derivation and experiments. We include the discount factor to extend the definition to infinitely long trajectories, but in practice we set $\gamma = 1$.
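To make the regression targets concrete, the empirical labels can be enumerated directly from a rollout. The sketch below is our own illustration (the function name and the NumPy-based interface are not from the paper), for the common case $c \equiv 1$:

```python
import numpy as np

def distance_labels(states: np.ndarray):
    """Enumerate all ordered state pairs (s_i, s_j) with i <= j from one
    trajectory, labeled with the empirical dynamical distance j - i.

    states: array of shape [T + 1, state_dim].
    Returns (start_states, end_states, distances).
    """
    idx_i, idx_j = np.triu_indices(len(states))  # all index pairs with i <= j
    return states[idx_i], states[idx_j], (idx_j - idx_i).astype(np.float32)
```

In practice one would subsample pairs rather than materialize all $O(T^2)$ of them, but the labels are computed the same way.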
### 4.2 Distance Evaluation
In the distance evaluation step, we learn a distance function $d^\pi_\psi(\mathbf{s}, \mathbf{s}')$, parameterized by $\psi$, to estimate the dynamical distance between pairs of states visited by a given policy $\pi_\phi$, parameterized by $\phi$. We first roll out the policy multiple times to sample trajectories $\tau_k$ of length $T$. The empirical distance between states $\mathbf{s}_i, \mathbf{s}_j \in \tau_k$, where $0 \leq i \leq j \leq T$, is given by $j - i$. Because the trajectories have a finite length, we are effectively ignoring the cases where reaching $\mathbf{s}_j$ from $\mathbf{s}_i$ would take more than $T - i$ steps, biasing this estimate toward zero, but since the bias becomes smaller for shorter distances, we did not find this to be a major limitation in practice. We can now learn the distance function via supervised regression by minimizing
$$\mathcal{L}_d(\psi) = \frac{1}{2}\,\mathbb{E}_{\substack{\tau\sim\rho_\pi \\ i\sim[0,T] \\ j\sim[i,T]}}\left[\left(d^\pi_\psi(\mathbf{s}_i, \mathbf{s}_j) - (j - i)\right)^2\right]. \tag{6}$$
As we will show in our experimental evaluation, this supervised regression approach makes it feasible to learn dynamical distances for complex tasks with raw image observations, something that has proven exceptionally challenging for methods that learn distances via goal-conditioned policies or value functions and rely on temporal difference-style methods. In direct comparisons, we find that such methods generally struggle to learn on the more complex tasks with image observations. On the other hand, a disadvantage of supervised regression is that it requires on-policy experience, potentially leading to poor sample efficiency. However, because we use the distance as an intermediate representation that guides off-policy policy learning, as we will discuss in [Section 4.3](#S4.SS3 "4.3 Policy Improvement ‣ 4 Dynamical Distance Learning ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery"), we did not find the on-policy updates for the distance to slow down learning. Indeed, our experiments in [Section 6.1](#S6.SS1 "6.1 Vision-Based Real-World Manipulation from Human Preferences ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery") show that we can learn a manipulation task on a real robot with roughly the same amount of experience as is necessary when using a well-shaped and hand-tuned reward function.
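A single regression step on Equation 6 is then ordinary supervised learning. Below is a minimal PyTorch sketch under our own assumptions (the paper does not prescribe a framework, and the layer sizes here are illustrative); `dist_net` maps a concatenated state pair to a scalar distance:

```python
import torch
import torch.nn as nn

state_dim = 8                                  # illustrative, not the paper's
dist_net = nn.Sequential(                      # d_psi(s, s'): R^{2n} -> R
    nn.Linear(2 * state_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
optimizer = torch.optim.Adam(dist_net.parameters(), lr=3e-4)

def distance_update(s_i, s_j, target):
    """One gradient step on L_d(psi) = 0.5 * E[(d_psi(s_i, s_j) - (j - i))^2]."""
    pred = dist_net(torch.cat([s_i, s_j], dim=-1)).squeeze(-1)
    loss = 0.5 * ((pred - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```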
### 4.3 Policy Improvement
In the policy improvement step, we use $d^\pi_\psi$ to optimize a policy $\pi_\phi$, parameterized by $\phi$, to reach a goal $\mathbf{g}$. In principle, we could optimize the policy by choosing actions that greedily minimize the distance to the goal, which essentially treats negative distances as the values of a value function, and would be equivalent to the policy improvement step in standard policy iteration. However, acting greedily with respect to the dynamical distance defined in Equation 2 would result in a policy that is optimistic with respect to the dynamics.
(Figure: a small example MDP with two paths from the start state to the goal $\mathbf{g}$: a risky path with one intermediate state and a safe path with two intermediate states.)

This is because the dynamical distance is defined as the expected number of time steps *conditioned* on the policy successfully reaching the second state from the first state, and therefore does not account for the case where the second state is not reached at all. In some cases, this results in pathologically bad value functions. For example, consider the MDP shown in the figure, where the agent can reach the goal $\mathbf{g}$ using one of two paths. The first path has one intermediate state that leads to the target state with probability $p$, and to an absorbing terminal state $\mathbf{s}_T$ with probability $1 - p$. The other path has two intermediate states, but allows the agent to reach the target every time. The optimal dynamical distance along the first path will be 2, regardless of the value of $p$, while the distance along the second path is 3, causing a greedy policy to always choose the risky path and potentially miss the target completely.
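To spell out the arithmetic (ours, but it follows directly from Equation 2 with $c \equiv 1$ and $\gamma = 1$): conditioning on success discards the failure branch entirely, so

$$d^\pi_{\text{risky}}(\mathbf{s}_0, \mathbf{g}) = 2 \;\text{ for every } p > 0, \qquad d^\pi_{\text{safe}}(\mathbf{s}_0, \mathbf{g}) = 3,$$

and a greedy minimizer prefers the risky path even as $p \to 0$, when the probability of ever reaching the goal along it vanishes.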
The definition of dynamical distances in Equation 2 follows directly from how we learn the distance function, by choosing both $\mathbf{s}_i$ and $\mathbf{s}_j$ from the same trajectory. Conditioning on both $\mathbf{s}_i$ and $\mathbf{s}_j$ is needed when the state space is continuous or large, since visiting two specific states by chance has zero or near-zero probability. We instead propose to use the distance as a negative reward, and apply reinforcement learning to minimize the cumulative distance on the path to the goal:
$$\mathcal{L}_\pi(\phi) = \mathbb{E}_{\tau\sim\rho_\pi}\left[\sum_{t=0}^{\infty}\gamma^t\, d^\pi_\psi(\mathbf{s}_t, \mathbf{g})\right]. \tag{7}$$
This amounts to minimizing the cumulative distance over visited states, and thus taking a risky action becomes unfavourable if it takes the agent to a state that is far from the target at a later time. We further show that, under certain assumptions, the policy that optimizes Equation 7 will indeed acquire the correct behavior, as discussed in Appendix A, and will converge to a policy that takes the shortest path to the goal, as we show in Appendix B.
We note that our simulated experiments below are run in deterministic environments, and we do not fully understand why minimizing cumulative distances works better than greedily minimizing the distance even in those cases. A comparison between the two is shown in Section 6.2.
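Concretely, the only change on the policy side is where the reward comes from. Here is a sketch under our assumptions (reusing `dist_net` from the snippet above; the batch layout and function names are ours, not the paper's):

```python
import torch

@torch.no_grad()
def distance_reward(dist_net, s, goal):
    """Reward for Equation 7: the negative learned distance to the goal.

    s: batch of states, shape [B, state_dim]; goal: shape [state_dim].
    """
    inp = torch.cat([s, goal.expand_as(s)], dim=-1)  # broadcast goal to batch
    return -dist_net(inp).squeeze(-1)

def relabel(batch, dist_net, goal):
    """Relabel a batch of transitions before handing them to the RL
    algorithm; the environment's own reward is never used."""
    batch["reward"] = distance_reward(dist_net, batch["state"], goal)
    return batch
```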
### 4.4 Algorithm Summary
Algorithm 1 Dynamical Distance Learning
1: Input: $\phi, \psi$ ▷ Initial policy and distance parameters
2: Input: $\mathcal{D}$ ▷ Empty replay pool
3: repeat
4:  $\tau \sim \rho_\pi$, $\mathcal{D} \leftarrow \mathcal{D} \cup \tau$ ▷ Sample a new trajectory
5:  for $i = 0$ to $N_d$ do
6:   $\psi \leftarrow \psi - \lambda_d \hat{\nabla}\mathcal{L}_d(\psi; \pi)$ ▷ Minimize distance loss
7:  end for
8:  $\mathbf{g} \leftarrow \mathrm{choose\_goal}(\mathcal{D})$ ▷ Choose goal state
9:  for $i = 0$ to $N_\pi$ do
10:  $\phi \leftarrow \phi - \lambda_\pi \hat{\nabla}\mathcal{L}_\pi(\phi; d, \mathbf{g})$ ▷ Minimize policy loss
11:  end for
12: until converged
The dynamical distance learning (DDL) algorithm is described in Algorithm 1. Our implementation uses soft actor-critic (SAC) Haarnoja et al. ([2018c](#bib.bib11)) as the policy optimizer, but one could also use any other off-the-shelf algorithm. In each iteration, DDL first samples a trajectory using the current policy, and saves it in a replay pool $\mathcal{D}$. In the second step, DDL updates the distance function by minimizing the loss in Equation 6. The distance function is optimized for a fixed number $N_d$ of stochastic gradient steps. Note that this method requires that we use recent experience from $\mathcal{D}$, so as to learn the distance corresponding to the current policy. In the third step, DDL chooses a goal state from the recent experience buffer. We will describe two methods to choose these goal states in Section 5. In the fourth step, DDL updates the policy by taking $N_\pi$ gradient steps to minimize the loss in Equation 7. The implementation of this step depends on the RL algorithm of choice. These steps are then repeated until convergence.
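Putting the pieces together, the outer loop of Algorithm 1 might be organized as follows. This is our own sketch: `sample_trajectory`, `sample_state_pairs`, `choose_goal`, the replay-pool interface, and the `agent` object are stand-ins, not names from the paper's implementation.

```python
def ddl_train(env, agent, dist_net, replay_pool, n_iterations,
              n_dist_steps=100, n_policy_steps=100):
    """Sketch of Algorithm 1's outer loop (our structure, not official code)."""
    for _ in range(n_iterations):
        # Step 1: sample a new on-policy trajectory and store it.
        replay_pool.add(sample_trajectory(env, agent))
        # Step 2: distance evaluation (supervised regression, Equation 6),
        # using recent experience so the distance tracks the current policy.
        for _ in range(n_dist_steps):
            s_i, s_j, target = sample_state_pairs(replay_pool.recent())
            distance_update(s_i, s_j, target)      # from the sketch above
        # Step 3: pick a goal state from recent experience (Section 5).
        goal = choose_goal(replay_pool)
        # Step 4: policy improvement with r = -d(s, g) (Equation 7).
        for _ in range(n_policy_steps):
            batch = relabel(replay_pool.sample_batch(), dist_net, goal)
            agent.update(batch)                    # e.g. one SAC step
    return agent
```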
5 Goal Proposals
-----------------
In the previous section, we discussed how we can utilize a learned distance function to efficiently optimize a goal-reaching policy. However, a learned distance function is only meaningful if evaluated at states from the distribution it has been trained on, suggesting that the goal states should be chosen from the replay pool. Choosing a goal that the policy can already reach might at first appear strange, but it turns out to yield efficient directed exploration, as explained next.
Simple random exploration, such as $\epsilon$-greedy exploration or other strategies that add noise to the actions, can effectively cover states that are close to the starting state in terms of dynamical distance. However, when high-reward states or goal states are far away from the start state, such naïve strategies are unlikely to reach them. From this observation, we can devise a simple and effective exploration strategy that leverages the learned dynamical distances: we first use the policy to reach a known goal as quickly as possible, and then explore the vicinity of that goal. This leaves more time to randomly explore states far from the initial state, making it more likely that useful new states are discovered. We propose two different strategies for choosing the goals below.
### 5.1 Semi-Supervised Learning from Preferences
DDL can be used to learn to reach specific goals elicited from a user. The simplest way to do this is for a user to provide the goal state directly, either by specifying the full state, or selecting the state manually from the replay pool. However, we can also provide a more convenient way to elicit the desired state with preference queries. In this setting, the user is repeatedly presented with a small slate of candidate states from the replay pool, and asked to select the one that they prefer most. In practice, we present the user with a visualization of the final state in several of the most recent episodes, and the user selects the one that they consider closest to their desired goal.
For example, if the user wishes to train a legged robot to walk forward, they might pick the state where the robot has progressed the largest distance in the desired direction. The required user effort in selecting these states is minimal, and most of the agent’s experience is still unsupervised, simply using the latest user-chosen state as the goal. In our experiments, we show that this semi-supervised learning procedure, which we call dynamical distance learning from preferences (DDLfP), can learn to rotate a valve with a real-world hand from just ten queries, and can learn simulated locomotion tasks using 100 simulated queries.
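A preference query can be as lightweight as showing the user the final states of the most recent episodes and recording an index. The console interface below is our own stand-in for however the operator is actually queried:

```python
def preference_goal(recent_episodes, n_candidates=5):
    """Ask the user to pick the final state closest to their desired goal.

    recent_episodes: list of trajectories, each a sequence of states.
    Returns the chosen final state, used as the goal in Equation 7.
    """
    candidates = [episode[-1] for episode in recent_episodes[-n_candidates:]]
    for idx, state in enumerate(candidates):
        print(f"[{idx}] final state: {state}")   # or render an image instead
    choice = int(input("Which candidate is closest to the goal? "))
    return candidates[choice]
```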
### 5.2 Unsupervised Exploration and Skill Acquisition
We can also use DDL to efficiently acquire complex behaviors, such as locomotion skills, in a completely unsupervised fashion. From the observation that many high-reward states are far away from the start state, we can devise a simple and effective exploration strategy that leverages our learned dynamical distances: we can simply select goals that are far from the initial state according to their estimated dynamical distance. We call this variant of our method “dynamical distance learning - unsupervised” (DDLUS).
Intuitively, this method causes the agent to explore the “frontier” of hard-to-reach states, either discovering shorter paths for reaching them, and thus removing them from the frontier, or else finding new states further out on the fringe through additive random exploration. In practice, we find that this allows the agent to quickly explore distant states in a directed fashion. In Section 6, we show that, by setting $\mathrm{choose\_goal}(\mathcal{D}) \equiv \operatorname{arg\,max}_{\mathbf{g}\in\mathcal{D}} d^\pi_\psi(\mathbf{s}_0, \mathbf{g})$, where $\mathbf{s}_0$ is the initial state, we can acquire effective running gaits and pole balancing skills in a variety of simulated settings.
While this approach is not guaranteed to discover interesting and useful skills in general, we find that, on a variety of commonly used benchmark tasks, this approach to unsupervised goal selection actually discovers behaviors that perform better with respect to the (unknown) task reward than previously proposed unsupervised reinforcement learning objectives.
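In code, the unsupervised variant simply replaces the human query with an argmax over the learned distance. A sketch, reusing the `dist_net` interface from above:

```python
import torch

@torch.no_grad()
def ddlus_goal(dist_net, s0, candidate_states):
    """choose_goal(D) = argmax over g in D of d_psi(s0, g).

    s0: initial state, shape [state_dim].
    candidate_states: states drawn from the replay pool, shape [N, state_dim].
    """
    starts = s0.expand_as(candidate_states)      # pair s0 with each candidate
    d = dist_net(torch.cat([starts, candidate_states], dim=-1)).squeeze(-1)
    return candidate_states[d.argmax()]
```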
6 Experiments
--------------

(a) DCLaw
(hardware)

(b) DClaw
(simulation)

(c) Inverted
DoublePendulum

(d) Hopper

(e) HalfCheetah

(f) Ant
Figure 2: We evaluate our method both in simulation and on a real-world robot. We show that our method can learn to turn a valve with a real-world 9-DoF hand ([1(a)](#S6.F1.sf1 "1(a) ‣ Figure 2 ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery")), and run ablations in the simulated version of the same task ([1(b)](#S6.F1.sf2 "1(b) ‣ Figure 2 ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery")). We also demonstrate that our method can learn pole balancing ([1(c)](#S6.F1.sf3 "1(c) ‣ Figure 2 ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery")) and locomotion ([1(d)](#S6.F1.sf4 "1(d) ‣ Figure 2 ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery"), [1(e)](#S6.F1.sf5 "1(e) ‣ Figure 2 ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery"), [1(f)](#S6.F1.sf6 "1(f) ‣ Figure 2 ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery")) skills in simulation.
Our experimental evaluation aims to study the following empirical questions:
(1) Does supervised regression provide a good estimator of the true dynamical distance?
(2) Is DDL applicable to real-world, vision-based robotic control tasks?
(3) Does DDL provide an efficient method of learning skills a) from user-provided preferences, and b) completely unsupervised?
We evaluate our method both in the real world and in simulation on a set of state- and vision-based continuous control tasks. We consider a 9-DoF real-world dexterous manipulation task and 4 standard OpenAI Gym tasks (Hopper-v3, HalfCheetah-v3, Ant-v3, and InvertedDoublePendulum-v2). For all of the tasks, we parameterize our distance function as a neural network, and use soft actor-critic (SAC) Haarnoja et al. ([2018b](#bib.bib10)) with the default hyperparameters to learn the policy. For state-based tasks, we use feed-forward neural networks, and for the vision-based tasks we add a convolutional preprocessing network before the fully connected layers. The image observations for all the vision-based tasks are 3072-dimensional (32x32 RGB images). Further details are presented in Appendix E.
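For the vision-based tasks, a distance network of this kind might look like the following sketch; the layer sizes are our own guesses for illustration, not the architecture reported in Appendix E. Each 32x32 RGB state passes through a shared convolutional encoder, and the concatenated embeddings go through fully connected layers:

```python
import torch
import torch.nn as nn

class VisionDistance(nn.Module):
    """d_psi(s, s') for 32x32 RGB observations (illustrative sizes only)."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2), nn.ReLU(),   # 32 -> 15
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ReLU(),  # 15 -> 7
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, embed_dim), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, s, s_prime):
        # Encode both states with the same encoder, then regress a distance.
        z = torch.cat([self.encoder(s), self.encoder(s_prime)], dim=-1)
        return self.head(z).squeeze(-1)
```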
We study question (1) using a simple didactic example involving navigation through a two-dimensional S-shaped maze, which we present in Appendix [C](#A3 "Appendix C Didactic Example ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery"). The other two research questions are studied in the following sections.
### 6.1 Vision-Based Real-World Manipulation from Human Preferences

Figure 3: (Left) learning curves for the valve rotation task learned from state. (Right) Same task from vision. The curves correspond to the final distance (measured in radians) of the valve from the target angle during a rollout. Our method (DDLfP, orange) solves the task in 8 hours. Its performance is comparable to that of SAC with true rewards, and VICE with example outcome images. DDLfP only requires 10 preference queries, and learns without true rewards or outcome images. We compare our method in the simulated version of this task in [Figure 5](#S6.F5 "Figure 5 ‣ 6.2 Ablations, Comparisons, and Analysis ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery").
To study question (2), we apply DDLfP to a real-world vision-based robotic manipulation task. The domain consists of a 9-DoF “DClaw” hand introduced by Ahn et al. ([2019](#bib.bib1)), and the manipulation task requires the hand to rotate a valve 180 degrees, as shown in Figure 1. The human operator is queried for a preference every 10K environment steps. Both the vision- and state-based experiments with the real robot use 10 queries during the first 4 hours of an 8-hour training period. Note that, for this and all the subsequent experiments, DDLfP does not have access to the true reward, and must learn entirely from preference queries, which in this case are provided by a human operator.
Figure 3 presents the performance over the course of training. DDLfP uses 10 preference queries to learn the task, and its performance is comparable to that of SAC trained with a ground truth shaped reward function. We also show a comparison to variational inverse control with events (VICE) Singh et al. ([2019](#bib.bib25)), a recent classifier-based reward specification framework. Instead of preference queries, VICE requires the user to provide examples of the desired goal state at the beginning of training (20 images in this case). For vision-based tasks, VICE involves directly showing images of the desired outcome to the user, which requires physically arranging a scene and taking a picture of it. Preferences, on the other hand, require the user to simply select one state out of a small set, which can be done with a button press, even remotely, and is therefore often substantially less labor-intensive than VICE. As we can see in the experiments, DDLfP achieves similar performance with substantially less operator effort, using only a small number of preference queries. The series of goal preferences queried from the human operator is shown in Appendix D.

Figure 4: Learning curves for MuJoCo tasks with DDLfP. The y-axis presents the true return of the task. We compare DDLfP to SAC trained directly from the true reward function, which provides an oracle upper bound baseline, and the prior method proposed by Christiano et al. ([2017](#bib.bib3)). The prior method uses an on-policy RL algorithm which typically requires more samples than off-policy algorithms, and thus we also plot its final performance after 20M training steps with red star. At the time of the submission, the Ant-v3 run is still in progress and the complete learning curve will be included in the final.
### 6.2 Ablations, Comparisons, and Analysis
Next, we analyze design decisions in our method and compare it to prior methods in simulation. First, we replace the cumulative objective in Equation 7 with an objective that greedily minimizes the distance function trained with the supervised loss. This objective is unable to learn the task from either state or vision observations. Next, we replace the supervised loss in Equation 6 of our DDL method with a temporal difference (TD) Q-learning style update rule that learns dynamical distances with approximate dynamic programming. The results in Figure 5 show that, all else being equal, the TD-based method fails to learn successfully from both low-dimensional state and vision observations. Figure 5 further shows a comparison between using the dynamical distance as the reward and a reward of -1 for each step until the goal is reached, which corresponds to hindsight experience replay (HER) with goal sampling replaced by preference goals Andrychowicz et al. ([2017](#bib.bib2)).
We see that dynamical distances allow the policy to reach the goal when learning both from state and from images, while HER is only successful when learning from low-dimensional states.

Figure 5: We compare DDL against alternative methods for learning distances on the simulated valve turning task, when learning from the underlying low-dimensional state (left) and from images (right). Dynamical distances used greedily (orange) or learned with TD (green) generally perform poorly. HER (red) can learn from low-dimensional states, but fails to learn from images. Our method, DDLfP (blue) successfully learns the task from either states or images.
These results are corroborated by prior results in the literature, which have found that temporal difference learning struggles to capture the true value accurately Lillicrap et al. ([2015](#bib.bib16)); Fujimoto et al. ([2018](#bib.bib8)). Note that prior work does not use the full state as the goal, but rather manually selects a low-dimensional subspace, such as the location of an object, forcing the distance to focus on task-relevant objects Andrychowicz et al. ([2017](#bib.bib2)). Our method learns distances between full image states (3072-dimensional) while HER uses 3-dimensional goals, a difference of two orders of magnitude in dimensionality. The difficulty of learning with complex image-based goals is further corroborated in prior work Pong et al. ([2018](#bib.bib21)); Nair et al. ([2018](#bib.bib19)); Pong et al. ([2019](#bib.bib22)); Warde-Farley et al. ([2018](#bib.bib27)).
Figure 4 presents results for learning from preferences via DDLfP (in green) on a set of continuous control tasks, to further study question (3,a). The plots show the true reward for each method on each task. DDLfP receives only sparse preferences as task-specific supervision, and the preferences in this case are provided synthetically, choosing the state that has progressed the largest distance from the initial state in the desired direction, i.e. the state with the largest x-coordinate value. However, this still provides substantially less supervision signal than access to the true reward for all samples. We compare to Christiano et al. ([2017](#bib.bib3)), which also uses preferences for learning skills, but without the use of dynamical distances. The prior method is provided with 750 preference queries over the course of training, while our method uses 100 for all locomotion tasks, and only a single query for InvertedDoublePendulum-v2, as the initial state and the goal state coincide. (In our case, one preference query amounts to choosing one of five states, whereas in Christiano et al. ([2017](#bib.bib3)) a query always consists of two state sequences.) Note that Christiano et al. ([2017](#bib.bib3)) utilizes an on-policy RL algorithm, which is less efficient than SAC. However, DDLfP outperforms this prior method in terms of *both* final performance and learning speed on all tasks except Hopper-v3.
Locomotion tasks like the ones considered here do not fit into the DDL framework directly, since there is no single goal state to reach. In this particular case, we can resolve the mismatch by imagining that the "ultimate" task is to reach a specific goal, i.e. the operator always chooses the candidate state closest to the "ultimate task goal". The locomotion task can then be seen as the limit case where the "ultimate goal" lies as far away as is reachable within the maximum episode length.
### 6.3 Acquiring Unsupervised Skills

Figure 6: (Top) Learning curves for DDLUS. The y-axis plots the environment return (not accessible during training) for InvertedDoublePendulum-v3, and the L2-distance travelled from the origin for Hopper-v3, HalfCheetah-v3, and Ant-v3. (Bottom) Frequency histograms of skills learned with DDLUS (blue) and DIAYN (orange) Eysenbach et al. ([2018](#bib.bib4)) across different training runs, evaluated according to the travelled L2-distance from the origin.
Finally, we study question (3,b) in order to understand how well DDLUS can acquire skills without any supervision. We structure these experiments analogously to the unsupervised skill learning experiments proposed by Eysenbach et al. ([2018](#bib.bib4)), and compare to DIAYN, the unsupervised skill discovery method proposed in that work. While our method maximizes the complexity of the learned skills by attempting to reach the furthest possible goal, DIAYN maximizes the diversity of the learned skills; the two methods therefore produce differently biased skills. [Figure 6](#S6.F6 "Figure 6 ‣ 6.3 Acquiring Unsupervised Skills ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery") shows both learning curves and histograms of the skills learned in the locomotion tasks with the two methods, evaluated according to how far the simulated robot in each domain travels from the initial state. Our DDLUS method learns skills that travel further than DIAYN, while still providing a variety of different behaviors (e.g., travel in different directions). This experiment aims to provide a direct comparison to the DIAYN algorithm Eysenbach et al. ([2018](#bib.bib4)), though a reasonable criticism is that maximizing dynamical distance is particularly well-suited to the evaluation criteria proposed by Eysenbach et al. ([2018](#bib.bib4)). We also evaluated DDLUS on the InvertedDoublePendulum-v2 domain, where the task is to balance a pole on a cart. As can be seen from [Figure 6](#S6.F6 "Figure 6 ‣ 6.3 Acquiring Unsupervised Skills ‣ 6 Experiments ‣ Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery"), DDLUS can efficiently solve the task without the true reward, since reaching dynamically distant states amounts to avoiding failure for as long as possible.
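A sketch of the goal-proposal rule this description implies (our paraphrase; `dist_net` and the list-of-states interface are assumptions carried over from the earlier sketch):

```python
import torch

def propose_unsupervised_goal(dist_net, start, visited_states):
    # DDLUS, as described above: choose as the next goal the previously
    # visited state whose learned dynamical distance from the start state
    # is largest, so the policy keeps pursuing the hardest reachable state.
    dists = torch.stack([dist_net(start, s).squeeze() for s in visited_states])
    return visited_states[int(dists.argmax())]
```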
7 Conclusion
-------------
We presented dynamical distance learning (DDL), an algorithm for learning dynamical distances that can be used to specify reward functions for goal-reaching policies, and to support both unsupervised and semi-supervised exploration and skill discovery. Our algorithm uses a simple and stable supervised learning procedure to learn dynamical distances, which are then used to provide a reward function for a standard reinforcement learning method. This makes DDL straightforward to apply even with complex and high-dimensional observations, such as images. By removing the need for manual reward function design and manual reward shaping, our method makes it substantially more practical to employ deep reinforcement learning to acquire skills even with real-world robotic systems. We demonstrate this by learning a valve-turning task with a real-world robotic hand, using 10 preference queries from a human, without any manual reward design or other examples or supervision. One of the main limitations of our current approach is that, although it can be used with an off-policy reinforcement learning algorithm, it requires on-policy data collection for learning the dynamical distances. While the resulting method is still efficient enough to learn directly in the real world, its efficiency can likely be improved in future work by lifting this limitation. This would not only make learning faster but would also make it possible to pre-train dynamical distances using previously collected experience, potentially making it feasible to scale our method to a multi-task learning setting, where the same dynamical distance function can be used to learn multiple distinct skills.
#### Acknowledgments
We thank Vikash Kumar for the DClaw robot design, Nicolas Heess for helpful discussion, and Henry Zhu and Justin Yu for their help on setting up and running the hardware experiments. This research was supported by the Office of Naval Research, the National Science Foundation through IIS-1651843 and IIS-1700696, and Berkeley DeepDrive. |
0b302e37-32c4-4151-b399-592d369be94a | trentmkelly/LessWrong-43k | LessWrong | Covid 10/27/22: Another Origin Story
The big story this week was a new preprint claiming to show that Covid-19 had an unnatural origin. For several days, this was a big story with lots of arguing about it, lots of long threads, lots of people accusing others of bad faith or being idiots or not understanding undergraduate microbiology, and for some reason someone impersonating a virologist to spy on Kelsey Piper.
Then a few days later all discussion of it seemed to vanish. It wasn’t that everyone suddenly came to an agreement to move on. All sides simply decided that this was no longer the Current Thing. See the section for further discussion. In the end I did not update much, so I am mostly fine with this null result.
There’s also more Gain of Function research looking to create a new pandemic. There was a lot of consensus among the comments and those I know that this work must stop, yet little in the way of good ways to stop it. Several people gave versions of ‘have you considered violence or otherwise going outside the law?’ and my answer is no. While the dangers here are real, they are not at anything like the levels that would potentially justify such actions.
NOTE ON DELETED POST FROM THIS WEEK
Finally, I need to address the post that got taken down in a bit more detail. I want to thank Saloni in particular for quickly and efficiently making some of my mistakes clear to me, with links, so that within about an hour I realized I'd made a huge mistake and the whole post structure and conclusions no longer made sense, so I took the post down. Please disregard it. Everyone has been great about understanding that mistakes happen, and I want you to know I appreciate it, and hope it helps me and others similarly address errors in the future.
How did the mistakes happen? Ultimately, it is 100% my fault, on multiple counts, no excuses. What are some of the things I did wrong, so I can hopefully minimize chances they happen again?
My logic was flawed. I wasn’t thinking abou |
c75b4dd8-d05f-478b-b534-353a02c8c8fe | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Building AGI: Promising Approaches, Remaining Milestones, and Likely Obstacles | Yoshua Bengio
I guess I have a few messages one of
them is current AI is very bad in terms
of you know how much they
understand the world around us then
I'll explain more specifically some of
the things that I think are missing that
at mila we are working on and then i
have a few words but some things we're
doing that have to do with beneficial AI
maybe not beneficial AGI yet so well i
don't need to spend much time to
convince you that there's been huge
progress in AI in great part due to
progress and deep learning but maybe i
want to mention that the the neural nets
of today are very different from the
neural nets of the 90s not just because
of depth but things like incorporating
attention mechanisms memory being able
to handle any data structure and not
just vectors and being able to generate
data in a way that you know in the 90s
we have no clue how to do or even until
recently and then of course lots of
applications this is the progress of
generative models over four years from
2014 to 2017 ok but but if you want to
be you know honest about where we are
you know it's it's it's amazing how
superficial and how shallow in in in a
semantic sense the current models are
and you can see that in in the kinds of
mistakes they make of course many of you
know about adversarial examples that
you're seeing here and the fact that
most industrial applications are really
based on supervised learning which
require humans to define the high-level
concepts that the machines are supposed
to learn about and the other thing maybe
people realize a bit less is that if you
if you look at the kinds of mistakes
they make not just
with adversarial examples but other
kinds of more natural mistakes you see
that as soon as you take the current
models out of their training
distribution they they can fail in
pretty bad ways and and one reason why
this happens is because they they rely
on low-level they tend to rely on
low-level superficial properties like
here in the experiments we did in this
paper last year properties of the
Fourier spectrum of the
images which can be manipulated in ways
that humans will not even you know care
about but will completely screw up the
classifiers in a natural language you
can also see lots of examples of failure
even with like the best say machine
translation models or the best language
models for example here you have an
example Google Translate today I just
pick recently from a queue coming from
Hofstadter if you try to translate from
English in their house everything comes
in pairs there's his car and her car
his towels and her towels blah blah
blah in French the translation is
wrong because it's just basically
translating his to sa and her to sa and
unfortunately in French it's the same word
and so it completely loses the meaning
and you know in French you would have to
change the actual structure of the
sentence to get it right so so it's like
these these machines are very myopic and
they've they basically learn little
approximate rules that are all you know
stuck together but no overall
understanding of what the sentence
really means you can also see this by
looking at the performance of these
systems when you you ask pointed
questions about the meaning of course
when you when you measure like
statistical performance and tasks like
machine translation looks good in fact
some people claim oh I've just read
something recently oh we got machine
translation reached human level accuracy
this isn't right because you know
of course in numbers or even like
computer vision right we have not
reached
human-level computer vision in spite of what
you might read because of course if you
take a data set and you just count how
many errors it looks good but then if
you focus on the errors and you see the
kinds of errors that are made they're
very different from the ones human make
so for example if you try to train a
system to answer questions about like
what they is referring to in these
sentences humans will be 100%
accurate because they understand the
meaning behind the words so the women
stop taking pills because they were
pregnant or carcinogenic then of course
they refers to the women or the pills and
current state of the art systems that
are based on deep learning of course do
only barely better than chance I mean
they're able to pick up some cues but
and and in general if you try to compare
our best systems to babies one year olds
two year olds three year olds
our systems don't even have the sort
of understanding of the world in terms
of intuitive physics or intuitive
psychology that small children
have now a bit more than a decade ago
Yann LeCun and I wrote this paper
where we sort of laid out our objectives
for deep learning in terms of the notion
of disentangling the underlying factors
which explain the data and and the idea
is that if we can learn a hierarchy of
representations where at the top level
we really have figured out the important
factors that explain the data then we're
going to get much better generalization
because in in that space you can you can
you can learn new information much more
easily how to relate variables now
becomes a very low dimensional learning
problem more recently I've come to
realize that it's not just about
learning which variables matter to
explain the data but also how these
variables are related to each other
which could be in terms of thinking how
the data was generated or thinking about
how to perform computation in a way
that's gonna be efficient so there's a
notion of disentangling computation as
well factorizing knowledge either in in
the generative sense or
in the computational sense and as I'll
try to argue later this has a lot to do
with the ability of models to figure out
causal structure in the world something
that we don't really know how to do yet
so so this notion of disentangling
representations we elaborated a bit more
in a 2013 review paper with Aaron Courville and
Pascal Vincent and and the general idea
was that we'd like to transform the data
into a space where the relationships
between the variables are very simple
right and so because they're going to be
very simple they're going to be very
simple to learn as well so I mean the
extreme case of this is the the most
common assumption in many deep
generative models is what we call
marginal independence assuming that in
the right representation all of the
variables all those factors are
marginally independent of each other
I think that's too strong of an
assumption most interesting variables
that the kinds of concepts we manipulate
are not independent of each other but
the dependencies are simple and they may
be due to for example simple rule like
type of high-level knowledge that we
communicate with language
another thing that I've studied and I
think is important in relation to
causality is that the kinds of factors
we tend to talk about in in natural
language have to do with how we can
control the world or how other agents
can change things in the world right so
we we like to give names to things that
can be changed in a way that can be
understood easily so so this is a notion
of controllable factors and I'll mention
the conscious prior in a couple of
slides
so one one thing that helps understand
where we are versus where we want to go
is the distinction in cognitive tasks
between what Kahneman and others call
system 1 and system 2 tasks so system 1
tasks are the the tasks that we do using
basically our
intuition we can do them really fast and
and the way that we solve these problems
are essentially heuristic they're not
perfect right and it's all happening at
the unconscious level we don't know how
to explain how we do these tasks
and and so it's it's it's not linguistic
and deep learning currently does really
well at these kinds of tasks but of
course there are other tasks like if I
ask you to add two large numbers you can
do it in your head or using paper and
it's gonna be slow you're gonna use
logic it's gonna be sequential and you
can do it consciously and explain it
linguistically and this is what
algorithms in computer science really
are about and this is also what
classically I was trying to deal with
obviously our brain is doing both of
these things like most of the work is
done by system one but we need we need
the two of them especially if we want to
have machines that can interact with us
in a natural language and so one of the
things I'm gonna say here as a
recommendation for the path towards AGI
is we can't continue doing the kind of
natural language research where we only
train on text data we have to build
systems that simultaneously can
understand words in sentences but also
what those words refer to in some
environment so this is called grounded
language learning and there is some work
in that direction but I think it's gonna
dominate in the future in the ability of
delivering systems that can actually
understand what the sentences mean so
recently with a group of my colleagues at
CIFAR we laid out a plan for the next five
years about where we want our
deep learning research to go next so let
me go through four of these points one
is you know move the focus from passive
observation to active agent so basically
agents of course we've been doing
reinforcement learning but up to now
it's been you know kind of a separate
thing in in machine learning conferences
and and what I'm saying is in order to
move towards AGI it is gonna have to
be part of a lot more of the research
and
and all of these points are related to
each other related to that is well we
have to go beyond perception where we're
already doing pretty well to reasoning
and and planning which are these things
we're doing at the highest levels of
representations that I've been talking
about and and we have to do it in a way
that allows the learner to develop a
full I mean full but but incomplete
model of the world that captures causal
structure so we have a causal model of
the world in our head which allows us to
do things like imagine the situations
that have never happened and will never
happen so-called counterfactuals and
allow us to plan and reason even about
things that are unlikely this is not
something that current machine learning
does well and in doing all of these
things we want to continue on our path
of getting inspiration and producing
synergy with the research trying to
understand how humans are intelligent so
neuroscience cognitive science and so on
it's been very important for my work and
and and many of my colleagues to drive
our exploration of AI and and I believe
that there's a lot more that we can gain
and of course we can also give back to
neuroscience and cognitive science with
the models we're building to help them
understand how the brain actually works
so to summarize one of the things we
want to do is joining system 1 and
system 2 using grounded language
learning where we we build high-level
representations of what is going on but
that are anchored in in low-level
perception and and where the the the
meaning of the words is not just like
some word embedding but it is also
connected to actions and the low-level
perceptions so one of the practical like
nitty-gritty questions on that path is
should we like first learn a world model
think about like a baby which doesn't
act in the world and just or I mean it acts
in the world but doesn't
speak sorry and doesn't hear language
and just tries to figure out the physics
of things and then later once we know
how the world works and we just tag
words on these things and we learn the
language model or should we do both
things at the same time so this is an
interesting question and and and my view
on this is that we should do both at the
same time although of course learning
the world model is like the the primary
thing that needs to drive things if you
think about babies when they're born
usually of course their knowledge of
language is zero and except they already
know a bit about the sounds and so the
main thing they initially learn is about
how things work in the world from a
physical this is a physical point of
view and how other human beings behave
some clues that this is the right thing
to do include things like well the fact
that if you train with supervised
learning these imagenet classifiers the
representations you're getting so you
know forget about the classifier itself
but just the representations you're
getting at the top hidden layer are
really really good and they allowed to
generalize to new categories that have
been have been never seen and if you
think about it in in the case of these
classifiers we are using human knowledge
of high-level categories by by doing
supervised learning I mean it's not like
a complicated language we're just saying
the names of things right and so by
doing that where I get actually getting
representations that are much better
than the representations we get if we do
pure unsupervised learning where we
don't provide any labels so providing
labels is like providing a mini language
to those learners and it actually helps
them discover better representations of
of the visual world right so so this is
a clue that you know we we really want
to have not purely ins provides learning
but eventually having language input to
drive these what are called BBE eyes
okay so so part of this is how do we
learn these models of how the world
works and and I think this is going to
be absolutely necessary to deal with the
current problem of systems that can be
fooled very easily so there's there's
also like a robustness and an even
safety issue with current systems I
think about you know self-driving cars
being in a very rare and dangerous
situation that has never been seen and
is statistically unlikely current
methods I think would tend to fail on
these situations but but humans can
imagine these situations even before
they occur and and and learn to behave
properly and and and we're able to do
that because and what happened with the
title but because we have good model of
the underlying causal mechanisms and in
this allows us to generalize in in a
non-trivial way so traditional machine
learning has been about IID generalization
so the training data and the test data
are supposed to come from the same
distribution
we've been living with this hypothesis
for decades but it's it's wrong it's not
like what humans do is not what the
system's we build for applications
actually mean because we train on data
from say a particular country and then
we want this to work on another country
so so that assumption leads to lack of
robustness and and people have thought
for a long time that well that's the
best we can do if we have that
assumption then we can get guarantees of
generalization if if the test data
doesn't come from the same distribution
as the training data and what guarantees
do we have right well so what I claim is
that we can we can get a more powerful
form of generalization by relaxing the
assumption but still having some
assumption it's just weaker and the
assumption is that the weaker assumption
is is that the the the scenario where
we're going to be using this system
corresponds to a distribution that may
be different from the training one but
involves the same causal mechanisms and
the same underlying concepts right you
can have the same underlying concepts
and the same underlying causal mechanism
but the distribution is very different
think about you know taking pictures
here on earth and then taking pictures
on the moon it's the same underlying
physics but we get very different
looking
right so what changed is it's the same
dynamical system like you know the
physical dynamics of the system but but
initial conditions
maybe the gravity of the Moon is very
different and and maybe also actions
that agents take like you intervene and
you change things in the world are gonna
change particular variables in that
dynamical system leading to potentially
very very different observation
distribution okay so how do we get there
one direction that has already been
taken in our field and I think is gonna
be again central to reaching AGI is to
use simulated environments right so why
does it make sense even if our simulated
environments are really simple compared
to the real world actually I think we
can learn a lot from that and the reason
is that this is like a subtle thing
we're not trying to build AI or AGI
right now as like build the system that
understands how our world works
it looks like that's what we want to do but
actually what we're trying to do in
machine learning research is find the
learning procedure which if it was put
in our world could figure out how our
world works right and we can test those
learning procedures in simpler context
that we can analyze and break down into
gradually more complex requirements and
and that's what I and others are
proposing to do so I think eventually
will converge to some sort of benchmarks
which I think Max Tegmark proposed to
call them AI Olympics where different
research groups would propose different
environments where each research group
would have to deliver learners that can
deal with all of these environments
right and you can think of this as a
more sophisticated Turing test that we
would grow the difficulty of these tasks
as we make progress in research so we've
actually started this with a project
called the baby AI platform and we found
that even with a very very simple
environment which is like a grid world
associated with language constructions
that have a compositional nature so
there's an almost infinite number of
potential instructions of the form
things like you know navigate to this
place and and put something next to
something else the the tasks currently
are extremely difficult even for the the
current best reinforcement learning and
imitation learning systems that we have
so these really really easy easy tasks
that humans can probably learn how to
times faster need hundreds of thousands
or millions of demonstrations to be
learned so so so in a sense I mean it's
it's bad news in the sense that oh we're
really very far from AGI and the good
news is that means we can study those
problems in setups that are simpler and
easier to analyze that don't require as
much computational power so what else is
missing so if you think about how
current machine learning and RL are are
modeling the world I think in addition
to the causality aspect there are there
many other issues so for example our
current typical models think of
predicting the future as a sequence of
one step prediction like I'm gonna
predict the next frame and then given
that and the previous ones I'm gonna
predict the next one and so on so we
typically have these decompositions by
conditional probabilities of the next
thing and then the next thing and the
next thing and that's not at all how
humans build visions of the future right
we can project ourselves at arbitrary
points in the future we don't need to to
have like a little movie in our head of
all the intermediate time steps we could
say something like tomorrow I will do
this and I don't need to predict all of
the intermediate time steps in addition
when I say tomorrow I will go to the
beach I'm just talking about the beach I
don't need to talk about all of the
other aspects of the future state of
what will how the world will be tomorrow
and you know the world is in streaming
complex and has maybe you know tens of
thousands of interesting variables I
could talk about in
language but I'm choosing to only talk
about one or two or three in that
sentence and and we do that all the time
when we project ourselves into the
future planning or imagining or
constructing hypothetical situations so
so this notion of focusing on just a few
variables at a time in our mental
projections is something that's missing
in current machine learning and this is
something I proposed last year in the
paper called the consciousness prior
which which actually considers that this
ability that we have to talk about the
world using just a few variables at a
time is actually a an important
statistical advantage and and so that's
the reason I call it a prior because
it's assuming something about the world
it's assuming that only looking at a few
variables like tomorrow it will rain I
can actually make statements that have a
high probability of being true this is
very very different from if I have to
make a prediction about all of the
pixels you know tomorrow and if if I
were to to try to make these kinds of
predictions in the wrong space in the
space of pixels for example I wouldn't
be able to make very good predictions if
I'm trying to predict one pixel given
three other pixels even if I pick those
pixels well it's not gonna work very
well right if I'm trying to predict how
things will be tomorrow but but if I can
talk in the high-level language of say
it's gonna rain tomorrow seeing I think
it will go a little bit away and
actually then I'm actually able to make
these these very powerful statements and
and that's what we are exploiting when
we say things about the world in
language this is also what classically I
was exploiting with you know the notions
of rules and the variables that are
associated with that so so so now what
this is saying is in addition to this
notion of finding the right space where
we we want to make those statements we
also need to learn those statements
which are those rules right and you can
also think of them as the causal
mechanisms
which relate the variables together in a
graphical model sense all right I mean
let me skip that um so because I use the
word consciousness and this is something
that interests people here let me sort
of now sneak in my view on consciousness
so I think we're confounding I mentioned
it yesterday
we're confounding lots of different
things that computationally are just you
know different they're related when we
talk about consciousness and and none of
those here include the notion of moral
agent and moral patient
which is actually the notion that we
care about when we say are we what
should we do if we have a conscious
machine should it have rights and things
like that I think this is one aspect
which has to do with social interactions
and social contract but but really there
are computational aspects which are
which can be studied separately and
build in machines in ways that I don't
think are so complicated so
self-consciousness is just having a
notion of my state within the bigger set
of my model of the world access
consciousness is what I'm exploiting in
the implementations of the consciousness
prior that I talked about where we only
focus on a few things at a time right
this is attention and this is the things
we're focusing on are these thoughts
that we have at a particular moment
emotions well I mean emotions are just
shortcut calculations to predict the
value of some future state or expected
value of future States in socially
context-dependent
setups and and those those sort of
predictions are very powerful for us the
influence what goes in our memory the
things we choose unconsciously to
remember and in and how our next action
are gonna be decided and then finally
maybe the one that's the most
problematic is the notion of subjective
experience or subjective perception and
I believe this is an overblown issue I
think that even with current neural nets
that learn to model
the world what they have is actually
subjective perception each of these
neural Nets is trained on different data
and and develops different
representations of the world so if you
think about the color red right well for
those in your Nets the color red is not
the wavelength it's it's these
high-level representations of red things
which for some neural net may be
associated with positive experiences
there for some other neuron that may be
some negative experiences and may be
associated with particular aspects of
the world that could be different from
one individual to the other quickly AI
progress comes and the path to AGI comes
with all kinds of hopes and dangers I
don't need to maybe go through that list
I mean just mention a few things that
we're doing at Mila so we we've been
part of the work on the montreal
declaration for the responsible
development of AI which you can think of
as a more elaborated version of the
Asilomar principles there are ten
principles each of them has things like
your well-being equity diversity
prudence responsibility each of them is
then you know broken down into sub
principles which I think go beyond just
AI but also the values that we care
about as was the case for the Asilomar
principles we were also involved in
organizing workshops and events like the
AI for Good workshop at the last NeurIPS
and we have an upcoming workshop or
school I'd say on bias and
discrimination in AI and one thing
I really care about at Mila we believe
that it's important to democratize AI
around the planet and in particular to
make sure that developing countries can
can build their expertise in AI so
we have a program to bring in students
undergrad or grad from developing
countries like Africa and and spend a
few months at Mila we have projects in
of course applied AI in things like
healthcare and now climate change let me
mention one project which
I find cool that we just started yeah
because I think it's related to some of
the issues discussed here about global
governance and and how to how to make
people on earth move towards the
better collective decisions so so one
issue with climate change is that people
don't really understand emotionally
the importance of this because because
what we're seeing in the media and in
papers is very abstract it's happening
you know 50 or 100 years into the future
and even if there's gonna be hundreds of
millions of people dying because of
these things it's hard to make it
personal and emotionally grounded and so
what we're proposing is to use machine
learning in a in a positive way in an
ethically correct way to convince people
of the importance of climate change in a
way that's gonna be emotionally grounded
so for example we can use variations on
Gans to present images of your house or
of your descendants in 50 years or
hundred years as a consequence of the
current decisions about about climate
change the last thing I want to mention
this is really my last slide so this is
a little bit more I know many people
might disagree with this here but the
about existential risk so I do think
that AG AI and AI in general comes with
lots of risks but one that I'm less
confident about less convinced of is the
the risk of really really fast sort of
exponential self-improvement once we
reach AGI I don't think that there's
gonna be like one cutoff point oh we we
reach human-level AI and then suddenly
it's sort of shoots off I don't I don't
I don't believe that
why so so if if the future AGI is
built on machine learning and the kind
of machine learning I understand now and
I can imagine in the future then the
change like the self improvement is not
something that can happen very quickly
it's something that changes very slowly
through learning right
and you know by sort of gradient-
based optimization the other thing is
learning involves actually diminishing
returns in terms of you know the amount
of knowledge obtained as you collect
more data so even if I multiply the
amount of data by a thousand I don't get
a thousand times sort of more efficient
classifiers in fact you-you-you have a
sort of exponential wall here that's
well understood in learning theory and
and similar and similarly yeah so so
rather the the competence probably grows
something like log of the number of
examples and another clue is that the
size of the brain doesn't seem to be
such a big factor like even if we had a
brain that was ten times bigger I'm not
sure we would be sort of ten times
smarter consider you know the fact that
whale brains are much bigger than ours
and I think the main difference between
whales and us is that we have a culture
and and that allows us to gather much
more knowledge in the same size brain
and the other last point is if you look
at at computer science theory and
machine learning theory you find many
many exponential walls so the opposite
of exponential improvement is basically
things become exponentially hard as you
try to to solve some problems because of
things like the curse of dimensionality
and and all kinds of exponential growth
that make for example search
exponentially hard as you consider
harder and harder search problems all
right I'm closing here thanks
[Applause]
I'm very surprised would you say I'm
very surprised oh um you talked about
the diminishing returns of more data
yeah but I think that's actually when
we're talking in this regime of IID data
on a single pass sure I agree I agree
that active learning we can do a lot
better but we don't have really proof
that active learning except in the
simplest setting oh but I'm not talking
about learning I'm talking about
multitask learning and horizontal
transfer yeah but that's only here it's
linear well it's actually I would argue
it's at least linear which is much
better than logarithmic and potentially
super linear because we see positive
transfer right now when you train on
multiple tasks so I guess if you had an
AI that was very competent on one task I
wouldn't be surprised if it could
rapidly become competent on a whole host
of tasks for instance everything that
humans can do and more I agree but I
don't think you get the kind of
exponential growth that people are
afraid of that so of course I'm all for
multitask learning it's very important
other question I'm sure you do
thank you for your wonderful speech and
I have a short question and a
two-question a first question is if
the consciousness prior is a kind of access
consciousness yes
and second question is you said
self-improvement is a hard spot
you think but if the AI gets some kind of
common sense
then they can program other AI and
of course well there is some
possibility to improve itself what do you
think about it well humans are really
really bad at programming actually so if
we get human level performance I mean
ability it still sucks in terms of the
ability of programming things so i i'm
i'm open to demonstrations that i should
be much more concerned and i've already
voiced that to a number of people and
and you know i'm glad to engage in that
discussion because I do care that we
don't make tragic mistakes but I haven't
seen a convincing argument yet that
being said I think that where there's a
common ground between my view and many
people here is that you know whatever we
think about the existential risk coming
from super intelligence there's a large
risk coming from abuse of AI from humans
who could control it and potentially
even accidental you know destruction of
people and wealth by people and
especially nefarious use hi yeah you
seem to insist on this idea of a
spectrum of consciousness that's right
which I agree with but don't you think
there is a phase transition that should
that could could arise like being like
solid liquid gas and then consciousness
is maybe just like many different kind
of gases but is there a phase transition
that is sharp and really cut off there
could be so because I have decomposed
consciousness a number of aspects and
for some of those aspects I can see like
gradual improvement like you get better
and better representations or you can
model more and more aspects of the world
or you can
search more efficiently right so there
are many ways in which you can get
progress and maybe some of these are
like qualitative jumps that's quite
possible yeah thank you
we've talked about a number of these
points and it says so I really agree but
on the causality thing yeah so human
intuition about causality is heavily
linguistically based and it's heavily
from embodied experiences of agency yes
whereas you know there's these you know
sort of mathematical causal graphical
models right notion of causality which
is not necessarily the same thing very
somewhat related but humans are actually
kind of bad at this kind of problem you
know right
Kahneman is that right right no wonder
you you know on which of these you mean
well so so we're bad because we're not
good at search but what we're good at is
that figuring out the right variables so
and those variables are the ones we're
talking about that the ones we're giving
names in language great talk i wanted to
dive into some of the things you said
earlier about learning the benefits of
learning about the environment and
learning language at the same time yes
you can get these higher level
abstractions more quickly yeah certainly
seems true in short but what are the
cost might be that you learn the let's
say the abstractions that we know about
and my understanding is for instance when
when machines are trained to diagnose
cancer from different images they
sometimes come up with new kinds of
abstractions that none of us had thought
of before right so maybe talk a little
bit about the trade-off of locking into
our pretty good but not perfect set and
maybe there are other ones especially
since they have other tense days that
they might be able to draw so I believe
in gradual progress in science and I'd
be very happy if we have machines that
already have our intelligence and now if
you wanted to be more ambitious and and
have machines that have a lot more
intelligence then you might want to free
them a bit more from the constraints if
you want of our language and our culture
although keep in mind that these are not
hard constraints
it's more like you're booted up with
that knowledge and now you know humans
can discover new things and those
machines could discover new things as
well so we could explore both
[Music] |
246c6cd1-1deb-4aaf-9e78-62d2ffbd4add | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups
This summary was posted to LW Main on March 25th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
* Ann Arbor Meetup - singing: 26 March 2016 07:00PM
* European Community Weekend: 02 September 2016 03:35PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* [Melbourne] "What is the Intelligence Explosion?" (and more...) LW Melbourne dojo: 02 April 2016 02:00PM
* New Hampshire Meetup: 12 April 2016 07:00PM
* Vienna Meetup: 16 April 2016 02:00PM
* Washington, D.C.: Singing: 27 March 2016 03:30PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These overview posts are moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your m |
88a18e77-8818-4e60-ac37-a8cd09568a15 | trentmkelly/LessWrong-43k | LessWrong | Join ASAP! (AI Safety Accountability Programme) 🚀
Edit - this group was an experiment which I consider to have mostly been unsuccessful, and the group is no longer very active. I think the two main reasons for this were: (1) lacking a single thing which people were congregating around (e.g. the TransformerLens library in the case of the Open Source Mech Interp Slack group, or the ARENA course material in the case of the ARENA Slack), and (2) top-down rather than bottom-up design of the Slack group and its features.
TL;DR
I'm creating a Slack group for people who are interested in working in AI safety at some point in the future, but who aren't working on it right now[1], and would like extra accountability and motivation while they pursue their goals.
Join with this link!
Why am I creating this?
I just spent an awesome summer in Berkeley doing MLAB, surrounded by people who are really passionate about AI safety, and this definitely had a positive impact on my level of motivation. I think trying to recreate some (even much weaker) version of that would be really valuable. An accountability system is the most basic version of this, because making commitments to other people is a really nice way of motivating yourself to get shit done!
I've spoken to a few people from MLAB, and several seem to agree (at least five participants have mentioned to me that they'd like to join a group like this).
How will this work?
(Note - this might all be changed depending on how many people join, and their suggestions & preferences. Hopefully by the end of next week, the group will be larger and we will have made many improvements to this basic design!)
The core mechanism of the group will be everyone posting regular short updates (might be Slack message, or filling out a Google Form)[2] (maybe once per 2 weeks) summarising what they've done over that period. For instance:
* Books you've read, or courses you've taken, or progress in structured self-study like this
* Blog posts you've written
* Projects you've done, or are |
295b9a8c-9af4-4dcc-bee5-75920e8f23a2 | trentmkelly/LessWrong-43k | LessWrong | Introducing the Existential Risks Introductory Course (ERIC)
Crossposted to the EA Forum.
Introduction
We (the Cambridge Existential Risks Initiative) ran an Existential Risks Introductory Course (ERIC) in the first quarter of 2022, aiming to introduce the field of existential risks, without being explicitly associated with any particular philosophy. We expect the programme to be most useful to people who are new to this field, and we hypothesised that we may be able to reach a different target audience by not explicitly branding it as EA.
The full curriculum we used for the programme, along with exercises and organisation spotlights, can be found here. This was primarily designed by Callum McDougall, with some inputs from the rest of the CERI team.
If you are interested in joining the next iteration of the course in Winter 2022 (either as a participant or as a facilitator), please fill out this interest form.
This post contains an overview of the course, which is followed by an abbreviated version of the syllabus for the ease of gathering feedback. The weekly summaries may also be helpful for community builders looking for summaries of any of the core readings from our syllabus.
We welcome any feedback on the content, exercises or anything else pertaining to the course, either here publicly on the Forum, or you can also reach out to us privately if you prefer that.
Course overview
The course consists of 8 weeks of reading (split into core and applied). Some weeks also include exercises, which participants are encouraged to complete and discuss in the session. Each week, participants will meet for 1.5 hour sessions where they will discuss the material and exercises with a facilitator.
The topics for each week are as follows:
* Week 1: Introduction to Existential Risks
Provides an introduction to x-risks, why they might be both highly important and neglected, and introduces some important terminology.
* Week 2: Natural & Anthropogenic Risks
Discusses natural risks, and risks from nuclear war and cl |
8338eaa4-aeaf-4545-9ce2-367ce8306b11 | trentmkelly/LessWrong-43k | LessWrong | Main actors in the AI race
Which actors (countries, companies, individuals) are currently taking part in the AI race? Who may be, for any reason, the most problematic? I tried to find posts about this, but wasn't very successful.
I'm pretty new to the field and would like to understand the full picture better.
Thanks all! |
ecbb47af-6831-46e4-aedd-bfbe642c2a3d | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Radical Probabilism
*This is an expanded version of* [*my talk*](https://www.lesswrong.com/posts/ZM63n353vh2ag7z4p/radical-probabilism-transcript)*. I assume a high degree of familiarity with Bayesian probability theory.*
[Toward a New Technical Explanation of Technical Explanation](https://www.lesswrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation) -- an attempt to convey the practical implications of logical induction -- was one of my most-appreciated posts, but I don't really get the feeling that very many people have received the update. Granted, that post was speculative, sketching what a new technical explanation of technical explanation *might* look like. I think I can do a bit better now.
If the implied project of that post had really been completed, I would expect new practical probabilistic reasoning tools, explicitly violating Bayes' law. For example, we might expect:
* A new version of information theory.
+ An update to the "[prediction=compression](https://www.lesswrong.com/posts/hAvGi9YAPZAnnjZNY/prediction-compression-transcript-1)" maxim, either repairing it to incorporate the new cases, or explicitly denying it and providing a good intuitive account of why it was wrong.
+ A new account of concepts such as mutual information, allowing for the fact that variables have behavior over thinking time; for example, variables may initially be very correlated, but lose correlation as our picture of each variable becomes more detailed.
* New ways of thinking about epistemology.
+ One thing that my post did manage to do was to spell out the importance of "making advanced predictions", a facet of epistemology which Bayesian thinking does not do justice to.
+ However, I left aspects of the problem of old evidence open, rather than giving a complete way to think about it.
* New probabilistic structures.
+ Bayesian Networks are one really nice way to capture the structure of probability distributions, making them much easier to reason about. Is there anything similar for the new, wider space of probabilistic reasoning which has been opened up?
Unfortunately, I still don't have any of those things to offer. The aim of this post is more humble. I think what I originally wrote was too ambitious for didactic purposes. Where the previous post aimed to communicate the insights of logical induction by sketching broad implications, I here aim to communicate the insights *in themselves*, focusing on the detailed differences between classical Bayesian reasoning and the new space of ways to reason.
Rather than talking about logical induction directly, I'm mainly going to explain things in terms of a very similar philosophy which Richard Jeffrey invented -- apparently starting with his PhD dissertation in the 50s, although I'm unable to get my hands on it or other early references to see how fleshed-out the view was at that point. He called this philosophy **radical probabilism**. Unlike logical induction, radical probabilism appears not to have any roots in worries about logical uncertainty or bounded rationality. Instead it appears to be motivated simply by a desire to generalize, and a refusal to accept unjustified assumptions. Nonetheless, it carries most of the same insights.
Radical Probabilism has not been very concerned with computational issues, and so constructing an actual algorithm (like the logical induction algorithm) has not been a focus. (However, there have been some developments -- see historical notes at the end.) This could be seen as a weakness. However, for the purpose of communicating the core insights, I think this is a strength -- there are fewer technical details to communicate.
A terminological note: I will use "radical probabilism" to refer to the new theory of rationality (treating logical induction as merely a specific way to flesh out Jeffrey's theory). I'm more conflicted about how to refer to the older theory. I'm tempted to just use the term "Bayesian", implying that the new theory is non-Bayesian -- this highlights its rejection of Bayesian updates. However, radical probabilism is Bayesian in the most important sense. Bayesianism is not about Bayes' Law. Bayesianism is, at core, about the subjectivist interpretation of probability. Radical probabilism is, if anything, *much more* subjectivist.
However, this choice of terminology makes for a confusion which readers (and myself) will have to carefully avoid: confusion between Bayesian probability theory and Bayesian updates. The way I'm using the term, a Bayesian need not endorse Bayesian updates.
In any case, I'll default to Jeffrey's term for the opposing viewpoint: ***dogmatic probabilism***. (I will occasionally fall into calling it "classical Bayesianism" or similar.)
What Is Dogmatic Probabilism?
=============================
Dogmatic Probabilism is the doctrine that the conditional probability P(B|A) settles how beliefs must change: when you learn A (and nothing else), your new credence in B is required to be your prior P(B|A); that is, all rational belief updates are Bayesian updates.
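To see what is at stake, here is a minimal worked contrast (our illustration, not from the original post): a dogmatic update sends the credence in A to exactly 1, while Jeffrey's probability kinematics, the kind of update radical probabilism allows, only requires some new credence q in A.

```python
def bayes_update_B(p_B_given_A):
    # Dogmatic update: learning A with certainty sets the new P(B) to P(B|A).
    return p_B_given_A

def jeffrey_update_B(p_B_given_A, p_B_given_notA, q):
    # Probability kinematics: an uncertain observation moves P(A) to q, and
    # the new P(B) is the q-weighted mixture of the two conditionals:
    # P_new(B) = q * P(B|A) + (1 - q) * P(B|not-A).
    return q * p_B_given_A + (1 - q) * p_B_given_notA

# Setting q = 1 recovers the dogmatic (Bayesian) update as a special case:
assert jeffrey_update_B(0.9, 0.2, 1.0) == bayes_update_B(0.9)
```

The radical probabilist's point is that nothing in the subjectivist foundations forces q to be 1.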
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
The dogma of orthodox Bayesianism is that conditionalization is *also* how we *update* probabilistic beliefs: any rational change in beliefs should be explained by a Bayesian update.
We can unpack this a little:
1. (**Dynamic Belief:**) A rational agent is understood to have different beliefs over time -- call these $P_1, P_2, P_3, \ldots$
2. (**Static Rationality:**) At any one time, a rational agent's beliefs $P_n$ are probabilistically coherent (obey the Kolmogorov axioms, or a similar axiomatization of probability theory).
3. (**Empiricism:**) Reasons for changing beliefs *across* time are *given entirely by observations* -- that is, propositions which the agent learns.
4. (**Dogmatism of Perception:**) Observations are believed with probability one, once learned.
5. (**Rigidity:**) Upon observing a proposition $A$, conditional probabilities $P(B|A)$ are unmodified.

The assumptions minus *empiricism* imply that an update on observing $A$ is a Bayesian update: if we start with $P_n$ and update on $A$ to get $P_{n+1}$, then $P_{n+1}(A)$ must equal $1$, and $P_{n+1}(B|A) = P_n(B|A)$. So we must have $P_{n+1}(B) = P_n(B|A)$. Then, *empiricism* says that this is the *only* kind of update we can possibly have.
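To make the derivation concrete, here's a minimal sketch in Python (the four-state toy world and all numbers are made up for illustration): conditioning on $A$ zeroes out the $\neg A$ worlds and renormalizes, which sets $P(A) = 1$ while leaving every $P(B|A)$ untouched.

```python
# A toy check of the update forced by assumptions 1, 2, 4, and 5:
# observing A sets P(A) = 1, keeps every P(.|A) fixed, and renormalizes.

prior = {
    (True, True): 0.30,   # A & B
    (True, False): 0.20,  # A & ~B
    (False, True): 0.20,  # ~A & B
    (False, False): 0.30, # ~A & ~B
}

def bayes_update_on_A(p):
    """Condition on A: zero out the ~A worlds and renormalize the rest."""
    total = sum(v for (a, _), v in p.items() if a)
    return {(a, b): (v / total if a else 0.0) for (a, b), v in p.items()}

posterior = bayes_update_on_A(prior)
print(posterior[(True, True)] + posterior[(False, True)])  # new P(B) = old P(B|A) = 0.6
```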
What Is Radical Probabilism?
============================
Radical probabilism accepts assumptions #1 and #2, but rejects the rest. (Logical Induction need not follow axiom #2, either, since beliefs at any given time only approximately follow the probability laws -- however, it's not necessary to discuss this complication here. Jeffrey's philosophy did not attempt to tackle such things.)
Jeffrey seemed uncomfortable with updating to 100% on anything, making *dogmatism of perception* untenable. A similar view [is already popular on LessWrong](https://www.lesswrong.com/posts/ooypcn7qFzsMcy53R/infinite-certainty), but it seems that no one here took the implication and denied Bayesian updates as a result. (Bayesian updates [have been questioned for other reasons](https://www.lesswrong.com/posts/W6nXfmKTrgaiaLSRg/why-and-why-not-bayesian-updating), of course.) This is a bit of an embarrassment. But fans of Bayesian updates reading this are more likely to accept that zero and one are probabilities, rather than give up Bayes.
Fortunately, this isn't actually the crux. Radical probabilism is a pure generalization of orthodox Bayesianism; you can have zero and one as probabilities, and still be a radical probabilist. The real fun begins not with the rejection of *dogmatism of perception*, but with the rejection of *rigidity* and *empiricism*.
This gives us a view in which a rational update from $P_n$ to $P_{n+1}$ can be almost anything. (You still can't update from $P_n(A) = 0$ to $P_{n+1}(A) > 0$.) Simply put, *you are allowed to change your mind*. This doesn't make you irrational.
Yet, there are still *some* rationality constraints. In fact, we can say a lot about how rational agents think in this model. In place of assumptions #3-#5, we assume *rational agents cannot be Dutch Booked*.
Radical Probabilism and Dutch Books
===================================
Rejecting the Dutch Book for Bayesian Updates
---------------------------------------------
At this point, if you're familiar with the philosophy of probability theory, you might be thinking: wait a minute, isn't there a Dutch Book argument for Bayesian updates? If radical probabilism accepts the validity of Dutch Book arguments, shouldn't it thereby be forced into Bayesian updates?
No!
As it turns out, there is a major flaw in the Dutch Book for Bayesian updates. The argument *assumes that the bookie knows how the agent will update*. (I encourage the interested reader to [read the SEP section on diachronic Dutch Book arguments](https://plato.stanford.edu/entries/dutch-book/#DiacDutcBookArgu) for details.) Normally, a Dutch Book argument requires the bookie to be *ignorant*. It's no surprise if a bookie can take our lunch money by getting us to agree to bets *when the bookie knows something we don't know*. So what's actually established by these arguments is: ***if you know how you're going to update, then your update had better be Bayesian.***
Actually, that's not quite right: the argument for Bayesian updates also still assumes *dogmatism of perception*. If we relax that assumption, all we can really argue for is *rigidity*: ***if you know how you are going to update, then your update had better be rigid.***
This leads to a generalized update rule, called **Jeffrey updating** (or Jeffrey conditioning).
Generalized Updates
-------------------
Jeffrey updates keep the rigidity assumption, but reject *dogmatism of perception*. So, we're changing the probability of some sentence $A$ to $P(A) = c$, without changing any $P(B|A)$. There's only one way to do this:

$$P_{n+1}(B) = c \cdot P_n(B \mid A) + (1 - c) \cdot P_n(B \mid \neg A)$$

In other words, the Jeffrey update interpolates linearly between the Bayesian update on $A$ and the Bayesian update on $\neg A$. This generalizes Bayesian updates to allow for uncertain evidence: we're not sure we just saw someone duck behind the corner, but we're 40% sure.
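Here's the same rule as a short Python sketch (toy numbers, purely illustrative): a Jeffrey update just rescales the $A$-worlds and the $\neg A$-worlds separately.

```python
def jeffrey_update_on_A(p, c):
    """Set P(A) = c while keeping P(.|A) and P(.|~A) fixed: a mixture of
    the Bayesian update on A (weight c) and on ~A (weight 1 - c)."""
    p_a = sum(v for (a, _), v in p.items() if a)
    return {(a, b): v * (c / p_a if a else (1 - c) / (1 - p_a))
            for (a, b), v in p.items()}

# Worlds keyed by (A, B); numbers illustrative.
prior = {(True, True): 0.30, (True, False): 0.20,
         (False, True): 0.20, (False, False): 0.30}

posterior = jeffrey_update_on_A(prior, 0.4)  # "40% sure they ducked behind the corner"
print(sum(v for (a, _), v in posterior.items() if a))  # P(A) is now 0.4
```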
If this way of updating seems a bit arbitrary to you, Jeffrey would agree. It offers only a small generalization of Bayes. Jeffrey wants to open up much broader space:
*[Figure: classifying updates by assumptions made.]*

As I've already said, the rigidity assumption can only be justified *if the agent knows how it will update*. Philosophers like to say the agent *has a plan* for updating: "If I saw a UFO land in my yard and little green men come out, I would believe I was hallucinating." This is something we've worked out ahead of time.
A non-rigid update, on the other hand, means you don't know how you'd react: "If I saw a convincing proof of P=NP, I wouldn't know what to think. I'd have to consider it carefully." I'll call non-rigid updates ***fluid updates***.
For me, fluid updates are primarily about *having longer to think, and reaching better conclusions as a result.* That's because *my* main motivation for accepting a radical-probabilist view is logical uncertainty. Without such a motivation, I can't really imagine being very interested. I boggle at the fact that Jeffrey arrived at this view without such a motivation.
**Dogmatic Probabilist:** *All I can say is: why??*
**Richard Jeffrey:** *I've explained to you how the Dutch Book for Bayesian updates fails. What more do you want? My view is simply what you get when you remove the faulty assumptions and keep the rest.*
**Dogmatic Probabilist (DP):** *I understand that, but why should anyone be interested in this theory? OK, sure, I CAN make Jeffrey updates without getting Dutch Booked. But why ever would I? If I see a cloth in dim lighting, and update to 80% confident the cloth is red, I update in that way **because of the evidence which I've seen, which is itself fully confident**. How could it be any other way?*
**Richard Jeffrey (RJ):** *Tell me one piece of information you're absolutely certain of in such a situation.*
**DP:** *I'm certain I had that experience, of looking at the cloth.*
**RJ:** *Surely you aren't 100% sure you were looking at cloth. It's merely very probable.*
**DP:** *Fine then. The experience of looking at ... what I was looking at.*
**RJ:** *I'll grant you that tautologies have probability one.*
**DP:** *It's not a tautology... it's the fact that I had an experience, rather than none!*
**RJ:** *OK, but you are trying to defend the position that **there is some observation, which you condition on, which explains your 80% confidence the cloth is red.** Conditioning on "I had an experience, rather than none" won't do that. What proposition are you confident in, which explains your less-confident updates?*
**DP:** *The photons hitting my retinas, which I directly experience.*
**RJ:** *Surely not. You don't have any detailed knowledge of that.*
**DP:** *OK, fine, the individual rods and cones.*
**RJ:** *I doubt that. Within the retina, before any message gets sent to the brain, these get put through an opponent process which sharpens the contrast and colors. You're not perceiving rods and cones directly, but rather a probabilistic guess at light conditions based on rod and cone activation.*
**DP:** *The output of that process, then.*
**RJ:** *Again I doubt it. You're engaging in **inner-outer hocus pocus**. There is no clean dividing line before which a signal is external, and after which that signal has been "observed". The optic nerve is a noisy channel, warping the signal. And the output of the optic nerve itself gets processed at V1, so the rest of your visual processing doesn't get direct access to it, but rather a processed version of the information. And all this processing is noisy. Nowhere is anything certain. Everything is a guess. If, anywhere in the brain, there were a sharp 100% observation, then the nerves carrying that signal to other parts of the brain would rapidly turn it into a 99% observation, or a 90% observation...*
**DP:** *I begin to suspect you are trying to describe human fallibility rather than ideal rationality.*
**RJ:** *Not so! I'm describing how to rationally deal with uncertain observations. The source of this uncertainty could be anything. I'm merely giving human examples to establish that the theory has practical interest for humans. The theory itself only throws out unnecessary assumptions from the usual theory of rationality -- as we've already discussed.*
**DP:** *(sigh...) OK. I'm still never going to design an artificial intelligence to have uncertain observations. It just doesn't seem like something you do on purpose. But let's grant, provisionally, that rational agents could do so and still be called rational.*
**RJ:** *Great.*
**DP:** *So what's this about giving up rigidity??*
**RJ:** *It's the same story: it's just another assumption we don't need.*
**DP:** *Right, but then how do we update?*
**RJ:** *However we want.*
**DP:** *Right, but how? I want a constructive story for where my updates come from.*
**RJ:** *Well, if you precommit to update in a predictable fashion, you'll be Dutch-Bookable unless it's a rigid fashion.*
**DP:** *So you admit it! Updates need to be rigid!*
**RJ:** *By no means!*
**DP:** *But updates need to come from somewhere. Whether you know it or not, there's some mechanism in your brain which produces the updates.*
**RJ:** *Whether you know it or not is a critical factor. Updates you can't anticipate need not be Bayesian.*
**DP:** *Right, but... the point of epistemology is to give guidance about forming rational beliefs. So you should provide some formula for updating. But any formula is predictable. So a formula has to satisfy the rigidity condition. So it's got to be a Bayesian update, or at least a Jeffrey update. Right?*
**RJ:** *I see the confusion. But epistemology does not have to reduce things to a strict formula in order to provide useful advice. Radical probabilism can still say many useful things. Indeed, I think it's **more** useful, since it's closer to real human experience. Humans can't always account for **why** they change their minds. They've updated, but they can't give any account of where it came from.*
**DP:** *OK... but... I'm sure as hell never designing an artificial intelligence that way.*
I hope you see what I mean. It's all terribly uninteresting to a typical Bayesian, especially with the design of artificial agents in mind. Why consider uncertainty about evidence? Why study updates which don't obey any concrete update rules? What would it even *mean* for an artificial intelligence to be designed with such updates?
In the light of logical uncertainty, however, it all becomes well-motivated. Updates are unpredictable not because there's no rule behind them -- nor because we lack knowledge of what exactly that rule is -- but because we can't always anticipate the results of computations before we finish running them. There are updates without corresponding evidence because we can think longer to reach better conclusions, and doing so does not reduce to Bayesian conditioning on the output of some computation. This doesn't imply uncertain evidence in exactly Jeffrey's sense, but it does give us cases where we update specific propositions to confidence levels other than 100%, and want to know how to move other beliefs in response. For example, we might apply a heuristic to determine that some number is very very likely to be prime, and update on this information.
Still, I'm very impressed with Jeffrey for reaching so many of the right conclusions without this motivation.
Other Rationality Properties
----------------------------
So far, I've emphasized that fluid updates "can be almost anything". This makes it sound as if there are essentially no rationality constraints at all! However, this is far from true. We can establish some very important properties via Dutch Book.
**Convergence**
No *single* update can be condemned as irrational. However, if you keep changing your mind again and again without ever settling down, *that* is irrational. Rational beliefs are required to eventually move less and less, converging to a single value.
*Proof:* If there exists a point $p$ which your beliefs forever oscillate around (that is, your belief falls above $p + c$ infinitely often, and falls below $p - c$ infinitely often, for some $c > 0$) then a bookie can make money off of you as follows: when your belief is below $p - c$, the bookie makes a bet in favor of the proposition in question, at $p : (1 - p)$ odds. When your belief is above $p + c$, the bookie offers to cancel that bet for a small fee. The bookie earns the fee with certainty, since your beliefs are sure to swing down eventually (allowing the bet to be placed) and are sure to swing up some time after that (allowing the fee to be collected). What's more, the bookie can do this again and again and again, turning you into a money pump.

If there exists no such $p$, then your beliefs must converge to some value. □
Caveat: this is the proof in the context of logical induction. There are other ways to establish convergence in other formalizations of radical probabilism.
In any case, this is *really important.* This isn't just a nice rationality property. *It's a nice rationality property which dogmatic probabilists don't have.* Lack of a convergence guarantee is **one of the main criticisms Frequentists make of Bayesian updates.** And it's a good critique!
Consider a simple coin-tossing scenario, in which we have two hypotheses: $h_{1/3}$ posits that the probability of heads is $1/3$, and $h_{2/3}$ posits that the probability of heads is $2/3$. The prior places probability $1/2$ on both of these hypotheses. The only problem is that the true probability of heads is $1/2$. What happens? The probabilities $P(h_{1/3})$ and $P(h_{2/3})$ will oscillate forever without converging.
*Proof:* The quantity $\text{heads} - \text{tails}$ will take a random walk as we keep flipping the fair coin. A random walk returns to zero infinitely often (a phenomenon known as [gambler's ruin](https://en.wikipedia.org/wiki/Random_walk#One-dimensional_random_walk)). At each such point, evidence is evenly balanced between the two hypotheses, so we've returned to the prior. Then, the next flip is either heads or tails. This results in a probability of $1/3$ for one of the hypotheses, and $2/3$ for the other. This sequence of events happens infinitely often, so $P(h_{1/3})$ and $P(h_{2/3})$ keep experiencing changes of size at least $1/6$, never settling down. □
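A quick simulation sketch makes this vivid (illustrative code; exact counts depend on the random seed): the posterior keeps returning to $1/2$ and jumping away again, rather than converging.

```python
import random

random.seed(0)

# Bayesian updating with only h_{1/3} and h_{2/3} in the hypothesis space,
# while the true coin is fair. The posterior never settles down.
p13 = 0.5  # P(h_{1/3})
history = []
for _ in range(100_000):
    heads = random.random() < 0.5     # the true, fair coin
    like13 = 1/3 if heads else 2/3    # likelihood under h_{1/3}
    like23 = 2/3 if heads else 1/3    # likelihood under h_{2/3}
    z = p13 * like13 + (1 - p13) * like23
    p13 = p13 * like13 / z
    history.append(p13)

# The posterior returns to (essentially) 1/2 again and again, arbitrarily
# late, and each such return is followed by a jump to ~1/3 or ~2/3.
returns = [i for i, x in enumerate(history) if abs(x - 0.5) < 1e-6]
print(len(returns), returns[-1] if returns else None)
```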
Now, the objection to Bayesian updates here isn't *just* that oscillating forever looks irrational. Bayesian updates are *supposed to help us predict the data well*; in particular, you might think they're *supposed to help us minimize log-loss*. But here, we would be doing much better if beliefs would converge toward $P(h_{1/3}) = P(h_{2/3}) = 1/2$. The problem is, Bayes takes each new bit of evidence *just as seriously* as the last. Really, though, a rational agent in this situation should be saying: "Ugh, this again! If I send my probability up, it'll come crashing right back down some time later. I should skip all the hassle and keep my probability close to where it is."
In other words, a rational agent should be looking out for Dutch Books against itself, including the non-convergence Dutch Book. Its probabilities should be adjusted to avoid such Dutch Books.
**DP:** *Why should I be bothered by this example? If my prior is as you describe it, I assign **literally zero probability** to the world you describe -- I **know** the coin isn't fair. I'm fine with my inference procedure displaying pathological behavior in a universe I'm absolutely confident I'm not in.*
**RJ:** *So you're fine with an inference procedure which performs abysmally in the real world?*
**DP:** *What? Of course not.*
**RJ:** *But the real world cannot possibly be in your hypothesis space.* [*It's too big.*](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh/p/efWfvrWLgJmbBAs3m) *You can't explicitly write it down.*
**DP:** *Physicists seem to be making good progress.*
**RJ:** *Sure, but those aren't hypotheses which you can directly use to anticipate your experiences. They require too much computation. Anything that can fit in your head, can't be the real world.*
**DP:** *You're dealing with human frailty again.*
**RJ:** *On the contrary. Even idealized agents can't fit inside a universe they can perfectly predict. To see the contradiction, just let two of them play rock-paper-scissors with each other. Anything that can anticipate what you expect, and then do something else, can't be in your hypothesis space. But let me try a different angle of attack. Bayesianism is supposed to be the philosophy of subjective probability. Here, you're arguing as if the prior represented an objective fact about how the universe is. It isn't, and can't be.*
**DP:** *I'll deal with both of those points at once. I don't really need to assume that the **actual universe** is within my hypothesis space. Constructing a prior over a set of hypotheses guarantees you this: **if there is a best element in that class, you will converge to it.** In the coin-flip example, I don't have the objective universe in my set of hypotheses unless I can perfectly predict every coin-flip. But the subjective hypothesis which treats the coin as fair is the best of its kind. In the rock-paper-scissors example, rational players would similarly converge toward treating each other's moves as random, with $1/3$ probability on each move.*
**RJ:** *Good. But you've set up the punchline for me: **if there is no best element, you lack a convergence guarantee.***
**DP:** *But it seems as if good priors usually do have a best element. Using* [*Laplace's rule of succession*](https://en.wikipedia.org/wiki/Rule_of_succession)*, I can predict coins of any bias without divergence.*
**RJ:** *What if the coin lands as follows: 5 heads in a row, then 25 tails, then 125 heads, and so on, each run lasting for the next power of five. Then you diverge again.*
**DP:** *Ok, sure... but if the coin flips might not be independent, then I should have hypotheses like that in my prior.*
**RJ:** *I could keep trying to give examples which break your prior, and you could keep trying to patch it. But we have agreed on the important thing: good priors should have the convergence property. At least you've agreed that this is a desirable property not always achieved by Bayes.*
**DP:** *Sure.*
In the end, I'm not sure who would win the counterexample/patch game: it's quite possible that there are general priors with convergence guarantees. [No computable prior has convergence guarantees for "sufficiently rich" observables](https://www.lesswrong.com/posts/5bd75cc58225bf067037518c/all-mathematicians-are-trollable-divergence-of-naturalistic-logical-updates) (i.e., observables including logical combinations of observables). However, that's a theorem with a lot of caveats. In particular, Solomonoff Induction isn't computable, so might be immune to the critique. And we can certainly get rid of the problem by restricting the observables, e.g. by [conditioning on their sequential order rather than just their truth](https://www.lesswrong.com/posts/CvKnhXTu9BPcdKE4W/an-untrollable-mathematician-illustrated). Yet, [I suspect all such solutions will either be really dumb, or uncomputable](https://www.lesswrong.com/posts/fhJkQo34cYw6KqpH3/thinking-about-filtered-evidence-is-very-hard).
So there's work to be done here.
But, in general (ie *without any special prior which does guarantee convergence for restricted observation models*), a Bayesian [relies on a realizability (aka grain-of-truth) assumption for convergence, as it does for some other nice properties](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/efWfvrWLgJmbBAs3m). Radical probabilism demands these properties without such an assumption.
So much for technical details. Another point I want to make is that convergence points at a notion of "objectivity" for the radical probabilist. Although the individual updates a radical probabilist makes can go all over the place, the beliefs must eventually settle down to something. The goal of reasoning is to settle down to that answer as quickly as possible. Updates may appear arbitrary from the outside, but internally, they are always moving toward this goal.
This point is further emphasized by the next rationality property: conservation of expected evidence.
**Conservation of Expected Evidence**
[The law of conservation of expected evidence](https://www.lesswrong.com/posts/zTfSXQracE7TW8x4w/mistakes-with-conservation-of-expected-evidence-1) is a dearly beloved Bayesian principle. You'll be glad to hear that it survives unscathed:
$$P_n(X) = \mathbb{E}_n[P_m(X)]$$

In the above, $P_n(X)$ is your current belief in some proposition $X$; $P_m(X)$ is some future belief about $X$ (so I'm assuming $m > n$); and $\mathbb{E}_n$ is the expected value operator according to your current beliefs. So what the equation says is: your current beliefs equal your expected value of your future beliefs. This is just like the usual formulation of no-expected-net-update, except we no longer take the expectation *with respect to evidence*, since a non-Bayesian update may not be grounded in evidence.
*Proof:* Suppose $P_n(X) \neq \mathbb{E}_n[P_m(X)]$. One of the two numbers is higher, and the other lower. Suppose $\mathbb{E}_n[P_m(X)]$ is the lower number. Then a bookie can buy a certificate which pays $P_m(X)$ dollars on day $m$; we will willingly sell the bookie this for $\mathbb{E}_n[P_m(X)]$ dollars. The bookie can also sell us a certificate paying one dollar if $X$, for a price of $P_n(X)$ dollars. At time $m$, the bookie gains $P_m(X)$ dollars due to the first certificate. It can then buy the second certificate back from us for $P_m(X)$ dollars, using the winnings. Overall, the bookie has now paid $\mathbb{E}_n[P_m(X)]$ dollars to us, but we have paid the bookie $P_n(X)$ dollars, which we assumed was greater. So the bookie profits the difference.

If $P_n(X)$ is the lower number instead, the same strategy works, reversing all buys and sells. □
The key idea here is that both a direct bet on $X$ and a bet on $P_m(X)$ will be worth $P_m(X)$ later, so they'd better have the same price now, too.
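As a toy numeric check (numbers made up): if today you think you'll end up at 0.9 or 0.3 with equal probability after more thinking, your current belief must already be 0.6.

```python
# Toy check of conservation of expected evidence: today's belief must
# equal today's expectation of the future belief.

p_today = 0.6
# (future belief, probability we assign *today* to ending up there)
futures = [(0.9, 0.5), (0.3, 0.5)]

expected_future = sum(belief * prob for belief, prob in futures)
assert abs(expected_future - p_today) < 1e-9
# Had we instead expected to average, say, 0.7 tomorrow, the bookie above
# could trade the direct bet on X against a bet on our future belief and
# pocket the difference.
```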
I see this property as being even more important for a radical probabilist than it is for a dogmatic probabilist. For a dogmatic probabilist, it's a consequence of Bayesian conditional probability. For a radical probabilist, it's a basic condition on rational updates. With updates being so free to go in any direction, it's an important anchor-point.
Another name for this law is *the martingale property*. This is a property of many stochastic processes, such as Brownian motion. From [wikipedia](https://en.wikipedia.org/wiki/Martingale_(probability_theory)):
> In [probability theory](https://en.wikipedia.org/wiki/Probability_theory), a **martingale** is a [sequence](https://en.wikipedia.org/wiki/Sequence) of [random variables](https://en.wikipedia.org/wiki/Random_variable) (i.e., a [stochastic process](https://en.wikipedia.org/wiki/Stochastic_process)) for which, at a particular time, the [conditional expectation](https://en.wikipedia.org/wiki/Conditional_expectation) of the next value in the sequence, given all prior values, is equal to the present value.
It's important that a sequence of rational beliefs have this property. Otherwise, future beliefs are different from current beliefs in a predictable way, and we would be better off updating ahead of time.
Actually, that's not immediately obvious, right? The bookie in the Dutch Book argument doesn't make money by updating to the future belief faster than the agent, but rather, by playing the agent's beliefs off of each other.
This leads me to a stronger property, which has the martingale property as an immediate consequence (**strong self trust**):

$$P_n(X \mid P_m(X) = y) = y$$

Again I'm assuming $m > n$. The idea here is supposed to be: *if you knew your own future belief, you would believe it already*. Furthermore, you believe $X$ and $P_m(X)$ are perfectly correlated: the only way you'd have high confidence in $X$ would be if it were very probably true, and the only way you'd have low confidence would be for it to be very probably false.
I won't try to prove this one. In fact, be wary: this rationality condition is a bit too strong. The condition holds true in the radical-probabilism formalization of [*Diachronic Coherence and Radical Probabilism* by Brian Skyrms](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.194.482&rep=rep1&type=pdf), so long as $P_n(P_m(X) = y) > 0$ (see section 6 for statement and proof). However, [*Logical Induction*](https://arxiv.org/abs/1609.03543) argues persuasively that this condition is undesirable in specific cases, and replaces it with a slightly weaker condition (see section 4.12).
Nonetheless, for simplicity, I'll proceed as if *strong self trust* were precisely true.
At the end of the previous section, I promised that the current section would further illuminate my remark:
> The goal of reasoning is to settle down to that answer as quickly as possible. Updates may appear arbitrary from the outside, but internally, they are always moving toward this goal.
The way radical probabilism allows *just about any change* when beliefs shift from $P_n$ to $P_{n+1}$ may make its updates seem irrational. How can the update be *anything*, and still be called rational? Doesn't that mean a radical probabilist is open to garbage updates?
No. A radical probabilist doesn't *subjectively* think all updates are equally rational. A radical probabilist *trusts the progression of their own thinking*, and also *does not yet know the outcome of their own thinking*; this is why I asserted earlier that a fluid update can be just about anything (barring the transformation of a zero into a positive probability). However, this does not mean that a radical probabilist would accept a psychedelic pill which arbitrarily modified their beliefs.
Suppose a radical probabilist has a sequence of beliefs $P_1, P_2, P_3, P_4, \ldots, P_n$. If they thought hard for a while, they could update to $P_{n+1}$. On the other hand, if they took the psychedelic pill, their beliefs would be modified to become $Q$. The sequence would be abruptly disrupted, and go off the rails: $P_1, P_2, P_3, \ldots, P_n, Q, R, S, \ldots$
The radical probabilist does not trust *whatever they believe next*. Rather, the radical probabilist has a concept of *virtuous epistemic process*, and is willing to believe the next output of such a process. Disruptions to the epistemic process do not get this sort of trust without reason. (For those familiar with *The Abolition of Man*, this concept is very reminiscent of Lewis's "Tao".)
On the other hand, a radical probabilist *could* trust a different process. One person, P, might trust that another person, Q, is better-informed about any subject:
$$P_n(X \mid Q_n(X) = y) = y$$

This says that P trusts Q on any subject if they've had the same amount of time to think. This leaves open the question of what P thinks if Q has had longer to think. In the extreme case, it might be that P thinks Q is better *no matter how long P has to think*:
$$\forall m, n\colon\ P_m(X \mid Q_n(X) = y) = y$$

On the other hand, P and Q can both be perfectly rational by the standards of radical probabilism *and not trust each other at all*. P might not trust Q's opinion no matter how long Q thinks.
(Note, however, that you *do* get eventual agreement on matters where good feedback is available -- much like in dogmatic Bayesianism, it's difficult for two Bayesians to disagree about *empirical predictions* for long.)
This means you can't necessarily replace one "virtuous epistemic process" with another. $P_1, P_2, P_3, \ldots$ and $Q_1, Q_2, Q_3, \ldots$ might both be perfectly rational by the standards of radical probabilism, and yet the disrupted sequence $P_1, P_2, P_3, Q_4, Q_5, Q_6, \ldots$ would not be, because $P_3$ does not necessarily trust $Q_4$ or subsequent $Q$s.
Realistically, we can be in this kind of position *and not even know what constitutes a virtuous reasoning process by our standards*. We generally think that we can "do philosophy" and reach better conclusions. But we don't have a clean specification of our own thinking process. We don't know exactly what counts as a virtuous continuation of our thinking vs a disruption.
This has some implications for AI alignment, but I won't try to spell them out here.
**Calibration**
One more rationality property before we move on.
One could be forgiven for reading Eliezer's [A Technical Explanation of Technical Explanation](https://www.lesswrong.com/posts/afmj8TKAqH6F2QMfZ/a-technical-explanation-of-technical-explanation) and coming to believe that Bayesian reasoners are calibrated. Eliezer goes so far as to suggest that we *define* probability in terms of calibration, so that *what it means* to say "90% probability" is that, in cases where you say 90%, the thing happens 9 out of 10 times.
However, the truth is that calibration is a neglected property in Bayesian probability theory. Bayesian updates do not help you learn to be calibrated, any more than they help your beliefs to be convergent.
We can make a sort of Dutch Book argument for calibration: if things happen nine out of ten times when the agent says 80%, then a bookie can place bets with the agent at 85:15 odds and profit in the long run. (Note, however, that this is a bit different from typical Dutch Book arguments: it's a strategy in which the bookie risks some money, rather than just getting a sure gain. What I can say is that Logical Induction treats this as a valid Dutch Book, and so, we get a calibration property in that formalism. I'm not sure about other formalizations of Radical Probabilism.)
The intuition is similar to convergence: even lacking a hypothesis to explain it, a rational agent should eventually notice "hey, when I say 80%, the thing happens 90% of the time!". It can then improve its beliefs in future cases by adjusting upwards.
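Here's a crude sketch of that adjustment in Python (the bucketing scheme is my own simplification; a logical inductor handles this far more carefully): bucket past claims by stated probability and shift future claims toward the observed frequency.

```python
from collections import defaultdict

# Illustrative track record: "80%" claims that came true 9 times out of 10.
track_record = [(0.8, True)] * 9 + [(0.8, False)]

buckets = defaultdict(list)
for stated, came_true in track_record:
    buckets[round(stated, 1)].append(came_true)

def recalibrate(stated):
    """Replace a raw credence with the empirical frequency of its bucket, if known."""
    outcomes = buckets.get(round(stated, 1))
    if not outcomes:
        return stated  # no track record for this bucket; leave the credence alone
    return sum(outcomes) / len(outcomes)

print(recalibrate(0.8))  # 0.9 -- "when I say 80%, it happens 90% of the time"
```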
This illustrates "meta-probabilistic beliefs": a radical probabilist can have informed opinions *about the beliefs themselves*. By default, a classical Bayesian doesn't have beliefs-about-beliefs except as a result of learning about the world and reasoning about themselves as a part of the world, which is [problematic in the classical Bayesian formalism](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh/p/efWfvrWLgJmbBAs3m). It is possible to add second-order probabilities, third-order, etc. But calibration is a case which collapses all those levels, illustrating how the radical probabilist can handle all of this more naturally.
I'm struck by the way calibration *is something Bayesians obviously want*. The set of people who advocate applying Bayes' Law and the set of people who look at calibration charts for their own probabilities have *very significant overlap*. Yet, Bayes' Law does not give you calibration. It makes me feel like more people should have noticed this sooner and made a bigger deal about it.
Bayes From a Distance
=====================
Before any more technical details about radical probabilism, I want to take a step back and give one intuition for what's going on here.
We can see radical probabilism as *what a dogmatic Bayesian looks like if you can't see all the details*.
The Rationality of Acquaintances
--------------------------------
Imagine you have a roommate who is perfectly rational in the dogmatic sense: this roommate has low-level observations which are 100% confident, and performs a perfect Bayesian update on those observations.
However, observing your roommate, you can't track all the details of this. You talk to your roommate about some important beliefs, but you can't track every little Bayesian update -- that would mean tracking every sensory stimulus.
From your perspective, your roommate has constantly shifting beliefs, which can't quite be accounted for. If you are particularly puzzled by a shift in belief, you can discuss reasons. "I updated against getting a cat because I observed a hairball in our neighbor's apartment." Yet, none of the evidence discussed is itself 100% confident -- it's at least a little bit removed from low-level sense-data, and at least a little uncertain.
Yet, this is not a big obstacle to viewing your roommate's beliefs as rational. You can evaluate these beliefs on their own merits.
I've heard this model called *Bayes-with-a-side-channel*. You have an agent updating via Bayes, but part of the evidence is hidden. You can't give a formula for changes in belief over time, but you can still assert that they'll follow conservation of expected evidence, and some other rationality conditions.
What Jeffrey proposes is that we allow these dynamics without necessarily positing a side-channel to explain the unpredictable updates. This has an anti-reductionist flavor to it: updates do not have to reduce to observations. But why should we be reductionist in that way? Why would subjective belief updates *need* to reduce to observations?
*(Note that Bayes-with-a-side-channel does not imply conditions such as convergence and calibration; so, Jeffrey's theory of rationality is more demanding.)*
Wetware Bayes
-------------
Of course, Jeffrey would say that our relationship with ourselves is much like the roommate in my story. Our beliefs move around, and while we can often give some account of why, we can't give a full account in terms of things we've learned with 100% confidence. And it's not simply because we're a Bayesian reasoner who lacks introspective access to the low-level information. The nature of our wetware is such that there isn't really any place you can point to and say "this is a 100% known observation". Jeffrey would go on to point out that there's no clean dividing line between external and internal, so you can't really draw a boundary between external event and internal observation-of-that-event.
(I would remark that Jeffrey doesn't exactly give us a way to *handle* that problem; he just offers an abstraction which doesn't chafe on that aspect of reality so badly.)
Rather than imagining that there are perfect observations somewhere in the nervous system, we can instead imagine that a sensory stimulus exerts a kind of "evidential pressure" which can be less than 100%. These evidential pressures can also come from within the brain, as is the case with logical updates.
But Where Do Updates Come From?
-------------------------------
Dogmatic probabilism raises the all-important question "where do priors come from?" -- but once you answer that, everything else is supposed to be settled. There have been many debates about what constitutes a rational prior.
> **Q. How can I find the priors for a problem?**
> **A.** Many commonly used priors are listed in the *Handbook of Chemistry and Physics.*
>
> **Q. Where do priors** ***originally*** **come from?**
> **A.** Never ask that question.
>
> **Q. Uh huh. Then where do scientists get their priors?**
> **A.** Priors for scientific problems are established by annual vote of the AAAS. In recent years the vote has become fractious and controversial, with widespread acrimony, factional polarization, and several outright assassinations. This may be a front for infighting within the Bayes Council, or it may be that the disputants have too much spare time. No one is really sure.
>
> **Q. I see. And where does everyone else get their priors?**
> **A.** They download their priors from Kazaa.
>
> **Q. What if the priors I want aren't available on Kazaa?**
> **A.** There's a small, cluttered antique shop in a back alley of San Francisco's Chinatown. *Don't ask about the bronze rat.*
>
> *--* Eliezer Yudkowsky, [*An Intuitive Explanation of Bayes' Theorem*](https://yudkowsky.net/rational/bayes)
Radical probabilists put less emphasis on the prior, since a radical probabilist can effectively "decide to have a different prior" (updating their beliefs as if they'd swapped out one prior for another). However, they face a similarly large problem of where *updates* come from.
We are given a picture in which beliefs are like a small particle in a fluid, reacting to all sorts of forces (some strong and some weak). Its location gradually shifts as a result of Brownian motion. Presumably, the interesting work is being done behind the scenes, by whatever is *generating* these updates. Yet, Jeffrey's picture seems to mainly be about the dance of the particle, while the fluid around it remains a mystery.
A full answer to that question is beyond the scope of this post. (Logical Induction offers *one* fully detailed answer to that question.) However, I do want to make a few remarks on this problem.
* It might at first seem strange for beliefs to be so radically malleable to external pressures. But, actually, this is already the familiar Bayesian picture: everything happens due to externally-driven updates.
* Bayesian updates don't really answer the question of where updates come from, either. They take it as given that there are some "observations". Radical probabilism simply allows for a *more general* sort of feedback for learning.
* An orthodox probabilist might answer this challenge by saying something like: when we design an agent, we design sensors for it. These are connected in such a way as to feed in sensory observations. A radical probabilist can similarly say: when we design an agent, we get to decide what sort of feedback it uses to improve its beliefs.
The next section will give some practical, human examples of non-Bayesian updates.
Virtual Evidence
================
Bayesian updates are path-independent: it does not matter in what order you apply them. If you first learn $A$ and then learn $B$, your updated probability distribution is $P_3(X) = P_2(X \mid B) = P_1(X \mid A \& B)$. If you learn these facts the other way around, it's still $P_3(X) = P_2(X \mid A) = P_1(X \mid A \& B)$.
Jeffrey updates are path-dependent. Suppose my probability distribution is as follows:
|     | A   | ¬A  |
| --- | --- | --- |
| B   | 30% | 20% |
| ¬B  | 20% | 30% |
I then apply the Jeffrey update $P(B) = 60\%$:

|     | A   | ¬A  |
| --- | --- | --- |
| B   | 36% | 24% |
| ¬B  | 16% | 24% |
Now I apply $P(A) = 60\%$:

|     | A      | ¬A  |
| --- | ------ | --- |
| B   | 41.54% | 20% |
| ¬B  | 18.46% | 20% |
Since this is asymmetric, but the initial distribution was symmetric, obviously this would turn out differently if we had applied the Jeffrey updates in a different order.
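Here's a short Python sketch reproducing the computation (same toy numbers as above); note that the analogous experiment with Bayesian conditioning on $A$ and $B$ would give the same answer in either order.

```python
# Jeffrey updates in different orders give different answers, whereas
# Bayesian conditioning on A and B would commute.

def jeffrey(p, target, c):
    """Set the marginal of `target` ('A' or 'B') to c, keeping conditionals fixed."""
    idx = 0 if target == 'A' else 1
    marginal = sum(v for k, v in p.items() if k[idx])
    return {k: v * (c / marginal if k[idx] else (1 - c) / (1 - marginal))
            for k, v in p.items()}

prior = {(True, True): 0.30, (True, False): 0.20,   # keys are (A, B)
         (False, True): 0.20, (False, False): 0.30}

b_then_a = jeffrey(jeffrey(prior, 'B', 0.6), 'A', 0.6)
a_then_b = jeffrey(jeffrey(prior, 'A', 0.6), 'B', 0.6)
print(b_then_a[(True, False)])  # ~0.1846
print(a_then_b[(True, False)])  # 0.20 -- order matters
```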
Jeffrey considered this to be a bug -- although he seems fine with path-dependence under some circumstances, he used examples like the above to motivate a *different* way of handling uncertain evidence, which I'll call ***virtual evidence***. (Judea Pearl strongly advocated virtual evidence over Jeffrey's rule near the beginning of *Probabilistic Reasoning in Intelligent Systems* (sections 2.2.2 and 2.3.3), in what can easily be read as a critique of Jeffrey's theory -- if one does not realize that Jeffrey is largely in agreement with Pearl. I thoroughly recommend Pearl's discussion of the details.)
Recall the basic anatomy of a Bayesian update:

$$P(h \mid e) = \frac{P(e \mid h) \cdot P(h)}{P(e)}$$

where $P(h)$ is the prior on hypothesis $h$, $P(e \mid h)$ is the likelihood of the evidence $e$, and $P(e)$ is the normalizing factor.
The idea of virtual evidence is to use evidence $e$ which is not an event in our event space. We're just acting *as if* there were evidence $e$ which justifies our update. Terms such as $P(e)$, $P(e \& h)$, $P(e \mid h)$, $P(h \mid e)$, and so on are not given the usual probabilistic interpretation; they just stand as a convenient notation for the update. ***All we need to know is the likelihood function for the update.*** We then multiply our probabilities by the likelihood function as usual, and normalize. $P(e)$ is easy to find, since it's just whatever factor makes everything sum to one at the end. This is good, since it isn't clear what $P(e)$ would mean for a virtual event.
Actually, we can simplify even further. All we *really* need to know is the likelihood *ratio:* the ratio between the two numbers in the likelihood function. (I will illustrate this with an example soon). However, it may sometimes be easier to find the whole likelihood function in practice.
Let's look at the path-dependence example again. As before, we start with:
|     | A   | ¬A  |
| --- | --- | --- |
| B   | 30% | 20% |
| ¬B  | 20% | 30% |
I want to apply a Jeffrey update which makes $P(B) = 60\%$. However, let's represent the update via virtual evidence this time. Currently, $P(B) = 50\%$. To take it to 60%, we need to see virtual evidence with a 60:40 likelihood ratio, such as $P(e \mid B) = 60\%$, $P(e \mid \neg B) = 40\%$. This gives us the same update as before:
|     | A   | ¬A  |
| --- | --- | --- |
| B   | 36% | 24% |
| ¬B  | 16% | 24% |
(Note that we would have gotten the same result with a likelihood function of $P(e \mid B) = 3\%$, $P(e \mid \neg B) = 2\%$, since 60:40 is the same as 3:2. That's what I meant when I said that only the ratio matters.)
But now we want to apply the same update to $A$ as we did to $B$. So now we update on virtual evidence $P(e \mid A) = 60\%$, $P(e \mid \neg A) = 40\%$. This gives us the following (approximately):
|     | A   | ¬A  |
| --- | --- | --- |
| B   | 43% | 19% |
| ¬B  | 19% | 19% |
As you can see, the result is quite symmetric. In general, virtual evidence updates will be path-independent, because multiplication is commutative (and the normalization step of updating doesn't mess with this commutativity).
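A minimal sketch of the procedure in Python (illustrative; `virtual_update` is my own naming): multiply the target worlds by the likelihood ratio, renormalize, and observe that the order cannot matter.

```python
# Virtual-evidence updating: multiply by the likelihood ratio, renormalize.
# Multiplication commutes, so the update is path-independent.

def virtual_update(p, target, ratio):
    """Apply virtual evidence with likelihood ratio `ratio` in favor of `target`."""
    idx = 0 if target == 'A' else 1
    unnorm = {k: v * (ratio if k[idx] else 1.0) for k, v in p.items()}
    z = sum(unnorm.values())  # the "P(e)" normalizer falls out automatically
    return {k: v / z for k, v in unnorm.items()}

prior = {(True, True): 0.30, (True, False): 0.20,   # keys are (A, B)
         (False, True): 0.20, (False, False): 0.30}

one = virtual_update(virtual_update(prior, 'B', 60/40), 'A', 60/40)
two = virtual_update(virtual_update(prior, 'A', 60/40), 'B', 60/40)
assert all(abs(one[k] - two[k]) < 1e-12 for k in one)
print(round(one[(True, True)], 3))  # ~0.429, the 43% cell in the table above
```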
So, virtual evidence is a reformulation of Jeffrey updates with a lot of advantages:
* Unlike raw Jeffrey updates, virtual evidence is path-independent.
* You don't have to decide right away what you're updating *to*; you just have to decide the strength and direction of the update.
* I don't fully discuss this here, but Pearl argues persuasively that it's easier to tell when a virtual-evidence update is appropriate than when a Jeffrey update is appropriate.
Because of these features, virtual evidence is much more useful for *integrating information from multiple sources.*
Integrating Expert Opinions
---------------------------
Suppose you have an ancient artefact. You want to know whether this artefact was made by ancient aliens. You have some friends who are also curious about ancient aliens, so you enlist their help.
You ask one friend who is a metallurgist. After performing experiments (the details of which you don't understand), the metallurgist isn't sure, but gives 80% that the tests would turn out that way if it were of terrestrial origin, and 20% for metals of non-terrestrial origin. (Let's pretend that ancient aliens would 100% use metals of non-Earth origin, and that ancient humans would 100% use Earth metals.)
You then ask a second friend, who is an anthropologist. The anthropologist uses cultural signs, identifying the style of the art and writing. Based on that information, the anthropologist estimates that it's half as likely to be of terrestrial origin as alien.
How do we integrate this information? According to Jeffrey and Pearl, we can apply the virtual evidence formula *if we think the two expert judgements are independent*. What 'independence' means for virtual evidence is a bit murky, since the evidence is not part of our probability calculus, so we can't apply the usual probabilistic definition. However, Pearl argues persuasively that this condition is easier to evaluate in practice than the *rigidity* condition which governs the applicability of Jeffrey updates. (He also gives an example where rigidity is violated, so a naive Jeffrey update gives a nonsensical result but where virtual evidence can still be easily applied to get a correct result.)
The information provided by the anthropologist and the metallurgist seem to be quite independent types of information (at least, if we ignore the fact that both experts are biased by an interest in ancient aliens), so let's apply the virtual evidence rule. The likelihood ratio from the metallurgist was 80:20, which simplifies to 4:1. The likelihood ratio from the anthropologist was 1:2. That makes the combined likelihood vector 2:1 in favor of terrestrial origin. We would then combine this with our prior; for example, if we had a prior of 3:1 in favor of a terrestrial origin, our posterior would be 6:1 in favor.
(Note that we also have to think that the virtual evidence is independent *of our prior information.)*
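As a quick sanity check of the arithmetic, here's a minimal Haskell sketch (my own illustration, not from any library) that combines the prior with the two experts' likelihood ratios in odds form:

```haskell
-- Odds in favor of terrestrial origin, as a pair; only the ratio matters.
type Odds = (Double, Double)

-- Independent virtual evidence combines by multiplying odds componentwise.
combine :: Odds -> Odds -> Odds
combine (t1, a1) (t2, a2) = (t1 * t2, a1 * a2)

priorOdds, metallurgist, anthropologist :: Odds
priorOdds      = (3, 1)    -- 3:1 in favor of terrestrial origin
metallurgist   = (80, 20)  -- the 80%/20% likelihood function, i.e. 4:1
anthropologist = (1, 2)    -- half as likely terrestrial as alien

main :: IO ()
main = print (foldl combine priorOdds [metallurgist, anthropologist])
-- prints (240.0, 40.0), i.e. 6:1 in favor of terrestrial origin
```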
So, virtual evidence offers a practical way to integrate information when we cannot quantify exactly what the evidence was -- a condition which is especially likely when consulting experts. This illustrates the utility of the bayes-with-a-side-channel model mentioned earlier; we are able to deal effectively with evidence, even when the exact nature of the evidence is hidden to us.
A few notes on how we gathered expert information in our hypothetical example.
* We asked for likelihood ratios, rather than posterior probabilities. This allows us to combine the information as virtual evidence.
* In the case of the metallurgist, it makes sense to ask for likelihood ratios, since the metallurgist is unlikely to have good prior information about the artefact. Asking only for likelihoods allows us to factor out any effect from this poor prior (and instead use our own prior, which may still be poor, but has the advantage of being ours).
* In the case of the anthropologist, however, it doesn't make as much sense -- if we trust their expertise, we're likely to think the anthropologist has a good prior about artefacts. It might have made more sense to ask for the anthropologist's posterior, take it as our own, and *then* apply a virtual-evidence update to integrate the metallurgist's report. (However, if we weren't able to properly communicate our own prior information to the anthropologist, it would be ignored in such an approach.)
* In the case of the metallurgist, it felt more natural to give a full likelihood function, rather than a likelihood ratio. It makes sense to know the probability of test result given a particular substance. It would have made even more sense if the likelihood function were *a function of each metal the artefact could be made of*, rather than just "terrestrial" or "extraterrestrial" -- using broad categories allows the metallurgist's prior about specific substances to creep in, which might be unfortunate.
* In the case of the anthropologist, however, it didn't make sense to give a full likelihood function. "The probability that the artefact would look exactly the way it looks assuming that it's made by humans" is very very low, and seems quite difficult and unnatural to evaluate. It seems much easier to come up with a likelihood *ratio,* comparing the probability of terrestrial and extraterrestrial origin.
Why did Pearl devote several sections to virtual evidence, in a book which is otherwise a bible for dogmatic probabilists? I think the main reason is the close analogy to the mathematics of Bayesian networks. The message-passing algorithm which makes Bayesian networks efficient is almost exactly the virtual evidence procedure I've described. If we think of each node as an expert trying to integrate information from its neighbors, then the efficiency of Bayes nets comes from the fact that they can use virtual evidence to update on likelihood functions rather than needing to know about the evidence in detail. Indeed, this may even have been one source of inspiration for Pearl's belief propagation algorithm.
Can Dogmatic Probabilists Use Virtual Evidence?
-----------------------------------------------
OK, so we've put Jeffrey's radical updates into a more palatable form -- one which borrows the structure and notation of classical Bayesian updates.
Does this mean orthodox Bayesians can join the party, and use virtual evidence to accomplish everything a radical probabilist can do?
No.
**Virtual evidence abandons the ratio formula.**
One of the longstanding axioms of classical Bayesian thought is the ratio formula for conditional probability that Bayes himself introduced:
P(A|B) = P(A&B) / P(B)

Virtual evidence, as an updating practice, holds that P(A|B) can be usefully defined in cases where the ratio P(A&B)/P(B) ***cannot*** be usefully defined. Indeed, virtual evidence treats Bayes' Law (which is usually a derived theorem) as more fundamental than the ratio formula (which is usually taken as a definition).
Granted, dogmatic probabilism *as I defined it at the beginning of this post* does not explicitly assume the ratio formula. But the assumption is so ingrained that I assume most readers took P(A|B) to mean the ratio.
Still, even so, we can *consider* a version of dogmatic probabilism which rejects the ratio formula. Couldn't they use virtual evidence?
**Virtual evidence requires probability functions to take arguments which aren't part of the event space.**
Even abandoning the ratio formula, still, it's hard to see how a dogmatic probabilist could use virtual evidence without abandoning the Kolmogorov axioms as the foundation of probability theory. The Kolmogorov axioms make probabilities a function of events; and events are taken from a pre-defined event space. Virtual evidence constructs new events at will, and does not include them in an overarching event space (so that, for example, virtual evidence V can be defined -- so that P(X|V) is meaningful for all X from the event space -- without events like X&V being meaningful, as would be required for a sigma-algebra).
I left some wiggle room in my definition, saying that a dogmatic probabilist might endorse the Kolmogorov axioms "or a similar axiomatization of probability theory". But even the Jeffrey-Bolker axioms, which are pretty liberal, don't allow enough flexibility for this!
Representing Fluid Updates
--------------------------
A final point about virtual evidence and Jeffrey updates.
Near the beginning of this essay, I gave a picture in which Jeffrey updates generalize Bayesian updates, but *fluid* updates generalize things even further, opening up the space of possibilities when rigidity does not hold.
However, I should point out that *any update is a Jeffrey update on a sufficiently fine partition.*
So far, for simplicity, I've focused on binary partitions: we're judging between H and ¬H, rather than a larger set such as H₁, H₂, H₃. However, we can generalize everything to arbitrarily sized partitions, and will often want to do so. I noted that a larger set might have been better when asking the metallurgist about the artefact, since it's easier to judge the probability of test results given specific metals rather than broad categories.
If we make a partition large enough to cover every possible combination of events, then a Jeffrey update is now just a completely arbitrary shift in probability. Or, alternatively, we can represent arbitrary shifts via virtual evidence, by converting to likelihood-ratio format.
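Here is a minimal Haskell sketch of that conversion (my own illustration, assuming `prior` and `target` range over the same finite partition and the prior cells are nonzero): the pointwise ratio target/prior is a likelihood function whose virtual-evidence update carries the prior exactly to the target.

```haskell
-- Any shift between two distributions over a finite partition, expressed as
-- a virtual-evidence likelihood function: one ratio per cell.
toLikelihood :: [Double] -> [Double] -> [Double]
toLikelihood prior target = zipWith (/) target prior

-- Applying the likelihoods recovers the target: multiply cellwise, then
-- renormalize.
applyLikelihood :: [Double] -> [Double] -> [Double]
applyLikelihood prior ls = [ v / total | v <- raw ]
  where raw   = zipWith (*) prior ls
        total = sum raw
```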
So, these updates are completely general after all.
Granted, there might not be any *point* to seeing things that way.
Non-Sequential Prediction
=========================
One advantage of radical probabilism is that it offers a more general framework for statistical learning theory. I already mentioned, briefly, that it allows one to do away with the realizability/grain-of-truth assumption. This is very important, but not what I'm going to dwell on here. Instead I'm going to talk about non-sequential prediction, which is a benefit of logical induction which I think has been under-emphasized so far.
Information theory -- in particular, algorithmic information theory -- in particular, Solomonoff induction -- is restricted to a *sequential prediction* frame. This means there's a very rigid observation model: observations are a sequence of tokens and you always observe the *nth* token after observing tokens one through *n-1*.
Granted, you can fit lots of things into a sequential prediction model. However, it is a flaw in the otherwise close relationship between Bayesian probability and information theory. You'll run into this if you try to relate information theory and logic. Can you give an information-theoretic intuition for the laws of probability that deal with logical combinations, such as P(A or B) + P(A and B) = P(A) + P(B)?
I've [complained about this before](https://www.alignmentforum.org/posts/fhJkQo34cYw6KqpH3/thinking-about-filtered-evidence-is-very-hard), offering a theorem which (somewhat) problematizes the situation, and suggesting that people should notice whether or not they're making sequential-prediction style assumptions. I almost included related assumptions in my definition of dogmatic probabilism at the beginning of this post, but ultimately it makes more sense to contrast radical probabilism to the more general doctrine of Bayesian updates.
Sequential prediction cares only about the accuracy of beliefs *at the moment of observation*; the accuracy of the full distribution over the future is reduced to the accuracy about each next bit as it is observed.
If information is coming in "in any old way" rather than according to the assumptions of sequential prediction, then we can construct problematic cases for Solomonoff induction. For example, if we condition the *nth* bit to be 1 (or 0) when a theorem prover proves (or refutes) the *nth* sentence of Peano arithmetic, then Solomonoff induction will never assign positive probability to hypotheses consistent with Peano arithmetic, and will therefore do poorly on this prediction task. This is despite the fact that there are *computable* programs which do better at this prediction task; for example, the same theorem prover running just a little bit faster can have highly accurate beliefs at the moment of observation.
In non-sequential prediction, however, we care about accuracy *at every moment*, rather than just at the moment of observation. Running the same theorem prover, just one step faster, doesn't do very well on that metric. It allows you to get things right just in time, but you won't have any clue about what probabilities to assign before that. We don't just want the right conclusion; we want to get there as fast as possible, and (in a subtle sense) via a rational path.
Part of the difficulty of non-sequential prediction is how to score it. Bayes loss applied to your predictions at the moment of observation, in a sequential prediction setting, seems quite useful. Bayes loss *applied to all your beliefs, at every moment* does not seem very useful.
Radical probabilism gives us a way to evaluate the rationality of non-sequential predictions -- namely, how vulnerable the sequence of belief distributions was to losing money via some sequence of bets.
Sadly, I'm not yet aware of any appropriate generalization of information theory -- at least not one that's very interesting. (You can index information by time, to account for the way probabilities shift over time... but that does not come with a nice theory of communication or compression, which are fundamental to classical information theory.) This is why I [objected to prediction=compression in the discussion section of Alkjash's talk](https://www.lesswrong.com/posts/hAvGi9YAPZAnnjZNY/prediction-compression-transcript-1).
To summarize, sequential prediction makes three critical assumptions which may not be true in general:
* It assumes observations will always inform us about one of a set of observable variables. In general, Bayesian updates can instead inform us about any event, including complex logical combinations (such as "either the first bit is 1, or the second bit is 0").
* It assumes these observations will be made in a specific sequence, whereas in general updates could come in in any order.
* It assumes that what we care about is the accuracy of belief *at the time of observation;* in general, we may care about the accuracy of beliefs at other times.
The only way I currently know how to get theoretical benefits similar to those of Solomonoff induction while avoiding all three of these assumptions is radical probabilism (in particular, as formalized by logical induction).
*(The connection between this section and radical probabilism is notably weaker than the other parts of this essay. I think there is a lot of low-hanging fruit here, fleshing out the space of possible properties, the relationship between various problems and various assumptions, trying to generalize information theory, clarifying our concept of observation models, et cetera.)*
Making the Meta-Bayesian Update
===============================
In *Pascal's Muggle* ([long version](https://www.lesswrong.com/posts/Ap4KfkHyxjYPDiqh2/pascal-s-muggle-infinitesimal-priors-and-strong-evidence), [short version](https://www.lesswrong.com/posts/KDzXTWSTg8ArwbhRR/pascal-s-muggle-short-version)) Eliezer discusses situations in which he would be forced to make a non-Bayesian update:
> But if I actually see strong evidence for something I previously thought was super-improbable, I don't just do a Bayesian update, I should also question whether I was right to assign such a tiny probability in the first place - whether the scenario was really as complex, or unnatural, as I thought. In real life, you are not ever supposed to have a prior improbability of 10^-100 for some fact distinguished enough to be written down, and yet encounter strong evidence, say 10^10 to 1, that the thing has actually happened. If something like that happens, you don't do a Bayesian update to a posterior of 10^-90. Instead you question both whether the evidence might be weaker than it seems, *and* whether your estimate of prior improbability might have been poorly calibrated, because rational agents who actually have well-calibrated priors should not encounter situations like that until they are ten billion days old. Now, this may mean that I end up doing some non-Bayesian updates: I say some hypothesis has a prior probability of a quadrillion to one, you show me evidence with a likelihood ratio of a billion to one, and I say 'Guess I was wrong about that quadrillion to one thing' rather than being a Muggle about it.
At the risk of being too cutesy, I want to make two related points:
* At the object level, radical probabilism offers a framework in which we can make these sorts of non-Bayesian updates. We can encounter something which makes us question our whole way of thinking. It also allows us to significantly revise that way of thinking, without modeling the situation as something extreme like self-modification (or even something very out of the ordinary).
* At the meta level, updating to radical probabilism *is itself* one of these non-Bayesian updates. Of course, if you were really a hard-wired dogmatic probabilist at core, you would be unable to make such an update (except perhaps if we model it as self-modification). But, since you *are already* using reasoning which is actually closer in spirit to radical probabilism, you can start to model yourself in this way and use radical-probabilist ideas to guide future updates.
So, I wanted to use this penultimate section for some advice about making the leap.
It All Adds Up to Normality
---------------------------
Radical Probabilism is not a license to update however you want, nor even an invitation to massively change the way you update. It is primarily a new way to understand what you are already doing. Yes, it's possible that viewing things through this lens (rather than the more narrow lens of dogmatic probabilism) will change the way you see things, and as a consequence, change the way you do things. However, you are not (usually) making some sort of mistake by engaging in the sort of Bayesian reasoning you are familiar with -- there is no need to abandon large portions of your thinking.
Instead, try to notice ordinary updates you make which are not perfectly understood as Bayesian updates.
* Calibration corrections are not well-modeled as Bayesian updates. If you say to yourself "I've been overconfident in similar situations", and lower your probability, your shift is better-understood as a fluid update.
* Many instances of "outside view" are not well-modeled in a Bayesian update framework. You've probably seen outside view explained as prior probability. However, you often take the outside view on one of your own arguments, e.g. "I've often made arguments like this and been wrong". This kind of reflection doesn't fit well in the framework of Bayesian updates, but fits fine in a radical-probabilist picture.
* It is often warranted to downgrade the probability of a hypothesis without having an alternative in mind to upgrade. You can start to find a hypothesis suspicious without having any better way of predicting observations. For example, a sequence of surprising events might stick out to you as evidence that your hypothesis is wrong, even though your hypothesis is still the best way that you know to try and predict the data. This is hard to formalize as a Bayesian update. Changes in probability between hypotheses always remain balanced. It's true that you move the probability to a "not the hypotheses I know" category which balances the probability loss, but *it's not true that this category earned the increased probability by predicting the data better*. Instead, you used a set of heuristics which have worked well in the past to decide when to move probabilities around.
Don't Predictably Violate Bayes
-------------------------------
Again, this is not a license to violate Bayes' Rule whenever you feel like it.
A radical probabilist should obey Bayes' Law in expectation, in the following sense:
If some evidence E or ¬E is bound to be observed by time m>n, then the following should hold:
E_n(P_m(H)|E) = P_n(H|E)

And the same for ¬E. In other words, you should not expect your updated beliefs to differ from your conditional probabilities on average.
(*You should suspect from the fact that I'm not proving this one that I'm playing a bit fast and loose -- whether this law holds may depend on the formalization of radical probabilism, and it probably needs some extra conditions I haven't stated, such as P(E)>0.*)
And remember, every update is a Bayesian update, with the right virtual evidence.
Exchange Virtual Evidence
-------------------------
Play around with the epistemic practice Jeffrey suggests. I suspect some of you already do something similar, just not necessarily calling it by this name or looking so closely at what you're doing.
Don't Be So Realist About Your Own Utility Function
---------------------------------------------------
Note that the picture here is quite compatible with what I said in [*An Orthodox Case Against Utility Functions*](https://www.alignmentforum.org/posts/A8iGaZ3uHNNGgJeaD/an-orthodox-case-against-utility-functions)*.* Your utility function need not be computable, and there need not be something in your ontology which you can think of your utility as a function of. All you need are utility *expectations*, and the ability to update those expectations. Radical Probabilism adds a further twist: you don't need to be able to predict those updates ahead of time; indeed, you probably can't. Your values aren't tied to a function, but rather, are tied to your trust in the ongoing process of reasoning which refines and extends those values (very much like the self-trust discussed in the section on conservation of expected evidence).
Not So Radical After All
------------------------
And remember, every update *is* a Bayesian update, with the right virtual evidence.
Recommended Reading
===================
[Diachronic Coherence and Radical Probabilism](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.194.482&rep=rep1&type=pdf), Brian Skyrms
* This paper is really nice in that it constructs Radical Probabilism from the ground up, rather than starting with regular probability theory and relaxing it. It provides a view in which diachronic coherence is foundational, and regular one-time-slice probabilistic coherence is derived. Like logical induction, it rests on a market metaphor. It also briefly covers the argument that radical-probabilism beliefs must have a convergence property.
[Radical Probabilism and Bayesian Conditioning](http://personal.lse.ac.uk/bradLEYR/pdf/rad.prob4.pdf), Richard Bradley
* This is a more thorough comparison of radical probabilism to standard bayesian probabilism, which breaks down the departure carefully, while covering the fundamentals of radical probabilism. In addition to Bayesian conditioning and Jeffrey conditioning, it introduces Adams conditioning, a new type of conditioning which will be valid in many cases (for the same sort of reason as why Jeffrey conditioning or Bayesian conditioning can be valid). He contends that there are, nonetheless, many more ways to update beyond these; and, he illustrates this with a purported example where none of those updates seems to be the correct one.
[Epistemology Probabilized](http://www.princeton.edu/~bayesway/IPMU.pdf), Richard Jeffrey
* The man himself. This essay focuses mainly on how to update on likelihood ratios rather than directly performing Jeffrey updates (what I called virtual evidence). The motivations are rather practical -- updating on expert advice when you don't know precisely what observations lead to that advice.
[I was a Teenage Logical Positivist (Now a Septuagenarian Radical Probabilist)](https://www.princeton.edu/~bayesway/KC.tex.pdf), Richard Jeffrey.
* Richard Jeffrey reflects on his life and philosophy.
Probabilistic Reasoning in Intelligent Systems, Judea Pearl.
* See especially chapter 2, especially 2.2.2 and 2.3.3.
[Logical Induction](https://arxiv.org/abs/1609.03543), Garrabrant et al.
\*: Jeffrey actually used this phrase. See *I was a Teenage Logical Positivist*, linked above. |
9dcae5fb-cf26-4373-b326-ed6955e01ff9 | trentmkelly/LessWrong-43k | LessWrong | Google Gemini Announced
Google just announced Gemini, and Hassabis claims that "in each of the 50 different subject areas that we tested it on, it's as good as the best expert humans in those areas"
> State-of-the-art performance
> We've been rigorously testing our Gemini models and evaluating their performance on a wide variety of tasks. From natural image, audio and video understanding to mathematical reasoning, Gemini Ultra’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.
> With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.
> Our new benchmark approach to MMLU enables Gemini to use its reasoning capabilities to think more carefully before answering difficult questions, leading to significant improvements over just using its first impression.
It also seems like it can understand video, which is new for multimodal models (GPT-4 cannot do this currently). |
7922c843-2e02-46fb-811c-76eba2f9cb37 | trentmkelly/LessWrong-43k | LessWrong | Open thread, Dec. 19 - Dec. 25, 2016
If it's worth saying, but not worth its own post, then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting. |
47d79e7d-b64f-4704-bfb2-d282786c71c9 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Inframeasures and Domain Theory
*Math prerequisites for this post: A Whole Lot of topology, some poset/lattice knowledge, have read "*[*Basic Inframeasure Theory*](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/YAa4qcMyoucRS2Ykr)*", just enough category theory to be familiar with functors, and other minor bits and pieces.*
Domain theory is the branch of math that's like "so, what sort of topological space is a Haskell type, anyways? How do we provide a semantics for computation?"
And Inframeasure theory is about a generalization of measures which seems to hold promise for addressing several alignment issues.
Accordingly, it would be nifty if inframeasures were the sort of thing that could be worked with in Haskell. I don't know Haskell. But I do know a decent chunk of domain theory, so I can at least check how well inframeasures interface with that branch of math, as a proxy for actual computer implementation.
The major implications from this post are:
1: Yes, inframeasures do interface with domain theory quite nicely, and land in the part of it that might be feasible to implement on a computer (the ω-BC domains)
2: You can have inframeasure-like uncertainty over **any Haskell type**. This took some substantial generalization work, because Haskell types, as topological spaces, are very very different from the sorts of topological spaces we were working with before. Perfect for representing uncertainty about computations!
3: "take the space of inframeasures over this other type" is a well-behaved type constructor, so whatever crazy self-referential type signature you want to whip up with it (and the other usual type constructors), it'll work perfectly well. Oh, it's also a monad.
This post was cranked out in minimal time without editing; feel free to get in contact with me if there's something that needs clarification (there are probably things which need clarification).
### **Notation Conventions:**
This section is best to refer back to if you're like "I know that symbol was defined *somewhere*, but it's hard to find where". This mostly just covers the stuff introduced at one point in this post and repeatedly reused. If a symbol gets introduced for a paragraph and doesn't get reused, it won't show up here.
For ordering, I reserve ≥ and ≤ for comparing numbers in R (or [0,1] or [−∞,∞] or (−∞,∞]).
For orderings on posets which are *not* the set of numbers, I use ⊑. ⊓ is infinima, ⊔ is suprema. ⨆↑ is supremum of a directed set. As is usual, ⊤ and ⊥ are the top and bottom elements of a poset if they exist.
≪ is used for the approximation relation on a poset. x≪y if all directed sets with a supremum of y or more contain an element that lies above x. More on this concept later. It's like a more restrictive version of ⊑.
D,E are used for domains or posets, while X and Y are used for generic topological spaces that may not be equipped with a partial order.
For type signatures, f:X→Y is a function from the space X to the space Y, assumed continuous by default unless we specifically say otherwise. If we say "f is a measurable function X→Y" or something like that, it might not be continuous, but if it's not otherwise specified, always assume continuity.
(X→Y) is the space of continuous functions from X to Y. This is often used for something like (X→R) and (X→[0,1]), where we're equipping R and [0,1] with the usual topology.
The notation [D→E], or [X→[0,1]], or stuff like that, has the brackets meaning we're treating the function space in a *specifically* domain-theoretic way. The target space E or [0,1] or (−∞,∞] or whatever *must* be equipped with a partial order. There's a topology on posets called the Scott-topology, the target space is equipped with it. The Scott-topology on [0,1] and the ordinary topology on [0,1] are different (same goes for R and other intervals like that) so elements of [X→R] may *not* be elements of (X→R), because R is equipped with different topologies in the two cases, altering the notion of "continuous function". Also, the function space is equipped with a partial order. Since the target space E or [0,1] or whatever is a poset, the functions get the ordering f′⊒f iff ∀x:f′(x)⊒f(x).
So, to recap, the brackets mean we're equipping the target space with a partial order and the Scott-topology, and equipping the space of continuous functions with a partial order and the Scott-topology as well.
For numbers, p is generally used for numbers in [0,1], q is generally used for an arbitrary real number, and a is used for nonnegative real numbers.
cq is the constant function which maps every input to the number q.
There's a part where we use 𝒫 and P. The slightly fancier 𝒫 is a subset of a domain D; the simpler P is that subset equipped with the order inherited from D, to make it into its own distinct poset.
ψ is used for inframeasures, interpreted as a function. They've got type signatures like (X→R)→R, or (X→[0,1])→[0,1] (usual topology on the numbers), or [X→(−∞,∞]]→[−∞,∞], or [X→[0,1]]→[0,1] (Scott-topology on the numbers), varying by context. They've also got some other properties besides that.
### **So what is an Inframeasure?**
Most of the motivation for why these things are important is contained in the [Introduction to this Sequence](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/zB4f7QqKhBHa5b37a). So we'll just focus on what sort of creature an inframeasure is, mathematically, trying to emphasize the aspects of it which generalize to our new setting.
For probability distributions μ over some space X, the expectation of a function f:X→R is defined as:
Eμ[f]:=∫fdμ
There's an alternate way of thinking of probability distributions where the most important attribute of a probability distribution is that it's the sort of thingy that you can take expectations of functions with. Under this view, a probability distribution is just a functional of type (X→R)→R, mapping a function to its expectation. Said functional must fulfill the following properties, and then it can be called a probability distribution.
**1:** It must be monotone. Ie, larger functions must be mapped to higher expected values. The order on functions is that f′ is larger than f iff ∀x:f′(x)≥f(x).
**2:** It must be linear.
**3:** It must map the constant-1 function to 1.
And, as it turns out, monotone linear functionals which map the constant-1 function to 1 are exactly those which can be written as taking an expectation w.r.t. a probability distribution. If you wanted to generalize to measures instead of probability distributions, just drop that third condition and keep the first two.
Now, in order to build up to inframeasures, we'll need to generalize to a-measures (short for "affine measures"). If you don't do this, you lose the ability to deal with updates in the infra-Bayes setting, which critically rely on this. An a-measure is a pair of a measure and a nonnegative number, (m,b). That b term is like "guaranteed utility". Again, we can take expectations w.r.t. them, defined as
E(m,b)[f]:=∫fdm+b
Said expectation functional (X→R)→R has the properties:
**1:** It's monotone.
**2:** It's affine. For p∈[0,1], E(m,b)[pf+(1−p)f′]=pE(m,b)[f]+(1−p)E(m,b)[f′]
All affine functions to R look like a linear function to R plus a constant.
**3:** It must map the constant-0 function to 0 or more.
And, lo and behold, all expectation functionals of this form (skipping over some fiddly issues) can be identified with an a-measure.
Inframeasures are sets of a-measures. Sets of probability distributions (which, in our terminology, are called "crisp infradistributions") are already extensively studied in the field of [Imprecise Probability](https://en.wikipedia.org/wiki/Imprecise_probability), but generalizing to sets of a-measures is necessary for maximum generality and properly capturing updates. We can define expectations w.r.t these sets of a-measures. You don't know what a-measure (or probability distribution, if you prefer to think of sets of probability distributions) reality is going to pick from its set of available options Ψ, and you want to be prepared for the worst case, so expectations w.r.t. a set of a-measures are defined as:
E_Ψ[f] := inf_{(m,b)∈Ψ} (∫f dm + b)
Just take the worst-case expectation w.r.t. the available set of a-measures.
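As a toy illustration (my own sketch, for a finite space and a finite, nonempty set of a-measures, which sidesteps all the topological subtleties), this worst-case expectation is directly computable:

```haskell
-- An a-measure on a finite space x: a list of (point, weight) pairs for the
-- measure m, plus the constant b of "guaranteed utility".
type AMeasure x = ([(x, Double)], Double)

-- Expectation of f under a single a-measure: ∫f dm + b.
expectA :: (x -> Double) -> AMeasure x -> Double
expectA f (m, b) = sum [ w * f pt | (pt, w) <- m ] + b

-- Expectation under an inframeasure (here represented as a finite, nonempty
-- set of a-measures): the worst case over the available options.
expectInfra :: (x -> Double) -> [AMeasure x] -> Double
expectInfra f psi = minimum (map (expectA f) psi)
```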
Taking the view where the expectation functional of type (X→R)→R is the most fundamental object, several different sets of a-measures might have the exact same expectations, so you get an equivalence relation on sets of a-measures. Usually, a set of a-measures must fulfill certain properties like closure and convexity to be called an "inframeasure", but the bulk of those properties are *actually* the condition for a set of a-measures to be the largest set in its equivalence class, the "canonical representative" of the equivalence class.
So, for thinking of an inframeasure, you can think of it as an equivalence class of sets of a-measures. Or you can think of it as the "canonical representative", the largest set of a-measures in its equivalence class. Or you can completely neglect the view of it as a set of a-measures and just consider the expectation functional, of type (X→R)→R. That last view, where we just view an inframeasure as a special sort of function of that type signature, will be used through the rest of this post as it generalizes the best. The expectation functional for an inframeasure, denoted by ψ, is:
**1:** Monotone (as usual)
**2:** Concave. If p∈[0,1], then pψ(f)+(1−p)ψ(f′)≤ψ(pf+(1−p)f′)
**3:** Lipschitz. There's some finite constant λ⊙ s.t., for all f,f′, |ψ(f)−ψ(f′)| / sup_x|f(x)−f′(x)| ≤ λ⊙
**4:** The constant-0 function is mapped to 0 or higher.
And this is an inframeasure. Any functional of type (X→R)→R, or (X→[0,1])→[0,1], fulfilling those four properties. If you want to add extra properties to this list to get a more restricted class of these things, you can. They seem to be highly useful for dealing with nonrealizable environments, getting nice decision-theoretic behavior for free, and a few other things.
Well, it's a little more complicated than that. Some complications are:
* So far, we've only been able to define them for spaces X which are Polish (a space is Polish if it can be equipped with a metric, is closed under taking limits, and there's a dense countable subset).
* They can only take the expectations of continuous bounded functions X→R, unlike probability distributions, which can handle measurable functions in general.
* There's a complicated fifth condition related to compactness which shows up if your space X isn't compact.
We'll be neglecting this more complex stuff for now.
### **So what about Domain Theory?**
Domain theory focuses on partially ordered sets with the ordering ⊑ being related to information content somehow, which is usually tied to computation in lazy programming languages or partial functions or stuff like that. The least informative points go on the bottom, the most informative stuff go on the top. If your poset fulfills enough nice properties, it's called a domain. Haskell types are domains.
If you're wondering about the properties to call something a domain, there are three of them, by my count. Different authors disagree on what the word "domain" refers to, but I'll go with the three most basic conditions which kept showing up over and over and over again in my [domain theory textbook](https://www.cs.bham.ac.uk/~axj/pub/papers/handy1.pdf), and which were almost always assumed.
**Condition 1:** Suprema exist for all directed sets.
A directed set is a generalization of a chain. Using D for our poset, and A for a subset of it, A is a directed set when, for any two points in A, there's an upper bound of those two points which is also in A. If you want, you can just think "chain" whenever you see "directed set"; the intuition won't lead you astray. So, this is essentially saying that if you've got a sequence of more-and-more-informative points, there must be a completion of that sequence: a most informative point that contains all the information in said sequence, and nothing else. Supremum for directed sets is denoted ⨆↑ instead of ⨆, because it's handy to notate when you're taking a supremum which is actually guaranteed to exist.
**Condition 2:** There must be a bottom point below everything.
Compare this with how, in Haskell, all types contain ⊥. It's usually used to represent nonterminating computations or undefinedness or the least informative thing. It's also handy for defining partial functions. Just map every input without a well-defined output to ⊥.
**Condition 3:** The poset must be continuous.
This one is rather complicated. Along with the usual information ordering, x⊑y, there's also an approximation relation, x≪y (x approximates y), which is stricter than ⊑. x≪y iff all directed sets with a supremum of y or higher contain an element z s.t x⊑z. Put another way, x≪y if x is an irreducible component of the information in y. If x≪y, then it's impossible to make a sequence of more and more information with y (or something more informative) as a limit, without having that sequence pass by/get more informative than x at some point.
A poset D is continuous if, for all y, y is the supremum of the approximants to y.
∀y∈D:y=⨆↑{x∈D|x≪y}
Pretty much, all pieces of information y should be capable of being created by combining the "substantially less informative" pieces of information which are guaranteed to show up in any attempt to build y from below (the approximants to y). You need this for the topology on your poset to behave nicely.
So, if your poset has got suprema of directed sets (chains), a bottom point, and you can build every point with approximants from below, it's a domain.
**Exercise:** Why is the following poset not continuous?
*(figure of the poset omitted)*
**Haskell Example**
For Haskell semantics, points near the bottom of the domain/type are computations with a lot of unevaluated thunks, basically parts of the computation that haven't finished computing yet. Maximal points in the domain are completed computations, fully specified pieces of data. If you evaluate a thunk, finding out some more about what your piece of data is, you get a higher point in the information ordering. And there's always a bottom.
As an example, the type/domain of infinite bitstrings, under the information ordering, looks like an infinite binary tree.
There's ⊥ as the root node, representing a nonterminating computation where you can't get even a single digit out of it, complete ignorance of what the bitstring is. There's a bunch of branch nodes that are of the form "string of finitely many digits, then a thunk". These elements are like "ok, I can tell you the first six digits of the binary string, but hang on I'm still computing the rest...". And then there's all the leaf nodes, fully computed infinite bitstrings.
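Since the post is implicitly about Haskell types anyway, here's that domain as an actual declaration (a standard construction; the helper names are my own). Laziness supplies exactly the picture above: ⊥ is the root, a partial value like `Zero (One ⊥)` is a branch node ("01, still computing..."), and the fully productive values are the leaves.

```haskell
-- The domain of infinite bitstrings: each constructor reveals one more digit.
data Bits = Zero Bits | One Bits

-- "010101...", a maximal element of the domain.
alternating :: Bits
alternating = Zero (One alternating)

-- Reading a finite prefix only forces finitely many thunks, i.e. it only
-- climbs finitely far up the information ordering.
prefix :: Int -> Bits -> String
prefix 0 _        = ""
prefix n (Zero b) = '0' : prefix (n - 1) b
prefix n (One  b) = '1' : prefix (n - 1) b

main :: IO ()
main = putStrLn (prefix 8 alternating)  -- prints "01010101"
```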
**Functions in Domain Theory**
For a function from a domain to a domain to be continuous, it needs two properties. First, it needs to be monotone. x⊑y→f(x)⊑f(y). More informative inputs lead to more informative outputs.
Second, it must preserve suprema of directed sets/chains. f(⨆↑A) = ⨆↑_{a∈A} f(a) for all directed sets A. See the picture for a non-continuous function.
Btw, terminology note for later. [D→E] or [X→[0,1]] or something like that signifies the space of continuous functions from one space to another, but treated in a domain-theory way. The brackets specify we're doing this. More precisely, the target space (E or [0,1] or whatever) must be a poset, and equipped with the Scott-topology (a topology you can put on posets). This tells you what the continuous functions are, and then you equip the continuous functions with the ordering f′⊒f iff ∀x∈D:f′(x)⊒f(x).
This is relevant because the space of continuous functions (X→[0,1]) is *not* the space of continuous functions [X→[0,1]]. In the former, we're equipping [0,1] with its usual topology, determining what the continuous functions are. In the latter, we're equipping [0,1] with the Scott-topology, so different functions count as continuous now. And we're also going on to equip the function space with a partial order.
**Type Constructor Closure**
One of the most striking features of domains is that certain subcategories of domains are very good at being closed under type constructors. Some type constructors are +, ×, and function space, letting you paste together existing spaces into more complex spaces. Pretty much any topological property worth its salt will be closed under finite products and sums and some other stuff, but function space is way harder to get.
Given spaces X and Y with some nice topological property, the space of continuous functions (X→Y) *very rarely* has said nice property. If X and Y are compact, you can't guarantee the function space is compact. If X and Y are Polish, you can't guarantee that the function space is Polish. But, given domains D and E in some "nice" category of domains, the domain of continuous functions [D→E] is going to be just as "nice".
The most important operation that the "nice" categories of domain are closed under is the bilimit. In domain theory, the category-theoretic notion of limit and colimit coincide (well, kind of, it's complicated), and the resulting thing is called the bilimit. It's how you take a sequence of ever-more-complicated domains and complete the sequence, in a sense. And it's the construction underlying Haskell's ability to define a type in terms of itself. It's very special because it means that, with the tools of domain theory, it's possible to cook up whatever sort of wacky fixtype you want.
We'll work through a very simple example of how this works. The construction process to solve D≅D+D. Stage 1 is starting with a single point. Addition of two domains/posets is "add a single new bottom element below both existing posets", so stage 2 would be 1+1, the poset that looks like a V, the domain of booleans. Stage 3 would be V+V, or a 2-level binary tree with 4 leaves. Stage 4 would be a 3-level binary tree with 8 leaves (two copies of stage 3, with a new root node). And if you take the bilimit of that sequence, you'd end up with the infinite binary tree, the domain of infinite bitstrings. And clearly, if you take two copies of an infinite binary tree, and add a new root node below both of them, you get an infinite binary tree again, so it's isomorphic to adding two copies of itself together.
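Here's a minimal sketch of that same fixtype in Haskell, assuming the usual identification of Haskell's lazy data declarations with the +-adds-a-new-bottom convention used here:

```haskell
-- D ≅ D + D: the partial values ⊥, Inl ⊥, Inr (Inl ⊥), ... are exactly
-- the nodes of the infinite binary tree, and the total values are
-- infinite Inl/Inr streams, i.e. infinite bitstrings.
data D = Inl D | Inr D

alternating :: D
alternating = Inl (Inr alternating)  -- the bitstring 0101...
```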
The general way that domain theory makes fixtypes is starting with a single point at stage 1. Then, substitute the single point into the defining equation, to get a more complicated domain at stage 2. Then substitute the stage 2 domain into the defining equation, to get a more complicated domain at stage 3. There's a sequence of embeddings of each stage into the next, more complex stage, so you can take a bilimit of the sequence, and bam, you have your fixtype.
**Exercise:** Sum, D+E, is "add a new bottom point below both posets". Coalesced sum, D⊕E, is "glue both posets together at their bottom point". Lifting, D⊥, is "introduce one new bottom point". 1 is the domain of a single point, and 2 is the domain of two points, a top and a bottom. Using these type constructors, how would you define the following domains in terms of themselves in such a way that the fixtype construction process makes the indicated poset? Like how the domain of infinite bitstrings/the infinite binary tree is the solution to D≅D+D.
We aren't just limited to the sum types; there are also products and function spaces. For instance, the solution to D≅[D→D]⊥ (take the function space from itself to itself, add a new bottom point) is the space of terms in the lazy lambda calculus.
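A sketch of the standard shallow embedding of that fixtype in Haskell (the names Term, apply, and omega are mine):

```haskell
-- D ≅ [D → D]⊥: the data constructor Fn supplies the lift
-- (Fn ⊥ sits strictly above ⊥).
data Term = Fn (Term -> Term)

apply :: Term -> Term -> Term
apply (Fn f) x = f x

omega :: Term  -- (λx. x x)(λx. x x); forcing it diverges, i.e. omega = ⊥
omega = apply delta delta
  where delta = Fn (\x -> apply x x)
```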
If, say, you wanted to whip up a fixtype for
D≅[D→D]+E
E≅[E×E→D]×2
Or even something much crazier than that, domain theory handles it with no problem. Start with D,E being single points, substitute them in to get stage 2 domains for both of them; working in this way you can build a sequence of more-and-more-detailed D's and E's, take the bilimit of both sequences, and you have the least solution!
### **Putting It Together**
Inframeasures also have something that could be described as an information ordering. Restricting to just crisp infradistributions/closed convex sets of probability distributions for the time being, the maximal elements with the most information would be individual probability distributions. Elements further down are larger sets of probability distributions, representing more and more uncertainty about what probability distribution accurately captures events. And the bottom node would be complete uncertainty about what probability distribution describes reality, ie, the set of all probability distributions over your space.
So, that's the first inkling that something's up. Inframeasures naturally make a poset, unlike probability distributions. Suprema (for crisp infradistributions) would be intersection of the sets of probability distributions. Infima would be the closed convex hull of the union of the sets of probability distributions. Using ψ for the expectation functionals, we have
ψ′⊒ψ↔∀f:ψ′(f)≥ψ(f)
(⊒ is used for the ordering on domains, ≥ is used specifically when we're dealing with numbers)
This would line up with the type signature for inframeasures (in domain theory) being something like the space [[X→R]→R]. Though this particular type signature won't work, as we'll see later. The monotonicity property (bigger functions get bigger expectations) on inframeasures would fall out for free, because continuous functions from a poset to a poset in domain theory are always monotone. And suprema/infima of inframeasures, from [Less Basic Inframeasure Theory](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/idP5E5XhJGh9T5Yq9), seemed to behave suspiciously like suprema and infima in domain theory.
There is a very important note here. The information ordering on inframeasures is flipped from the usual ordering inframeasures are equipped with. The usual ordering has an equivalence between: infimum (∧), logical and (also denoted by ∧), and set intersection ∩ (of the corresponding sets of probability distributions). And there's an equivalence between supremum (∨), logical or (also ∨), and convex hull of set union ∪.
But, under the information ordering, set intersection would be *supremum*. The sets get smaller and thus have less uncertainty and more information. Accordingly, the notation ⊑ will be reserved for the information ordering, and ⊔,⊓ for domain-theory suprema and infima (intersection and union), respectively. This is so ⪯, ∧, and ∨ can be reserved for the usual ordering, avoiding the notational nightmare of using the same symbol for both the ordering and the reverse ordering.
It's very boring to just go "ooh, infradistributions make a domain!", though. You can't get much out of that. The ultimate goal would be to figure out how to take a domain D, and make a domain of inframeasures over the space D.
If you did this cleanly enough, then "map a type D to the type of inframeasures over D" would end up being a nicely-behaved type constructor in domain theory, and you could whip up whatever sort of weird fixtypes you want with it, and probably cram it all into Haskell somehow, and have inframeasure-y uncertainty over any Haskell type.
This is a big deal because it's a surprisingly big pain in the ass to make the "map a type D to the type of probability distributions over D" type constructor behave sensibly.
There *is* a domain-theory construction which builds the space of probability distributions over another domain. It's called the probabilistic powerdomain. The issue is, probabilistic powerdomain is an operation which leads you outside of the most commonly used cartesian-closed categories of domains (which you need to stay within to cook up the fixtype solutions), and the cartesian-closed categories of domains are exhaustively classified. The problem of showing whether or not there's *some* lesser-used category of domains that's closed under probabilistic powerdomain is called the Jung-Tix problem, after the authors of the [original paper.](https://www.cs.bham.ac.uk/~axj/pub/papers/Jung-Tix-1998-The-troublesome-probabilistic-powerdomain.pdf)
There's a generalization of domains called [QCB spaces](https://ncatlab.org/nlab/files/BattenfeldDomains.pdf), and apparently a sub-category of them is closed under probabilistic powerdomain and has strong enough closure properties to support the fixtype construction process. So domain theory (broadly construed) has pretty much figured out how to handle probabilistic choice in computations since the original Jung-Tix paper was written. But you have to go rather far afield to do it.
The key problem with probabilistic powerdomain is that it destroys all the lattice-like structure present in a domain, and the best-behaved categories of domains with the strongest connections to computation are very lattice-like. Fortunately, the space of inframeasures *does* have a lot of lattice-like structure that the space of measures lacks. So, one might hope that one of the nice and commonly used categories of domain *is* closed under the "space of inframeasures" type constructor, for practical implementation.
As it turns out, yes, this can be done. There are problems to overcome along the way, though.
### **Problems and Plans**
**Problem 1:** The type signature of an inframeasure over X, in a domain-theory sense, would be something like an element of the space [[X→[0,1]]→[0,1]] or [[X→R]→R]. Up till now, inframeasures could only take expectations of bounded continuous functions X→R (or X→[0,1]), where R or [0,1] is equipped with the usual topology. Domain theory would equip [0,1] or R with the Scott-topology when making the function space, thus changing the notion of "continuous function". And also, there's no good support for the notion of "bounded function" in domain theory. And *also*, R isn't a legit domain. Closure under suprema of chains demands that ∞ be added, and the existence of a bottom element demands that −∞ be added. We need to sort out fiddly issues of what sorts of functions we can even take the expectation of in the first place.
**Problem 2:** Inframeasures have only been defined on Polish spaces up till now. Polish spaces are the usual setting for probability theory. And domains are *very very* not-Polish. They're T0 spaces, about as badly behaved as a topological space can be, while Polish spaces are T6, about as nicely behaved as a topological space can be. This is going to take some serious generalization work.
**Problem 3:** Not *all* the elements of that function space are suitable to be an inframeasure. Inframeasures are functionals which have some additional properties like concavity, which wouldn't show up for an arbitrary continuous function. If we wanted to build a space of inframeasures and show it was nicely behaved from a domain-theory standpoint, we'd have to craft it as some sort of subset or quotient of the full function space, which is more finicky to pull off than it appears at first.
**Problem 4:** There's a lot of extra properties inframeasures can have, if you want to restrict to particularly nicely-behaved subsets of them. We could restrict our attention to infradistributions (which must map the constant-0 function to 0 and constant-1 function to 1). And there's also properties you can stack on top of that like 1-Lipschitzness, homogeneity, C-additivity, cohomogeneity, crispness, and sharpness (definitions not relevant here; they're present in [Less Basic Inframeasure Theory](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/idP5E5XhJGh9T5Yq9)). We could just as well imagine building domains corresponding to those, to make type constructors like "build the type of crisp infradistributions over this other type". Plus, there's the issue of whether we're using the [[X→[0,1]]→[0,1]] space, or the [[X→R]→R] space. We *really* don't want to have to duplicate all the work ten times over in proving that these more restricted processes all make nice domains.
The game plan here is:
**Step 1:** Inframeasures take as input continuous bounded functions in X→[0,1] (or R) where [0,1] has the usual topology. The domain-theory space of continuous functions [X→[0,1]] (or R), has [0,1] or R equipped with the Scott-topology, and doesn't talk about boundedness. This is quite different! Sort that part out somehow, to address problem 1.
**Step 2:** Conjecture what sort of domain a domain of inframeasures would be, so we know which category of domains we're headed for. It should be one of the smaller categories of domains that's more closely linked to what computers work on. Let's call these "nice" domains, as a placeholder.
**Step 3:** Come up with some sort of criteria for a "good" topological space to define inframeasures over that manages to include both the usual sorts of Polish spaces we were looking at before, *and* "nice" domains, so you can apply inframeasures to domains.
**Step 4:** Verify we solved steps 2 and 3 correctly by showing that if X is "good", then [X→[0,1]] is a "nice" domain. Then, since, by step 3, a "nice" domain should be a "good" space, we can apply our result a second time to conclude that [[X→[0,1]]→[0,1]] is a "nice" domain. Where "nice" and "good" are placeholders for something more formal. Also, generalize this to the R type signature. Steps 3 and 4 together would address problem 2, thus extending the notion of "inframeasure" to the sorts of objects studied in domain theory.
**Step 5:** Now that [[X→[0,1]]→[0,1]] (or the same thing with R) is a "nice" domain, see if we can get some sort of master theorem for precisely when a subset of a "nice" domain is also a "nice" domain. All the various sorts of inframeasures and infradistributions are subsets of that whole function space, so it'd be nice to only check the conditions of the master theorem to go "and this subset of it is also a "nice" domain" instead of having to do all the work from scratch each time. This would address problem 4 by making one single theorem we can reuse.
**Step 6:** Show that for inframeasures, and all the more specialized types of them, they all fulfill the conditions from the master theorem in step 5, so they all make "nice" domains. This would solve problem 3, the last of them.
By now, steps 1-6 show that the category of "nice" domains is closed under the "take the space of inframeasures" type constructor, and all the more restrictive variants of it like "take the space of crisp infradistributions". This is because, by step 3, all "nice" domains should be "good" spaces. By step 4, the function space [[D→[0,1]]→[0,1]] (or the R variant) will also be "nice". And then, by steps 5 and 6, the various subsets of the full function space fulfilling various desired properties are also "nice". So we're just on detail cleanup from here on.
**Step 7:** Do a quick routine verification that "take the space of inframeasures" is a locally continuous functor from the category of "nice" domains to itself, as that's the property you have to check to freely use it as a type constructor in fixtype shenanigans with no worries. Also, annoyingly enough, [my domain theory textbook](https://www.cs.bham.ac.uk/~axj/pub/papers/handy1.pdf) doesn't verify closure of the various categories of domains under *all* the other usual type constructors that made an appearance there (lifting, sum, coalesced sum, product, smash product, function space, strict function space, bilimit), only some of them, so get those squared away.
And then we'd be done! By step 7 (the category of "nice" domains is closed under all these type constructors), we can just throw chapter 5 of the [Abramsky textbook](https://www.cs.bham.ac.uk/~axj/pub/papers/handy1.pdf) at it to get that we can build whatever weird type signature we want from these basic building blocks and it'll work out.
Of course, not all these steps will interest the reader, and we'll be going into an awful lot of detail on how each one works, so if you've made it this far, feel free to skip straight to whichever one piques your interest.
### **Step 1: Fix Function Problems**
Inframeasures are defined over Polish spaces with no issues, so let's start there. Our goal is to extend inframeasures over Polish spaces X from taking expectations of bounded continuous functions X→[0,1] (or R) to taking expectations of Scott-continuous functions X→[0,1] (or R).
The Scott-open sets of [0,1] are (besides the whole space and the empty set) all of the form (q,1] for q∈[0,1): the sets of the form "everything strictly above this number". Similar thing for R. This is fewer opens than the usual topology on [0,1] (or R), so it's *easier* for a function X→[0,1] to count as continuous, since the criterion "every preimage of an open is open" is easier to fulfill when you've got fewer opens in the target space.
Going to probability distributions as a toy example to play with, if you had previously invented probability distributions in such a way that you could only take expectations of bounded continuous functions with them, and were trying to extend the expectations to more general functions than that, how could it be done?
Well, for probability distributions, there's a nifty result called [Lebesgue's Dominated Convergence Theorem](https://en.wikipedia.org/wiki/Lebesgue%27s_dominated_convergence_theorem) which (skipping over technical requirements) says that if a sequence of measurable functions fn:X→R limits pointwise to f∗:X→R, then
limn→∞Eμ(fn)=Eμ(f∗)
So, if you could prove a result like the Dominated Convergence Theorem, then you could go "hey, we can get sensible expectations for all functions which are pointwise limits of continuous functions, since no matter what sequence of functions you pick to get there, the limits of the expectation values will agree!". And then you'd have expectations for the Baire class 1 functions (pointwise limits of continuous functions). Apply the theorem again and you get Baire class 2 functions (pointwise limits of Baire class 1 functions). Keep going, take unions when you get to limit ordinals, and by the time you've hit the ordinal ω1, you're done; you've figured out how to take expectations of any measurable function.
Can we do something like this for inframeasures? No. There's no analogue of Lebesgue's Dominated Convergence Theorem. In fact, it explicitly fails, and not way off in the upper realms of the ordinals, it fails down at the start.
**Proposition 1:** *There is an infradistribution Ψ on the space [0,1] and a sequence of continuous functions fn:[0,1]→[0,1] which limit pointwise to a function f∗, yet limn→∞EΨ[fn]=0 and EΨ[f∗]=1.*
Proof: Let Ψ be the set of all probability distributions on [0,1]. The expectation function is f↦minx∈[0,1]f(x), because you can always consider the probability distribution which has all its measure on the point where the function is lowest.
Function fn is defined to dip down in a V pattern: from 1 at 0, down to 0 at 2^(−n), back up to 1 at 2^(−n+1), and it is 1 for all larger numbers. This sequence limits pointwise to the constant-1 function. Since all the fn have a minimum value of 0, they have expectation values of 0, and the constant-1 function has a minimum value of 1, so it has an expectation value of 1.
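Written out explicitly (formula mine, matching the verbal description):

$$f_n(x)=\begin{cases}1-2^n x & 0\le x\le 2^{-n}\\ 2^n x-1 & 2^{-n}\le x\le 2^{-n+1}\\ 1 & 2^{-n+1}\le x\le 1\end{cases}$$

Each $f_n$ is continuous with minimum value 0 (attained at $x=2^{-n}$), while $f_n(x)\to 1$ for every fixed $x$, since $f_n(0)=1$ for all $n$ and any $x>0$ eventually lies above $2^{-n+1}$.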
But we *can* get an analogue of [Beppo Levi's Monotone Convergence Theorem](https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Beppo_Levi's_lemma), for inframeasures!
The Monotone Convergence Theorem says that if you've got a sequence of lower-bounded measurable functions fn:X→(−∞,∞] s.t. fn+1≥fn, then the pointwise limit of them, f∗:X→(−∞,∞], has
limn→∞Eμ[fn]=Eμ[f∗]
There's a bit of a roadblock though. If you start off with bounded continuous functions, then taking pointwise limits of ascending sequences like that gets you... all the lower-bounded lower-semicontinuous functions. And then, taking pointwise limits of sequences like *that*, you get... the lower-bounded lower-semicontinuous functions again. That's as far as we can go with this result!
Also, more excitingly, we can prove another theorem that *all* lower-bounded lower-semicontinuous functions f∗:X→(−∞,∞] have an ascending sequence of bounded continuous functions fn:X→R that limit to f∗. So if we've got an analogue of the Monotone Convergence Theorem for inframeasures, that should let you extend expectations from the bounded continuous functions to the lower-bounded lower-semicontinuous functions.
**Proposition 2:** *If X is a metrizable space, then the pointwise limit of a sequence of lower-bounded lower-semicontinuous functions fn:X→(−∞,∞] with fn+1≥fn is lower-bounded and lower-semicontinuous.*
**Proposition 3:** *If X is a metrizable space, then given any lower-bounded lower-semicontinuous function f∗:X→(−∞,∞], there exists a sequence of bounded continuous functions fn:X→R s.t. fn+1≥fn and the fn limit pointwise to f∗.*
**Theorem 1/Monotone Convergence Theorem For Inframeasures:** *Given X a Polish space, Ψ an inframeasure set over X, f∗ a lower-bounded lower-semicontinuous function X→(−∞,∞], and {fn}n∈N an ascending sequence of lower-bounded lower-semicontinuous functions X→(−∞,∞] which limit pointwise to f∗, then limn→∞EΨ[fn]=EΨ[f∗].*
Proposition 3 and Theorem 1 imply that you can extend expectations to lower-bounded lower-semicontinuous functions X→(−∞,∞], not just bounded continuous functions. The bounded continuous functions all have their expectation values well-defined. By Proposition 3, you can take any lower-bounded lower-semicontinuous function and find continuous bounded functions to limit to it from below. By Theorem 1, this lets you uniquely define the expectation value because no matter how you limit to it from below with continuous functions (with well-defined values), you've got the same limit value.
Well... why do we care about this? As it amazingly turns out, lower-semicontinuous functions X→[0,1] or X→(−∞,∞] are *exactly* the Scott-continuous functions X→[0,1] or X→(−∞,∞]! So we were able to generalize precisely far enough to take expectations of the sorts of functions that domain theory says we should be able to take expectations of.
In fact, it's even better than that! Remember how inframeasures (as we'd attempt to define them in domain theory) would be in [[X→[0,1]]→[0,1]], or [[X→R]→R] (with the Scott-topology) or something like that?
Well, [X→[0,1]] and [X→R] are posets. And the interval [0,1] (or R) is a poset. The Scott-continuous functions from one poset to another are those which are monotone, and preserve suprema of chains.
So, we can ask "are inframeasures Scott-continuous functions?" Monotonicity is one of the defining conditions on an inframeasure, so we've got that. And Theorem 1 is effectively saying "oh hey, your inframeasure function ψ:[X→[0,1]]→[0,1] (or the other type signature) happens to preserve suprema of chains of functions". So yes, inframeasures *must* be Scott-continuous.
It's a perfect match! We can extend expectations exactly far enough to catch the Scott-continuous functions X→[0,1] (or the other type signature), and our Monotone Convergence Theorem for inframeasures is saying precisely that classically-defined inframeasures happen to be Scott-continuous functions in [[X→[0,1]]→[0,1]]. A lovely match to domain theory.
These results *do* point towards the function space [[X→R]→R] being the wrong choice for domain theory, however. Instead, you'd want the function space that inframeasures lie in to be [[X→(−∞,∞]]→[−∞,∞]]. The functions you're taking expectations of must have a lower bound, so −∞ is not legit as an input-function value. But also, domains need bottom elements and suprema of chains, which explains why −∞ and ∞ show up on the outside: they make it so the function space has a bottom and a top.
### **Step 2: Defining Nice Domains**
Of the various categories of domains, let's narrow down what we might mean by "nice".
First up, there's a classification theorem (Theorem 4.3.3 in [Abramsky](https://www.cs.bham.ac.uk/~axj/pub/papers/handy1.pdf)) that says that any cartesian-closed category of domains is a subcategory of either the FS-domains or the L-domains. We won't get into definitions here, but roughly, if you make a function space between two domains [D→E], it might *not* be a domain. But if D is an FS-domain then the whole function space will be an FS-domain, and if E is an L-domain, then the whole function space will be an L-domain. Since our space of inframeasures over a topological space X would be a subset of [[X→[0,1]]→[0,1]] (or the other type signature), clearly, our best bet is not the FS-domains, as their good behavior relies on being the input space. We want the L-domains, as their good behavior relies on being the output space. And L is short for "lattice-like", and [0,1] (or [−∞,∞]) is a complete lattice.
Ok, so L-domains. The lattice-like domains. Maybe we can hope for some nice property that makes it easier to work with on a computer? There's two of those. ω-continuity, and algebraicity.
Algebraicity (which won't be explained here) won't work because [0,1] and [−∞,∞] aren't algebraic, and that contaminates everything else.
Ok, what about ω-continuity? What's that? Well, continuity was
∀y∈D:y=⨆↑{x∈D|x≪y}
You can make all points from below using the stuff that approximates it. ω-continuity says you can do this with a *countable* subset. If there's a countable set B⊆D s.t.
∀y∈D:y=⨆↑{x∈B|x≪y}
then D is ω-continuous. It's a "you only need countably many pieces of information to build every other point from below" condition. Which is essential for working with a domain on a computer, since computers can only handle finite bitstrings and there's only countably many of those to go around!
It's highly instructive to compare this to separability. Separability of a metric space X is "there's a B⊆X s.t. B is countable, and every point in X can be built as a limit of points from B". Think the real number line and the rationals for this.
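To make this concrete with the earlier Haskell bitstring sketch (illustrative; `approximants` is my name, reusing the Bit type from before):

```haskell
-- Every bitstring is the directed supremum of its finite-prefix
-- approximants, and there are only countably many finite prefixes:
-- that countable set is a basis witnessing ω-continuity.
approximants :: [Bit] -> [[Bit]]
approximants s = [ take n s ++ undefined | n <- [0 ..] ]
```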
Ok, so ω-continuity is a yes, algebraicity is a no, we're down to the category of ω-L domains. The ω prefix means those domains are ω-continuous.
Still more choices to make, though. L-domains have a weak lattice-like property, but the BC-domains are more lattice-like than that. And then there's LAT, the continuous complete lattices, which are as lattice-y as possible.
The category LAT is a bad choice though, because + (add a new bottom element below both of your posets) doesn't preserve being a complete lattice. And + is a pretty damn important type constructor to have. So that one's out.
Accordingly, our notion of "nice" domain will be ω-BC domains. They've got a countable basis so they're highly suitable for working with on a computer, are as lattice-like as we can get without forbidding sum types, and all the Haskell constructions lie in here. They're perfect! We'll use them.
Er... what does it even mean to be a BC-domain, though? Well, BC stands for bounded-complete. So here's the deal. If some A⊆D (A isn't necessarily a directed set here) has an upper bound, or A is a chain, then the supremum (least upper bound) of A exists. This is where the name "bounded complete" comes from: if there's an upper bound to your set, it has a supremum. Also, if A is nonempty, then the infimum of A must exist (the supremum of the lower bounds). Compare to a complete lattice, where all suprema and infima exist.
Pretty much, BC-domains all look like you took a complete lattice where you can't build ⊤ from below with suprema of chains, and ripped the top point off of it. This doesn't affect any infima except the infimum of the empty set, which is the top point; so that's the only infimum that stops existing. And it doesn't affect the existence of any suprema except for the sets whose supremum is ⊤. But if there's an upper bound that's not ⊤, then that upper bound is still hanging around and you can make a least upper bound.
### **Step 3: Defining Good Spaces**
Next up, coming up with a definition for a "good" space. It's very hard to give a linear account of why it's a good definition for a "good" space, because the starting attempt was heavily motivated by fiddly details of how the approximation order works for function spaces in domain theory, and then I went through a few iterations of finding out that my definitions got me pretty far, but not far enough, so I had to make things stronger. I'll give the motivation my best shot.
Given a topological space X, what properties must it have to turn [X→[0,1]] into an ω-BC domain? Our function space will be either [[X→[0,1]]→[0,1]] or [[X→(−∞,∞]]→[−∞,∞]], and [−∞,∞] is topologically equivalent to [0,1], so this is a very important desideratum to have.
The hardest part of this to get, it turns out, is the ω-continuity part. We need a countable collection of functions in the function space which are rich enough to build every other function from below.
From riffling through the [trusty textbook](https://www.cs.bham.ac.uk/~axj/pub/papers/handy1.pdf), (start of Chapter 4), the way this would work in the function space [D→[0,1]] (if D was a domain) is as follows: You've got a nice countable collection of open sets O⊆D, and a countable subset of [0,1] which lets you build everything from below (the rational numbers). The countable collection of functions that you can build everything from are of the form (O↘q), the function that maps the open set O to the rational number q, and maps everything else to 0. Oh, and the finite suprema of such.
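In symbols, the step function just described is:

$$(O \searrow q)(x)=\begin{cases}q & x \in O\\ 0 & x \notin O\end{cases}$$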
As it turns out, this can be adapted to arbitrary topological spaces X. The key part is the "countable collection of open sets".
So, our first restriction on X to be "good" will be second-countability. There should be countably many open sets s.t. you can build every open set in X by unioning those basic ones together. That's what second-countability is. All ω-domains are second-countable, and so are Polish spaces, so we're doing good here.
When you dig into the proof a little more, it turns out that second-countability isn't enough; you also really need local compactness. The problem with local compactness is that if X isn't Hausdorff (and domains aren't Hausdorff), then the various definitions of local compactness which used to be equivalent stop being equivalent.
The specific variant of local compactness you need is that for all open sets O and points x∈O, there's an open set U and compact set K s.t. x∈U⊆K⊆O. Every neighborhood of a point (the O) contains a compact neighborhood of that point (the K).
And... Polish spaces aren't locally compact in general. Ouch. Domains always are. Importantly, this does *not* mean there's a problem defining inframeasures on Polish spaces in general. It *does* mean there's problems with turning them into a nice pretty domain.
Ok, so our two criteria are "second-countable" and "locally compact". Anything else?
Well, this gets you pretty far, but not far enough. As it turns out, if you mandate that X is compact, it solves about 3 different fatal problems at the same time. Some of them would be hard to explain, but there's an easy one to explain.
Remember how, back in Step 1, we could only handle lower-bounded Scott-continuous functions X→(−∞,∞]? Well, it's possible that you could have a function that's unbounded below, but still Scott-continuous. But if X is a compact space, then your function can't possibly be unbounded below, you get boundedness for free, and we really needed that. If X is compact, then the space [X→(−∞,∞]] just has exactly the bounded-below Scott-continuous/lower-semicontinuous functions in it, which are exactly the ones that our Step 1 results justify us taking the expectations of.
Confusingly enough, compactness does *not* imply local compactness in the sense we've defined it.
Domains are, fortunately, compact. This is because all open sets in a domain are closed upwards, and domains have a bottom element. So any open cover of a domain must have an open set which includes the bottom element, which then contains the rest of the set since open sets are closed up. Bam, that's a finite subcover, the domain is compact.
And, well, I guess we're down to compact Polish spaces now, as the only Polish spaces which fulfill these three conditions. Those are the really nice ones we started off our journey with in [Basic Inframeasure Theory](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/YAa4qcMyoucRS2Ykr) though. No problems here.
So, second-countability, local compactness, and compactness. Is that enough for a space to be "good"?
*Possibly.* It might very well be. But when I was grinding through the theorems, there was this one key step regarding concavity that didn't work with those three properties alone, and I needed something *else*. It's possible that that one key step of the proof might not require anything other than those three properties. But I had to assume a fourth property to make it work, which is a stronger version of local compactness.
It's not any topological property I've ever seen before, so I'm going to call it LHC, for locally hull-compact.
**Definition 1: LHC-space**
A space X is LHC, locally hull compact, if there's a base B closed under finite intersection, and a function K:B→K(X), mapping an open set from the base to a compact subset of X (the "compact hull"), which fulfills the following three properties.
**1:** ∀U∈B:U⊆K(U)
**2:** ∀U1,U2∈B:U1⊆U2→K(U1)⊆K(U2)
**3:** For any open set O and point x∈O, ∃U∈B:x∈U⊆K(U)⊆O
Those first two properties are two of the three for a [hull operator](https://en.wikipedia.org/wiki/Closure_operator). The third property of a hull operator, K(K(U))=K(U), can be dropped because we're only applying K to the open sets from the base, not to everything.
Intuitively, K(U) is a compact set that fits tightly around the open set U from the base. We can't just go "K(U) is the smallest compact set with U as a subset", because once we leave Hausdorff spaces, the intersection of compact sets might not be compact. But K(U) should be bigger than U. And if you pick a bigger set from the base, the compact hull should be bigger. And K should be a "tight enough fit" around U that this compact hull witnesses local compactness: given a point x in an open set O, you should always be able to find an open set from the topology base that contains x and is small enough that its compact hull also fits in O.
And so, we get LHC-spaces. Locally hull-compact. A space is LHC if it has a base closed under finite intersection, and a compact hull operator K on the base which can be used to witness local compactness.
As it turns out, BC domains are LHC-spaces. And compact Polish spaces are LHC-spaces as well, because you can just take K to mean "closure".
So, that's how "good" spaces, where the space of infradistributions on them is an ω-BC domain, finally cash out. A space that's compact, second-countable, and LHC (locally hull-compact).
**Proposition 4:** *All compact Polish spaces are compact second-countable LHC spaces.*
**Proposition 5:** *All open subsets of ω-BC domains are second-countable LHC spaces.*
**Corollary 1:** *All ω-BC domains are compact second-countable LHC spaces.*
Woo! We've got a definition for "good space" now! Compact second-countable LHC spaces. And it includes the most essential sorts of Polish spaces that we use all the time (the compact ones), *and* all the domains we're trying to define inframeasures over!
### **Step 4: The Function Space is an ω-BC Domain**
Ok, so we should be able to prove that the function space [[X→[0,1]]→[0,1]] is an ω-BC domain if X is compact second-countable LHC, right? Yes! With a bit more work, we can even extend it to the function space [[X→(−∞,∞]]→[−∞,∞]].
**Theorem 2:** *If X is second-countable and locally compact, then [X→[0,1]] is an ω-BC domain, as is [X→[−∞,∞]].*
**Corollary 2:** *The space [[X→[0,1]]→[0,1]] is an ω-BC domain with top element when X is second-countable and locally compact.*
**Proposition 6:** *The space [[X→(−∞,∞]]→[−∞,∞]] is an ω-BC domain with top element when X is compact, second-countable, and locally compact.*
Theorem 2 is where the bulk of the work is. Most of showing the space is an ω-BC domain is the ω-continuity part, which requires the second-countability and local compactness properties to properly invoke the proof path from the domain theory textbook about how to approximate all the functions with only countably many simple functions, which takes a while. The [−∞,∞] case holds because it's topologically identical to [0,1], so we only need to prove things once.
Corollary 2 is easy because, by Theorem 2, [X→[0,1]] is an ω-BC domain. By Corollary 1, this is a compact second-countable LHC space, so we can just invoke Theorem 2 a second time.
The proof path for Proposition 6 (dealing with the R-like type signature) is a little more tricky.
By our arguments in step 1, we're only justified in taking expectations of functions in (−∞,∞] that are bounded below. Having [−∞,∞] there instead of (−∞,∞] would permit some rather pathological inframeasures, so we do need that restriction to (−∞,∞]. Assuming X is compact gets us that any continuous function is bounded below, and also lets us deploy some stronger approximation arguments to carry through the rest of the proof.
The essential core of proving Proposition 6 is managing to show that [X→(−∞,∞]] is an open subset of [X→[−∞,∞]]. Once that's done, we can invoke Proposition 5 to know that [X→(−∞,∞]] fulfills the preconditions as a topological space to apply Theorem 2 to get the whole result.
Corollary 2 and Proposition 6 specify that there's a top element in said domain, because it will be relevant later.
### **Step 5: Master Retraction Theorem**
Our goal now is to figure out when a subset of an ω-BC domain is an ω-BC domain, so we have a condition to shoot for to establish that the space of inframeasures (or whatever else) is an ω-BC domain (since it'd be a nicely-behaved subset of the whole function space). It'll turn out that things go smoother if we've got a top element present, which is why it's important that Corollary 2 and Proposition 6 reminded us that there was a top element present (making the function space into a complete lattice).
So, if D is an ω-BC domain with a top element, and P is some subset of D, then when is P (the set P equipped with the order inherited from D, P for property) an ω-BC domain?
Well, there are these things called section-retraction pairs. When you've got a small domain P and a big domain D, there's one function that specifies how to embed P in D (the section), and another function that specifies how to crunch D down to P, which acts like taking a quotient (the retraction).
As long as the section and retraction are both continuous, P will end up inheriting a bunch of nice properties from D. Specifically, retractions of ω-BC domains are ω-BC domains. So this is what we're looking for! Subsets of the full function space where we can build a section-retraction pair between the subset and the full function space.
The section s:P→D (the embedding of your subset in the full function space) is something you always have. For it to be continuous, what you need is that your subset P of the full function space is closed under suprema of chains/directed sets. So that's one condition needed.
Getting the retraction r:D→P to be continuous is harder. You need to take some arbitrary element of D and find an element of your subset P that it corresponds to, and do so continuously.
There's effectively only two ways to do this. If your subset P of the full space D is closed under arbitrary nonempty suprema (as computed in D) and contains the bottom point, then you can just map an arbitrary element in D to the greatest element of P that lies *below* it.
But this wouldn't work for inframeasures, their supremum doesn't behave nicely enough to do this. And besides, you'd have to check *exactly* how this function works, you don't get a for-free guarantee that it's continuous.
The second way is that, if your subset P of the full space D is closed under arbitrary nonempty infima and contains the top point, you can map an arbitrary element in D to the least element of P that lies *above* it. This, by contrast, is automatically a continuous function. And it *should* work for inframeasures; their infimum behaves well.
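Written out, this second retraction is (notation mine):

$$r(d)=\bigsqcap\,\{\,p\in P \mid p \sqsupseteq d\,\}$$

The set on the right is nonempty because ⊤∈P, and the infimum lands back in P because P is closed under nonempty infima, so r is well-defined.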
So, that's it. Any subset of the whole space D that's closed under suprema of directed sets and arbitrary nonempty infima, and contains ⊤, is a retraction of D and inherits all its nice properties.
Well, some special sorts of inframeasures (like crisp infradistributions) might not contain the top point of the function space, though! But, y'know how BC-domains are like a continuous lattice but with the top point ripped off? You can just take your subset P which *doesn't* include the top point of D, and add the top point in. Then, P⊤ (your set of interest, plus a top point) will inherit the nice properties of D since it's a retraction. Finally, you just rip the top point off, and you'll still have an ω-BC domain. So, in general, we get
**Theorem 3/Retraction Theorem:** *If D is an ω-BC domain with a top element, and ∅≠P⊆D, then P, the set P equipped with the order inherited from D, is an ω-BC domain if the following two conditions hold.*
**1:** *P is closed under directed suprema.*
**2:** *P is closed under arbitrary nonempty infima.*
This is a suitable master theorem! We now just need to check that our inframeasures (as a subset of the function space) are closed under directed suprema (easy) and arbitrary infima (actually quite hard, because infinite infima of continuous functions do *not* behave like you'd naively think). If we do that, then we know that they make an ω-BC domain.
And if we want to show that some special restricted sorts of inframeasures *also* make an ω-BC domain, even those we haven't invented yet, we just need to check that their defining property is preserved under directed suprema and arbitrary nonempty infima, and it'll automatically work.
### **Step 6: Inframeasures Make Nice Domains**
The basic properties of an inframeasure over some Polish space were: the constant-0 function gets mapped to 0 or higher; the expectation functional must be monotone; it must be Lipschitz; it must be concave; and it must fulfill the CAS property (compact-almost-support, detailed in [Less Basic Inframeasure Theory](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/idP5E5XhJGh9T5Yq9)).
Which of these properties can we preserve or discard? Well, for (Scott-)continuous functions from one poset to another, they will always be monotone. We get this property for free. Since we're assuming our space of interest is compact, CAS is fulfilled for free. Concavity remains critically important, and "constant-0 function goes to 0 or higher" is an easy condition to have.
And then, with Lipschitzness, things go off the rails. In particular, by our Retraction Theorem, we need closure under arbitrary infima. And Lipschitzness is *not* a property preserved under arbitrary infima, only finite infima. An easy way to see this: the infinite infimum of Lipschitz functions (with unbounded Lipschitz constants) might be discontinuous, while the infinite infimum of, say, functions with a Lipschitz constant of 4 or less still has a Lipschitz constant of 4 or less.
The way to solve this is to restrict to a particular fixed Lipschitz constant. I picked 1-Lipschitzness, as that one seems to be the best motivated. [Less Basic Inframeasure Theory](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/idP5E5XhJGh9T5Yq9) (LBIT) says a lot of operations on inframeasures preserve 1-Lipschitzness, and you also need it for the infinite semidirect product (implicit in Proposition 22 of LBIT). Also, for distance metrics on inframeasures (in LBIT), the Lipschitz constant seems to play a similar role to the norm of a point in a vector space.
So, it's possible that the normalization (to turn an inframeasure into an infradistribution) with the best properties is *not* the "map the constant-0 function to 0, and the constant-1 function to 1" normalization, but instead the "map the constant-0 function to 0, and scale the Lipschitz constant of your function to 1" normalization.
Anyways, our final list of the three properties for an element of the function space [[X→(−∞,∞]]→[−∞,∞]] (or [[X→[0,1]]→[0,1]]) to be an inframeasure is:
**Definition 2: Inframeasure (for Domain theory)**
**0:** An element of the space of continuous functions [[X→[0,1]]→[0,1]] or [[X→(−∞,∞]]→[−∞,∞]], for X a compact second-countable LHC space.
**1:** The constant-0 function isn't negative. ψ(c0)≥0.
**2:** Concavity. ∀p∈[0,1],f,f′:ψ(pf+(1−p)f′)≥pψ(f)+(1−p)ψ(f′)
**3:** 1-Lipschitzness. ∀f,f′:|ψ(f)−ψ(f′)|≤supx|f(x)−f′(x)|
1-Lipschitzness and concavity require some conventions about how to deal with functions which might be ∞ for some inputs, but it's all quite addressable.
Btw, this coincides with the preexisting definition for "1-Lipschitz inframeasure" in compact Polish spaces. The old criteria were monotonicity, the constant-0 function moving up, concavity, 1-Lipschitzness, and the CAS property. Clearly, for the new definition, three of the criteria are fulfilled. The CAS property is automatically fulfilled in compact spaces. And Scott-continuous functions (by condition 0) are automatically monotone, so all old criteria are fulfilled.
As for the new criteria defined above, the only one that isn't assumed in the old definition is Scott-continuity. Scott-continuity is monotonicity plus preservation of directed suprema. Monotonicity is one of the old conditions, and Theorem 1 shows preservation of directed suprema, so Scott-continuity can be derived.
Are these three defining conditions for an inframeasure preserved under directed suprema and arbitrary infinima in order to invoke our Retraction Theorem? Well, it's a bit frustrating to show, *especially* the part about preserving concavity under arbitrary intersection (which was the only place I really needed LHC spaces instead of just local compactness), but yes. So we get:
**Theorem 4:** *The subset of [[X→(−∞,∞]]→[−∞,∞]] (or [[X→[0,1]]→[0,1]]) consisting of inframeasures is an ω-BC domain when X is compact, second-countable, and LHC.*
There's a bunch of other sorts of infradistributions, like homogeneous, crisp, sharp, and so on, detailed in [Less Basic Inframeasure Theory, Section 2](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/idP5E5XhJGh9T5Yq9). Are *they* preserved under directed suprema and arbitrary infima?
For the most part yes. Well, possibly not sharpness, that's a bit complicated, more on it in a bit.
**Theorem 5:** *The following properties are all preserved under arbitrary infima and directed suprema, for both R- and [0,1]-inframeasures, if X is compact, second-countable, and LHC.*
ψ(c0)=0
ψ(c1)≥1
ψ(c1)≤1
ψ(c1)=1
∀q∈R:ψ(cq)=q *(supernormalization)*
∀a∈[0,∞):ψ(af)=aψ(f) *(homogeneity)*
∀a∈[0,∞):ψ(1−af)=1−a+aψ(f) *(cohomogeneity)*
∀q∈R:ψ(f+cq)=ψ(f)+q *(C-additivity)*
∀q∈R,a∈[0,∞):ψ(af+cq)=aψ(f)+q *(crispness)*
Pairing this result (and boy is it a long one to show) with the Retraction Theorem, we get that virtually any subclass of inframeasures you want, like crisp infradistributions, *also* makes an ω-BC domain, regardless of the type signature we're using.
Now, here's the problem with sharp infradistributions. An infradistribution is sharp if it's a function of the form f↦minx∈Kf(x), where K is a compact set. It's complete uncertainty over a compact set. For stuff like Hausdorff spaces, the arbitrary inf of sharp infradistributions is also sharp, but I don't know what occurs in general. The basic sketch of the problem is that in compact Hausdorff spaces, any set has a smallest compact superset because the intersection of compact sets is compact, but this doesn't hold in general topological spaces.
### **Step 7: Type Constructor Cleanup**
Ok, given any ω-BC domain D (including Haskell types), the space of inframeasures (or crisp infradistributions, or c-additive inframeasures, or...) over it is also an ω-BC domain. Let's use □MD for the space of inframeasures on D. Though, if you want, you can consider "space of crisp infradistributions" or some type constructor like that, since Theorems 3-5 give you a heck of a lot of freedom to restrict to the space of inframeasures which most interest you.
In order to check whether "take the space of inframeasures" is the sort of type constructor we can build fixtypes with, we first need to verify it's a functor from the category ωBC to ωBC. Yup, we know that if D is an ω-BC domain, then □MD is, that's what all the previous work was about.
But, if we've got a continuous function/morphism g:D→E, what continuous function/morphism □MD→□ME do we get? Well, it's exactly the pushforward along the function g (as discussed in [Basic Inframeasure Theory](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa/p/YAa4qcMyoucRS2Ykr)). Given some g:D→E, with ψ:□MD and f:E→(−∞,∞], the pushforward g∗:□MD→□ME is the function
λψ.λf.ψ(f∘g)
I.e., it maps an input inframeasure ψ to the function λf.ψ(f∘g), which takes an input function f and tells you what expectation value it has, using ψ and g.
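As a minimal Haskell sketch (representing an inframeasure purely by its expectation functional, with Double standing in for [−∞,∞]; the names Infra, expect, and pushforward are mine, not an existing library's):

```haskell
-- A sketch: an inframeasure over x, represented by its expectation functional.
newtype Infra x = Infra { expect :: (x -> Double) -> Double }

-- The pushforward along g just precomposes the test function with g.
pushforward :: (d -> e) -> Infra d -> Infra e
pushforward g (Infra psi) = Infra (\f -> psi (f . g))
```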
**Proposition 7:** *The functor mapping D to □MD and g to g∗ is indeed a functor ωBC→ωBC.*
This is pretty easy to show, we just need to verify that the pushforward of a continuous function g:D→E is indeed a continuous function from the space of inframeasures on D to the space of inframeasures on E, and the usual "preserve identity" and "preserve function composition" properties of a functor. If you want to generalize this to more restricted sorts of inframeasures (like crisp infradistributions), you can do it on your own, it's pretty easy to do. You just need to check the defining properties are preserved through pushforward, which pretty much everything is.
And, there's *one* last technical property we need to check before we can use "space of inframeasures" as a type constructor. Local continuity. Local continuity of a functor F is that the function
g↦F(g):[D→E]→[F(D)→F(E)]
is continuous. In our case it'd be mapping g to the pushforward g∗. This is routine and simple to verify.
**Proposition 8:** *The "space of inframeasures" functor is locally continuous.*
Oh, one last bit of cleanup. The textbook I was using didn't verify that the category of ω-BC domains was closed under all the usual operations of:
* lifting (add a new bottom element to your poset)
* + (take the two posets and add a new bottom element below both of them)
* ⊕ (glue your two posets together at their bottom point)
* × (cartesian product of posets)
* ⊗ (cartesian product, but all pairs of the form (x,⊥) or (⊥,x) are identified with the bottom point)
* function space (make the domain of continuous functions from one domain to the other)
* strict function space (make the domain of just the continuous functions from one domain to the other with the property that f(⊥)=⊥)
* bilimit (the limit/colimit of domain theory)
So we'll need to clean that up.
**Proposition 9:** *The category of ω-BC domains is closed under lifting, +, ⊕, ×, ⊗, function space, strict function space, and bilimit.*
And then, by chapter 5 of the trusty [domain theory textbook](https://www.cs.bham.ac.uk/~axj/pub/papers/handy1.pdf) from Abramsky and Jung, any fixtype equation built with these type constructors (and the "space of inframeasures" type constructor since we verified local continuity) has a unique least solution and we're done! Any weird type you want is now in your grasp, and it'll be an ω-BC domain, which are the sort amenable to computer implementation. Have fun!
EDIT: The "space of inframeasures" type constructor also makes a monad. Just found this out, thanks to Daniel Filan for conjecturing it.
The "unit" function, with type D→□MD, is defined by: (unit(x))(f):=f(x). It works out the value of a function by going "hey, function, what expectation value do you assign to this point?". Another way of thinking about it is that it's mapping a point in a space to the dirac-delta distributon on said point.
And the "bind" function, with type □MD×[D→□ME]→□ME, is the infrakernel pushforward, defined by (f has type E→(−∞,∞] here)
(bind(ψ,K))(f):=ψ(λx.K(x)(f))
And so we have a monad! It's routine to verify the monad identities.
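A minimal Haskell sketch of the monad structure (same hypothetical Infra representation as in the pushforward sketch; readers may recognize this as the shape of the continuation monad):

```haskell
newtype Infra x = Infra { expect :: (x -> Double) -> Double }

-- unit: the dirac-delta inframeasure on a point.
unit :: x -> Infra x
unit x = Infra (\f -> f x)

-- bind: push psi forward along the infrakernel k.
bind :: Infra d -> (d -> Infra e) -> Infra e
bind (Infra psi) k = Infra (\f -> psi (\x -> expect (k x) f))
```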
Proofs for the post are [here](https://www.alignmentforum.org/posts/H5zo4L7yv4bnBgexQ/infra-domain-proofs-1) and [here](https://www.alignmentforum.org/posts/fLRgddjMTBnpbMeiM/infra-domain-proofs-2).
Scaffolded LLMs as natural language computers
Crossposted from my personal blog.
Recently, LLM-based agents have been all the rage -- with projects like AutoGPT showing how easy it is to wrap an LLM in a simple agentic loop and prompt it to achieve real-world tasks. More generally, we can think about the class of 'scaffolded' [1] LLM systems -- which wrap a programmatic scaffold around an LLM core and chain together a number of individual LLM calls to achieve some larger and more complex task than can be accomplished in a single prompt. The idea of scaffolded LLMs is not new, however with GPT4, we have potentially reached a threshold of reliability and instruction following capacity from the base LLM that agents and similar approaches have become viable at scale. What is missing, and urgent, however, is an understanding of the larger picture. Scaffolded LLMs are not just cool toys but actually the substrate of a new type of general-purpose natural language computer.
Take a look at, for instance, the 'generative agent' architecture from a recent paper. The core of the architecture is an LLM that receives instructions and executes natural language tasks. There is a set of prompt templates that specify these tasks and the data for the LLM to operate on. There is a memory that stores a much larger context than can be fed to the LLM, and which can be written to and read from by the compute unit. In short, what has been built looks awfully like this:
What we have essentially done here is reinvented the von-Neumann architecture and, what is more, we have reinvented the general purpose computer. This convergent evolution is not surprising -- the von-Neumann architecture is a very natural abstraction for designing computers. However, if what we have built is a computer, it is a very special sort of computer. Like a digital computer, it is fully general, but what it operates on is not bits, but text. We have a natural language computer which operates on units of natural language text to produce other, more processed units of natural language text.
Calculance: A "Core" Ability
There has been a long-standing gap in the English language for a single word representing the specific ability to perform effective logical operations. Introducing "calculance" to fill this void.
We could posit a priori (or reckon) that intelligence fundamentally arises from two core components: A goal and the ability to "world simulate", termed "calculance".
The English language lacks a dedicated term, equivalent to the German "Rechenfähigkeit", for the capacity to perform calculations and logical operations. Calculance fills this crucial gap.
(I have to stress that "Rechenfähigkeit" doesn't translate to computational power, "Rechenleistung".)
For better understanding, it would loosely correspond to something like this:
Experts often awkwardly refer to FLOPS or computational power, followed by a kind of unspoken understanding, as if to say, "You know what I mean, the good FLOPS!" Although these concepts are related, FLOPS et al don't necessarily indicate the quality of computation. "Calculance," on the other hand, better represents the functional dimension of a computing system, beyond just the number of operations per second.
But again, it is not a computer science term, but a concept, akin to how the word "deep" in Deep Learning captures a performance metric.
Whether exploring human minds or designing intelligent machines, embracing the term could unlock a new level of clarity and precision in describing the inherent interplay of purpose and calculation that produces intelligence.
"The new update to the physics engine has less calculance, allowing it to run smoothly on lower-end hardware."
"The new update to the physics engine reduced its computational demands, allowing it to run smoothly on lower-end hardware."
Just an idea.
(A final note: I mostly like the word because of its sound. It could also serve as a vessel for a concept that would benefit from having its own unique term.) |
736f684b-57c0-4344-8325-14336b9da02a | trentmkelly/LessWrong-43k | LessWrong | Flashcards for AI Safety?
I sometimes struggle to remember the contents of all the articles I've read on AI Safety. Spaced repetition might be helpful, but this requires someone to write flashcards. For Anki, I've found 2 decks, titled "Superintelligence" and "AI Policy".
Do more AI Safety relevant decks exist?
What would be a good strategy for generating useful flashcards? |
832aba9c-26cb-47f4-b0ba-7817ea40e2d4 | trentmkelly/LessWrong-43k | LessWrong | Writing Causal Models Like We Write Programs
Clunc
We’ll start with a made-up programming language called Clunc. The distinguishing feature of clunc is that it combines classes and functions into a single type, called a clunc. It looks like this:
quad = Clunc {
    x = 4
    constant = 3
    linear = 2*x + constant
    result = x*x + linear
}
We could then go to a terminal:
>>> quad.result
27
>>> quad.linear
11
In order to use this clunc like a function, we apply the do() operator. For instance,
>>> quad3 = do(quad, x=2)
… creates a new clunc which is just like quad, except that x is 2 rather than 4:
>>> quad3
Clunc {
    x = 2
    constant = 3
    linear = 2*x + constant
    result = x*x + linear
}
When we query fields of quad3, they reflect the new x-value:
>>> quad3.result
11
>>> quad3.linear
7
There’s no designated “input” or “output”; we can use the do() operator to override any values we please. For instance
>>> quad_zero_linear = do(quad, linear=0)
>>> quad_zero_linear
Clunc {
    x = 4
    constant = 3
    linear = 0
    result = x*x + linear
}
>>> quad_zero_linear.result
16
A few quick notes:
* Clunc is purely clunctional: everything is immutable, and each variable can only be written once within a clunc. No in-place updates.
* Clunc is lazy.
* Variables can be set randomly, e.g. “x = random.normal(0, 1)”.
* The do() operator creates a new clunc instance with the changes applied. If there are any random variables, they are re-sampled within the new clunc. If we want multiple independent samples of a randomized clunc M, then we can call do(M) (without any changes applied) multiple times, as in the example below.
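For instance (an illustrative example of my own, following the syntax above):

noisy = Clunc {
    x = random.normal(0, 1)
    y = 2*x + random.normal(0, 0.25)
}

>>> sample1 = do(noisy)
>>> sample2 = do(noisy)

Here sample1 and sample2 hold independent draws of x and y, while do(noisy, x=0) would pin x at 0 and resample only the noise term in y.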
To make this whole thing Turing complete, we need one more piece: recursion. Cluncs can “call” other cluncs, including themselves:
factorial = Clunc {
    n = 4
    base_result = 1
    recurse_result = do(factorial, n=n-1).result
    result = (n == 0) ? base_result : n * recurse_result
}
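Querying it from the terminal works just as before (24 = 4!, and overriding n re-runs the recursion):

>>> factorial.result
24
>>> do(factorial, n=6).result
720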
… and that’s where things get interesting.
Causal Models
Hopefully the mapping from clunc to probabilistic caus |
e8f6abff-4fd4-4de4-ac42-a681c2b54f40 | trentmkelly/LessWrong-43k | LessWrong | The Lifespan Dilemma
One of our most controversial posts ever was "Torture vs. Dust Specks". Though I can't seem to find the reference, one of the more interesting uses of this dilemma was by a professor whose student said "I'm a utilitarian consequentialist", and the professor said "No you're not" and told them about SPECKS vs. TORTURE, and then the student - to the professor's surprise - chose TORTURE. (Yay student!)
In the spirit of always making these things worse, let me offer a dilemma that might have been more likely to unconvince the student - at least, as a consequentialist, I find the inevitable conclusion much harder to swallow.
I'll start by briefly introducing Parfit's Repugnant Conclusion, sort of a little brother to the main dilemma. Parfit starts with a world full of a million happy people - people with plenty of resources apiece. Next, Parfit says, let's introduce one more person who leads a life barely worth living - but since their life is worth living, adding this person must be a good thing. Now we redistribute the world's resources, making it fairer, which is also a good thing. Then we introduce another person, and another, until finally we've gone to a billion people whose lives are barely at subsistence level. And since (Parfit says) it's obviously better to have a million happy people than a billion people at subsistence level, we've gone in a circle and revealed inconsistent preferences.
My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of barely worth living. In order to voluntarily create a new person, what we need is a life that is worth celebrating or worth birthing, one that contains more good than ill and more happiness than sorrow - otherwise we should reject the step where we choose to birth that person. Once someone is alive, on the other hand, we're obliged to take care of them in a way that we wouldn't be obliged to create them in the first place - and they may choose not to comm |
1af67c65-aa02-47d3-a13c-52ee603011c0 | trentmkelly/LessWrong-43k | LessWrong | AXRP Episode 4 - Risks from Learned Optimization with Evan Hubinger
YouTube link
This podcast is called AXRP, pronounced axe-urp and short for the AI X-risk Research Podcast. Here, I (Daniel Filan) have conversations with researchers about their papers. We discuss the paper and hopefully get a sense of why it's been written and how it might reduce the risk of artificial intelligence causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential.
In machine learning, typically optimization is done to produce a model that performs well according to some metric. Today's episode features Evan Hubinger talking about what happens when the learned model itself is doing optimization in order to perform well, how the goals of the learned model could differ from the goals we used to select the learned model, and what would happen if they did differ.
Daniel Filan: Today, I'll be talking to Evan Hubinger about risks from learned optimization. Evan is a research fellow at the Machine Intelligence Research Institute, was previously an intern at OpenAI working on theoretical AI safety research with Paul Christiano. He's done software engineering work at Google, Yelp and Ripple, and also designed the functional programming language, Coconut. We're going to be talking about the paper "Risks from Learned Optimization in Advanced Machine Learning Systems". The authors are Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse and Scott Garrabrant. So Evan, welcome to AXRP.
Evan Hubinger: Thank you for having me Daniel.
Daniel Filan: So the first thing I wanted to say about this paper is it has a nice glossary at the end with what various terms mean. And I think this is unusual in machine learning papers. So just straight off the bat, I wanted to firstly thank you for that and let potential readers of the paper know that it's maybe easier to read than they might think.
Evan Hubinger: Thank you. We worked hard on that. I think one of the things that we set out to do in the paper was to defi |
0d55d08a-8e39-49fa-b4b0-1a8a02179829 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI impacts and Paul Christiano on takeoff speeds
[Paul Christiano](https://sideways-view.com/2018/02/24/takeoff-speeds/) and [AI Impacts](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/) release posts arguing that a gradual improvement in AI capability is more likely than a discontinuous improvement. |
05479bc2-8d53-49cf-a89e-9a5c490e1789 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI Risk and Survivorship Bias - How Andreessen and LeCun got it wrong
*Purpose of this post/TL;DR: Many people who argue against the existence of large-scale risks associated with AI, including Marc Andreessen or Yann LeCun, use the following argument template (call this the Argument from Overestimation, AFO): "since people were wrong in estimating high risk for \*insert safe technology that we use today\*, AI is probably safe too". I was surprised that this argument is being used by some of the leading figures in the AI Safety debate because as I argue in this paper, its logic relies on the survivorship bias. This, in turn, means that the argument begs the question by grouping AI with a skewed sample of only safe technologies, making it a flawed argument. I conclude that for the sake of improving the current debate on AI Safety, this argument should be abandoned.*
*Epistemic status: While I tend to be more on the “x-risks side” in AI Safety debates (to make this clearer, I was more in agreement with Tegmark and Bengio in the debate I reference), I attempted to give the most charitable reconstructions and strongest responses for AFO and gave concrete examples of the argument in action to avoid the straw-man fallacy. Note also that just because a particular argument in favour of AI safety fails, we are not warranted to conclude that AI is unsafe. The main point of this post is simply that we should not assume safety/unsafety without providing further arguments. I invite feedback and potential improvements on my position in the comments!*
**Argument from Overestimation**
--------------------------------
Consider the following passage from [Marc Andreessen’s essay about concerns regarding AI risk](https://a16z.com/2023/06/06/ai-will-save-the-world/), titled ‘Why AI will Save the World’:
“The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. [Recounts the myth of Prometheus about the technology of fire as an example.] The presumed evolutionary purpose of this mythology is to motivate us to seriously consider potential risks of new technologies – fire, after all, can indeed be used to burn down entire cities. But just as fire was also the foundation of modern civilization as used to keep us warm and safe in a cold and hostile world, this mythology ignores the far greater upside of most – all? – new technologies, and in practice inflames destructive emotion rather than reasoned analysis. Just because premodern man freaked out like this doesn’t mean we have to; we can apply rationality instead.”
A similar sentiment was also expressed by [Yann LeCun in the recent Munk Debate on AI](https://munkdebates.com/debates/artificial-intelligence), listing historical instances where warnings about a new technology’s negative effects were wide of mark:
“Socrates was against writing, he thought people are going to lose their memory. The Catholic church was against the printing press, saying they would lose control of the Dogma. [...] The Ottoman Empire banned the printing press and according to some historian [sic] this is what accelerated their decline.”
Later on, LeCun spells out his argument when reacting to Yoshua Bengio’s statement that AI is an unprecedented technology in its unique capabilities to design and produce its own improved copies:
“That very argument was made for computers, you know, 50 years ago. This is not a new issue. [References a website called Pessimist Archives which keeps a record of newspaper clips with wrong and sometimes absurd predictions regarding the effects of new technologies]. [Take the example of] the train: you’re not gonna take the train if it’s going 50 km/h, you can't breathe at that speed. [...] Everybody has said this kind of thing about every time there was a technological evolution or cultural evolution.”
It seems that the implicit argument in these passages is meant to be something like the following. Call it the Argument from Overestimation (AFO):
**P1** People usually overestimate the risks associated with new technologies.
**P2** AI is a new technology.
**C1** People probably overestimate the risks associated with AI.
The argument may seem inductively strong at first, but take a look at what all the technologies listed above have in common: they all turned out to be safe. This creates problems for LeCun's argument because its logic rests on fallacious reasoning; specifically, it falls prey to survivorship bias.
**Objection from Survivorship Bias**
------------------------------------
Survivorship bias is a cognitive shortcut that occurs when a more easily noticeable subgroup which passes a hidden selection process is mistaken as the entire group (read more [here](https://en.wikipedia.org/wiki/Survivorship_bias) and [here](https://thedecisionlab.com/biases/survivorship-bias)). A hypothetical example can be taken from a medical experiment attempting to estimate the effectiveness of a treatment intervention. Even an intervention which exhibits relatively high success rates (e.g. high survival rates) may actually be relatively ineffective, as the analysis of its effects often excludes people who did not survive long enough to get the treatment -- either because their immune systems were comparatively weaker, they had worse genetic predispositions for the disease etc. Hence, without including the weaker group in the analysis, the survival rate of the drug is most likely positively skewed and too high, as its selection mechanism operates only on people who are naturally more resistant to the disease.
Andreessen’s and LeCun's arguments are based on similarly skewed reasoning. Think again of the technologies mentioned: fire, writing, printing, trains, computers. All of these technologies went through the selection process of "risk warnings about us were overestimated because we are safe technologies”. However, there is a whole group that did not pass this test: unsafe technologies where the high risk assessments turned out to be accurate. If someone said "look, we really shouldn't use airships for mass transport because they are dangerous", they estimated the risk correctly, meaning airships are automatically excluded from being used in AFO. Similar examples of unsafe technologies include certain [food additives which turned out to be carcinogenic](https://www.nrdc.org/bio/nrdc/fda-bans-seven-food-additives-cause-cancer) or specific [pesticides that severely damage the human body](https://earthjustice.org/brief/2021/chlorpyrifos-ban-pesticide-industry-pressure-epa) and [environment](https://www.epa.gov/ingredients-used-pesticide-products/ddt-brief-history-and-status). Hence, AFO suffers from survivorship bias because the "risk overestimation" selection process picks out only technologies which actually turned out to be safe.
Therefore, due to this bias, AFO actually begs the question: it aims to prove AI Safety by comparing AI to a sample of exclusively safe technologies. This means that in reality, AFO actually works more like this:
**P1\*** People usually overestimate risks associated with technologies that turned out to be safe.
**P2** AI is a new technology.
**C1** People probably overestimate the risks associated with AI.
This formulation makes clear why the argument doesn't go through: in order for the argument to be sound, we would have to add the premise that AI turned out to be safe. However, that would beg the question, since the whole point of introducing the parallel was to prove AI Safety. Hence, the argument cannot go through without begging the question, making it a bad argument. Call this critique the Objection from Survivorship Bias.
**Possible Responses to the Objection from Survivorship Bias**
--------------------------------------------------------------
It seems challenging to find responses against this objection. Perhaps one could argue that P1\* historically holds true for most technologies, therefore, it is likely true for AI as well. However, this seems very improbable. This is equivalent to either arguing that (i) most technologies we invent are safe from the get-go or (ii) that inventors (or social institutions or society as a whole) are very accurate at estimating risks. (i) seems improbable because if it were true, it would make it very hard to explain why regulatory processes were ever established. (ii) seems a bit more likely, but at the very least, [individuals are known to be very bad at assessing risk](https://www.science.org/doi/10.1126/science.185.4157.1124) and [larger groups and even political institutions may aggregate the same biases](https://psycnet.apa.org/record/2016-48867-038) at least partially.
But perhaps a better response is that we are allowed to assume that AI will be safe as a by-product of current regulatory practice. Hence, we are protected against the discrete risks from AI, say misinformation, because there are regulations over misinformation, protected against discrimination thanks to discrimination guidelines and so on. I do think this defense works at least partially, since some negative effects of AI may in fact be buffered by existing regulation. For instance, credit applicants have a Right to Explanation when their applications are denied, regardless of whether [the decision was made by a human or an algorithm](https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms/). However, it still seems that this defense rests on some questionable assumptions.
Firstly, why should we assume that non-targeted regulations will behave in a targeted way? AI will likely behave in novel unexpected ways that we will have to regulate for. Secondly, without concrete arguments for why current regulations are good enough for AI, just saying "current regulations are fine" is a form of [confirmation bias](https://en.wikipedia.org/wiki/Confirmation_bias). This is especially true since many regulations for algorithms have actually failed, including [Meta's own engagement algorithm recommending radical content](https://www.pbs.org/wgbh/nova/article/radical-ideas-social-media-algorithms/). Finally, from a more practical standpoint, it is not clear that existing regulations will apply smoothly to AI. For instance, many experts in the EU AI sector point out that as AI is being integrated into more and more products, [businesses are faced with double regulatory obligations](https://dl.acm.org/doi/abs/10.1145/3593013.3594079): is the responsible regulatory body for a product the sector specific institution, an IT-security institution or even a new AI body forthcoming thanks to the AI Act?
In short, without clear guidelines on how to regulate AI *specifically*, there will be a tendency to create legal friction rather than functioning regulation. Hence, overall it seems that AFO cannot really be defended against the Objection from Survivorship Bias.
**Conclusion**
--------------
In conclusion, I have argued that many debates about AI Safety revolve around a flawed Argument from Overestimation which suffers from the Survivorship Bias, using Yann LeCun's arguments from the Munk Debate as an example. I have demonstrated that AFO is question-begging due to this bias, as it implicitly groups AI with a sample of only safe technologies, even though AI safety is the point AFO is trying to prove. Overall, it seems that AFO is a flawed argument and I hope that as the debate on AI Safety progresses, people will be motivated to look for better arguments. All feedback is very welcome in the comments!
*Big thank you to Sam Robinson for valuable comments!*
*Cross-posted from LW:* [*https://www.lesswrong.com/posts/Doucf9k6dcg8n4yPg/ai-risk-and-survivorship-bias-how-andreessen-and-lecun-got*](https://www.lesswrong.com/posts/Doucf9k6dcg8n4yPg/ai-risk-and-survivorship-bias-how-andreessen-and-lecun-got) |
8d8bb61a-adbf-44ca-a837-0d7bc0c58a80 | trentmkelly/LessWrong-43k | LessWrong | Seeking Mechanism Designer for Research into Internalizing Catastrophic Externalities
TL;DR
I need help finding/developing a mech that can reliably elicit honest and effortful risk-estimates from frontier AI labs regarding their models, risk-estimates to be used in risk-priced premiums that labs then pay the government (i.e. a "Pigouvian tax for x-risk"). Current best guess: Bayesian Truth Serum.
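For readers unfamiliar with it, here is a minimal sketch of Prelec-style BTS scoring for a single multiple-choice question (a toy illustration of the mechanism family, not a lab-ready design; the function and all names are mine):

import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    # answers: (n,) ints, respondent i's chosen option in {0..m-1}
    # predictions: (n, m), respondent i's predicted distribution of the
    #   other respondents' answers (rows sum to 1)
    n, m = predictions.shape
    eps = 1e-12
    x_bar = np.bincount(answers, minlength=m) / n        # empirical frequencies
    log_y_bar = np.log(predictions + eps).mean(axis=0)   # log geometric-mean predictions
    # information score: answers that are more common than collectively
    # predicted ("surprisingly common") score well
    info = np.log(x_bar[answers] + eps) - log_y_bar[answers]
    # prediction score: minus KL(empirical frequencies || own prediction)
    pred = (x_bar * (np.log(predictions + eps) - np.log(x_bar + eps))).sum(axis=1)
    return info + alpha * pred

Truth-telling is an equilibrium of this scoring only in the large-n limit, which is part of the difficulty here: adapting it to a handful of labs reporting continuous risk estimates is exactly the open design problem.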
Stretch-goal: find/develop a mech for incenting optimal production of public safety research. Current best guess: Quadratic Financing.
DM me if you're interested in collaborating!
The Project
X-risk poses an extreme judgment-proof problem: threats of ex post punishment for causing an existential catastrophe (or a nationally existential one, or even just a disaster that renders your company insolvent) have little to no deterrent effect. Liability on its own will completely fail to internalize these negative externalities.
Traditionally, risk-priced insurance premiums are used to solve judgment-proofness (turn large ex post costs into small regular ex ante costs). However, insurers are also judgment-proof in the face of x-risk.
I'm developing a regime for insuring these "uninsurable" risks from frontier AI. It's modeled after the arguably very successful liability and insurance regime for nuclear power operators. In two recent workshop papers, I argue we should make foundation model developers liable for a certain class of catastrophic harms/near miss events and:
1. Mandate private insurance for commercially insurable losses (e.g. up to ~$500B in damages)
2. Have the government charge risk-priced premiums for insurance against uninsurable losses (i.e. a "Pigouvian tax for x-risk")
A government agency – through audits, its own forecasts and so on – could (and should) try to make these risk-estimates. However, this will be costly and they will struggle to collapse the information asymmetry between it and the developers it insures. Relying mostly on mechanism design to just incentivize labs to report honest and effortful risk-estimates has a number of advantages:
1. |
d6c553dc-44bc-4774-beb3-1c8d5e21cb83 | trentmkelly/LessWrong-43k | LessWrong | Book review: "Feeling Great" by David Burns
I've never had any "real" mental health problems, but sometimes I feel stressed or guilty or whatever, like everyone, and who doesn’t want to feel more good more often? So a couple months ago I read Feeling Great: The Revolutionary New Treatment for Depression and Anxiety by David Burns (published 2020) on audiobook. I was really glad I did!
I can't comment on how it compares to other psychotherapy books. It's the first one I've ever read, and I kinda came upon it randomly—an acquaintance recommended David Burns's podcast, so I listened to a couple random episodes, and I found them intriguing but confusingly out-of-context, so instead I bought his book which was much better.
But for what it's worth, David Burns's older 1980 book, Feeling Good, is super famous and popular, and apparently there are studies that say that giving people a copy of Feeling Good is as effective as antidepressants, with effects that persist for years (ref). (More discussion on wikipedia.) Also, I just saw that Scott Alexander suggested Feeling Great for people with depression. So of all the psychotherapy books to randomly stumble across, I think I got a pretty legit one!!
I'm not going to talk about everything in the book in this review; I just want to flag a few parts that were highlights for me.
I also couldn't resist throwing in some speculations on the neuroscience of depression at the bottom.
“Classic CBT” stuff
David Burns is, I gather, something of a leader in Cognitive-Behavioral Therapy (CBT). What is CBT? My vague pop-culture stereotyped impression of CBT has been something like:
* The patient says “I’m a terrible person and everyone hates me”.
* Then the therapist and patient have a discussion to try to tease out (1) whether that’s actually true (very often it’s not), (2) even if it is true, whether it’s a good reason to feel miserable, as opposed to, y’know, self-acceptance, trying to solve the problem, etc., (3) given 1 and 2, what are good strategies to actually stop f |
f6bd3a20-fab3-4f13-b40d-ed7e7c1b729d | StampyAI/alignment-research-dataset/arxiv | Arxiv | What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes
1 Introduction
---------------
Explaining the behaviour of machine learning algorithms or AI remains a key challenge in machine learning. With the guidelines of the European Union's General Data Protection Regulation (GDPR) [EUdataregulations2018] calling for explainable AI, it has come to the machine learning community's attention that a better understanding of black-box models is needed.
Despite substantial research in explaining the behaviour of supervised machine learning, it is unclear what should constitute an explanation for Reinforcement Learning (RL). Current works in explainable reinforcement learning use techniques similar to those used to explain a supervised classifier [Mott2019]; as such, their explanations highlight what in the current environment drives an agent to take an action, but not what the agent expects the action to achieve. Frequently, the consequences of the agent's actions are not immediate, and a chain of many decisions all contribute to a single desired outcome. This paper addresses this problem by asking what chain of events the agent intended to happen as a result of a particular action choice.

The importance of such intended-outcome explanations in day-to-day life is well known in psychology, with Malle [Malle2001] estimating that around 70% of day-to-day explanations are intent-based. While the notion of intent makes little sense in the context of supervised classification, it is directly applicable to agent-based reasoning, and it is perhaps surprising that we are the first work in explainable RL to directly address this. Recent work [langley2019explainable] has called for these introspective abilities, which it refers to as "Explainable Agency".

We present a simple addition to standard value-based RL frameworks which allows us to obtain a projection of predicted future trajectories from a current observation and proposed action. Unlike existing approaches such as [Waa2018Contrastive], which predict an agent's future states by rerunning repeated forward simulations from the same initial state, we instead recover the sum of past events, weighted by the importance the agent put on them when learning its Q-values, that led to the agent's current behaviour, and we mathematically guarantee that this sum is consistent with the agent's Q-values (see Figure 1). This allows for local interpretation of the agent's intention based on its behavioural policy. We hope our method will help RL practitioners trying to understand agent behaviour for model debugging and diagnostics, and potentially lead to more reliable and trustworthy RL systems.
Mismatches between learnt Q-values and the forward simulations of [Waa2018Contrastive] exist for a variety of reasons: stochasticity of the environment; mismatches between training and test environments (particularly common in multi-agent systems, where the behaviour of other agents can change); systematic biases in the learning method [hasselt2010double]; and the use of replay buffers, particularly hindsight experience replay (HER) [andrychowicz2018hindsight]. Our representations highlight what is learnt by the agent at train time, rather than the behaviour of the agent at test time, allowing them to provide insights into some of these failures.
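As background intuition for how such visit-count 'beliefs' can be tied to Q-values, here is a tabular sketch in the style of the successor representation (an illustration of mine, not the paper's exact construction; it assumes a state-based reward received on arrival):

import numpy as np

# Learn, alongside Q, a tensor N[s, a] of expected discounted future
# visit-counts over states under the greedy policy. If the reward is a
# function of the arrived-at state, Q[s, a] = N[s, a] @ r at the joint
# fixed point, so the visit-counts are consistent with the Q-values.
n_states, n_actions, gamma, lr = 32, 4, 0.99, 0.1
Q = np.zeros((n_states, n_actions))
N = np.zeros((n_states, n_actions, n_states))

def td_update(s, a, r, s_next):
    a_next = int(Q[s_next].argmax())
    # ordinary Q-learning update ...
    Q[s, a] += lr * (r + gamma * Q[s_next, a_next] - Q[s, a])
    # ... and the matching update for expected visit-counts
    target = np.eye(n_states)[s_next] + gamma * N[s_next, a_next]
    N[s, a] += lr * (target - N[s, a])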
Figure 1: Understanding an agent's expected behaviour. These images show predictions of an agent's behaviour in blackjack. All plots show an initial state where the dealer shows 5 and the player's cards sum to 7 (no high aces), and the colour of each cell shows the expected number of times a state will be visited if the agent hits. The two leftmost plots show the predicted behaviour, based on forward simulation [Waa2018Contrastive], and 'belief' of a well-trained agent, while the two rightmost show the predicted behaviour and 'belief' of an underperforming agent trained on a small and fixed replay buffer. The bottom two images show the final outcomes the well-trained and underperforming agents believe will occur if they hit. This results in a conservative agent that sticks early owing to an increased belief that it will go bust from a hit. |
5e53c02e-ef26-4674-a68e-97607b8af764 | trentmkelly/LessWrong-43k | LessWrong | Education control?
I'm reading Nurture Shock by Po Bronson & Ashley Merryman. Several things in the book, esp. the chapter on "Tools of the Mind", an intriguing education program, suggest that our education of young children not only isn't very good even when evaluated using tests that the curriculum was designed for, it's worse than just letting kids play. (My analogy and interpretation—don't blame this on the Tools people—is that conventional education may be like a Soviet five-year plan, trying to force children to acquire skills & knowledge that they would have been motivated to learn on their own if there weren't a school, and that early education shouldn't focus entirely on teaching specific facts, but also on teaching how to think, form abstractions, and control impulses.)
Say they're going to play fireman. The Tools teacher teaches the kids about what firemen do and what happens in a fire, and gives the kids different roles to play, then lets them play. They teach facts not because the facts are important, but to make the play session longer and more complicated. Tools does well in increasing test scores, but even better at reducing disruptive behavior. [1]
Tools has a variety of computer games that are designed to get kids to exercise particular cognitive skills, like focusing on something while being aware of background events. But the games often sound like more-boring ways of teaching kids the same things that video-games teach them.
Tools did no better than the existing curriculum on certain metrics in a recent larger study. But it didn't perform worse, either.
The first study you do with any biological intervention is to compare the intervention to a control group that has no intervention. But in education, AFAIK no one has ever done this. Everyone uses the existing curriculum as the control.
Whatever country you're in, what metrics do you use, and what evidence do you have that your schools are better than nothing at all?
There may be some things that you need to |
b712d682-d8a2-48b4-9f17-db906ba54285 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | "UDT2" and "against UD+ASSA"
I'm reposting some old posts that I originally sent to the "decision theory workshop" mailing list and the "[everything-list](http://www.weidai.com/everything.html)", because I occasionally want to reference these posts but the former mailing list is private and the latter one is public but I can't figure out how to create direct links to posts that are viewable without becoming a member.
UDT2 is a decision theory idea that I came up with to try to solve some problems in [UDT1.1](https://wiki.lesswrong.com/wiki/Updateless_decision_theory); however, [I'm not very happy with it currently](https://www.alignmentforum.org/posts/5bd75cc58225bf067037528e/updatelessness-and-son-of-x#5bd75cc58225bf0670375293). [UD+ASSA or UDASSA](http://fennetic.net/irc/finney.org/~hal/udassa/) is an anthropic reasoning idea that I came up with and then moved away from prior to UDT. See also [this post](https://www.lesswrong.com/posts/QmWNbCRMgRBcMK6RK/the-absolute-self-selection-assumption) for further discussion of UDASSA.
UDT2 (originally "toward a solution of the 'unintentional simulation' problem", 1/25/2011)
==========================================================================================
(I think this approach potentially solves several problems besides "unintentional simulation" but I'll start there since it provides the clearest motivation.)
I first described this problem (without naming it) at <http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/120y>.
Here's a condensed version:
Two UDT1 (or UDT1.1) agents play one-shot PD. It's common knowledge that agent A must make a decision in 10^100 ticks (computation steps), whereas agent B has 3^^^3 ticks. While B is trying to derive the logical consequences of returning 'C' or 'D' on the world program P, it is likely to come up with a proof by simulation of A's output, after which it will decide to play D.
I think what A should have done is (if it were running a smarter decision theory), instead of deciding directly on C or D, modify itself into a program K = "simulate the original agents A and B and output 'C' if and only if both of the simulated agents self-modify into K within some time limit".
And B (if it were also running a smarter decision theory) would also self-modify into K, whether or not it happens to simulate A's decision to self-modify into K prior to its own self-modification, and do this before the time limit built into K expires.
So that's my starting intuition, and I want to try to answer: what is this smarter decision theory? It seems that at least two changes need to be made to UDT1:
1. An agent must take the space of possible decisions to be the set of possible programs it can self-modify into, instead of the set of outputs or input/output maps. (This change is needed anyway if we want the agent to be able to self-improve in general.)
2. An agent must consider not just the consequences of eventually reaching some decision, but also the consequences of the amount of time it spends on that decision. (This change is needed anyway if we want the agent to be economical with its computational resources.)
So, while UDT1 optimizes over possible outputs to its input and UDT1.1 optimizes over possible input/output mappings it could implement, UDT2 simultaneously optimizes over possible programs to self-modify into and the amount of time (in computation steps) to spend before self-modification.
How to formulate UDT2 more precisely is not entirely clear yet. Assuming the existence of a math intuition module which runs continuously to refine its logical uncertainties, one idea is to periodically interrupt it, and during the interrupt, ask it about the logical consequences of statements of the form "S, upon input X, becomes T at time t" for all programs T and t being the time at the end of the current interrupt. At the end of the interrupt, return T(X) for the T that has the highest expected utility according to the math intuition module's "beliefs". (One of these Ts should be equivalent to "let the math intuition module run for another period and ask again later".)
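Rendered as schematic code, one interrupt of this loop might look as follows (purely illustrative: the math intuition module is left as an abstract stub, since specifying it is most of the open problem):

class MathIntuition:
    # Abstract stub for the math intuition module (hypothetical).
    def run(self, ticks): ...
    def elapsed(self): return 0
    def expected_utility(self, statement): return 0.0

def udt2_interrupt(S, X, mi, candidates, period):
    mi.run(period)   # refine beliefs about logical uncertainties
    t = mi.elapsed()
    # Score each program T by the expected utility of the logical fact
    # "S, upon input X, becomes T at time t". One candidate T should be
    # "keep running the math intuition module and ask again later",
    # which is how deliberation gets to continue.
    return max(candidates,
               key=lambda T: mi.expected_utility(
                   "%s, upon input %s, becomes %s at time %s" % (S, X, T, t)))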
Suppose agents A and B above are running UDT2 instead of UDT1. It seems plausible that A would decide to self-modify into K, in which case B would not suffer from the "unintentional simulation" problem, since if it does prove that A self-modifies into K, it can then easily prove that if B does not self-modify into K within K's time limit, A will play D, and therefore "B becomes K at time t" is the best choice for some t.
It also seems that UDT2 is able to solve the problem that motivated UDT1.1 without having "ignore the input until the end" hard-coded into it, which perhaps makes it a better departure point than UDT1.1 for thinking about bargaining problems. Recall that problem was:
Suppose Omega appears and tells you that you have just been copied, and each copy has been assigned a different number, either 1 or 2. Your number happens to be 1. You can choose between option A or option B. If the two copies choose different options without talking to each other, then each gets $10, otherwise they get $0.
The idea here is that both agents, running UDT2, would self-modify into T = "return A if input is 1, otherwise return B" if their math intuition modules say that "S, upon input 1, becomes T" is positively correlated with "S, upon input 2, becomes T", which seems reasonable to assume.
I think UDT2 also correctly solves Gary's Agent-Simulates-Predictor problem and my "two more challenging Newcomb variants".
(I'll skip the details unless someone asks.)
To me, this seems to be the most promising approach to try to fix some of UDT1's problems. I'm curious if others agree/disagree, or if anyone is working on other ideas.
two more challenging Newcomb variants (4/12/2010)
=================================================
On Apr 11, 2:45 pm, Vladimir Nesov wrote:
>
> There, I need the environment to be presented as function of the
> agent's strategy. Since predictor is part of agent's environment, it
> has to be seen as function of the agent's strategy as well, not as
> function of the agent's source code.
It doesn't seem possible, in general, to represent the environment as a function of the agent's strategy. I applied Gary's trick of converting multi-agent problems into Newcomb variants to come up with two more single-agent problems that UDT1 (and perhaps Nesov's formulation of UDT as well) does badly on.
In the first Newcomb variant, Omega says he used a predictor that did an exact simulation of you for 10^100 ticks and outputs "one-box" if and only if the simulation outputs "one-box" within 10^100 ticks.
While actually making the decision, you are given 10^200 free ticks.
In the second example (which is sort of the opposite of the above), Omega shows you a million boxes, and you get to choose one. He says he used 10^100 ticks and whatever computational shortcuts he could find to predict your decision, and put $1 million in every box except the one he predicted you would choose. You get 10^100 + 10^50 ticks to make your decision, but you don't get a copy of Omega's predictor's source code.
In these two examples, the actual decision is not more important than how predictable or unpredictable the computation that leads to the decision is. More generally, it seems that many properties of the decision computation might affect the environment (in a way that needs to be taken into account) besides its final output.
At this point, I'm not quite sure if UDT1 fails on these two problems for the same reason it fails on Gary's problem. In both my first problem and Gary's problem, UDT1 seems to spend too long "thinking"
before making a decision, but that might just be a superficial similarity.
against UD+ASSA, part 1 (9/26/2007)
===================================
I promised to summarize why I moved away from the philosophical position
that Hal Finney calls UD+ASSA. Here's part 1, where I argue against ASSA.
Part 2 will cover UD.
Consider the following thought experiment. Suppose your brain has been
destructively scanned and uploaded into a computer by a mad scientist. Thus
you find yourself imprisoned in a computer simulation. The mad scientist
tells you that you have no hope of escaping, but he will financially support
your survivors (spouse and children) if you win a certain game, which works
as follows. He will throw a fair 10-sided die with sides labeled 0 to 9. You
are to guess whether the die landed with the 0 side up or not. But here's a
twist: if it does land with "0" up, he'll immediately make 90 duplicate
copies of you before you get a chance to answer, and the copies will all run
in parallel. All of the simulations are identical and deterministic, so all
91 copies (as well as the 9 copies in the other universes) must give the
same answer.
ASSA implies that just before you answer, you should think that you have
0.91 probability of being in the universe with "0" up. Does that mean you
should guess "yes"? Well, I wouldn't. If I was in that situation, I'd think
"If I answer 'no' my survivors are financially supported in 9 times as many
universes as if I answer 'yes', so I should answer 'no'." How many copies of
me exist in each universe doesn't matter, since it doesn't affect the
outcome that I'm interested in.
Notice that in this thought experiment my reasoning mentions nothing about
probabilities. I'm not interested in "my" measure, but in the measures
of the outcomes that I care about. I think ASSA holds intuitive appeal to
us, because historically, copying of minds isn't possible, so the measure of
one's observer-moment and the measures of the outcomes that are causally
related to one's decisions are strictly proportional. In that situation, it
makes sense to continue to think in terms of subjective probabilities
defined as ratios of measures of observer-moments. But in the more general
case, ASSA doesn't hold up.
against UD+ASSA, part 2 (9/26/2007)
===================================
In part one I argued against ASSA. Here I first summarize my
argument against UD, then against the general possibility of any single
objective measure.
1. There is an infinite number of universal Turing machines, so there
is an infinite number of UD. If we want to use one UD as an objective
measure, there has to be a universal Turing machine that is somehow uniquely
suitable for this purpose. Why that UTM and not some other? We don't even
know what that justification might look like.
2. Computation is just a small subset of math. I knew this was the case,
having learned about oracle machines in my theory of computation class. But
I didn't realize just how small a subset until I read *Theory of Recursive
Functions and Effective Computability*, by Hartley Rogers. Given that there
is so much mathematical structure outside of computation, why should they
not exist? How can we be *sure* that they don't exist? If we are not *sure*,
then we have to take the possibility of their existence into account when
making decisions, in which case we still need a measure in which they have
non-zero measures.
3. At this point I started looking for another measure that can replace UD.
I came up with what I called "set theoretic universal measure", where the
measure of a set is inversely related to the length of its description in a
formal set theory. Set theory covers a lot more math, but otherwise we still
have the same problems. Which formal set theory do we use? And how can we be
sure that all structures that can possibly exist possible can be formalized
as sets? (An example of something that can't would be a device that can
decide the truth value of any set theoretic statement.)
4. Besides the lack of good candidates, the demise of ASSA means we don't
need an objective measure anymore. There is no longer an issue of sampling,
so we don't need an objective measure to sample from. The thought experiment
in part 1 of "against UD+ASSA" points out that in general, it's not the
measure of one's observer-moment that matters, but the measures of the
outcomes that are causally related to one's decisions. Those measures
can be interpreted as indications of how much one cares about the outcomes,
and therefore can be subjective.
So where does this chain of thought lead us? I think UD+ASSA, while flawed,
can serve as a kind of stepping stone towards a more general rationality.
Somehow UD+ASSA is more intuitively appealing, whereas truly generalized
rationality looks very alien to us. I'm not sure any of us can really
practice the latter, even if we can accept it philosophically. But perhaps
our descendants can. One danger I see with UD+ASSA is that we'll program it
into an AI, and the AI will be forever stuck with the idea that non-computable
phenomena can't exist, no matter what evidence it might observe. |
bded6593-bc93-48a6-a508-0b4dad8439df | trentmkelly/LessWrong-43k | LessWrong | Training a Sparse Autoencoder in < 30 minutes on 16GB of VRAM using an S3 cache
Summary
300 million GPT2-small activations are cached on S3; we pull these very quickly onto a g4dn.8xlarge EC2 instance in the same region and use them to train a 24,576-dimensional Switch Sparse Autoencoder in 26 minutes (excluding generation time). We achieve similar L0/reconstruction loss to Gao et al. and a low dead-feature proportion. Typically this process takes in the realm of 3-4 hours and far more GPU power, for instance here and here. Code to replicate is made public and so are the Weights & Biases logs.
project architecture overview
Background
There are many excellent resources explaining Sparse Autoencoders and how to train them. The ones linked below do a far better job of laying the groundwork than I ever could.
1. The seminal publication on training small SAEs by Anthropic
2. The publication on scaling SAEs by Anthropic
3. Joseph Bloom's blog post on how to train SAEs
For the rest of this article we will assume the reader knows roughly how to train a SAE.
Objective
One interesting aspect of SAE training is that the base LM is often much larger than the autoencoder. In theory you could save a lot of time and compute if you already had all the activations cached somewhere. In fact, according to some rough profiling, the problem of training a SAE from cached LM activations quickly becomes IO-bound rather than compute-bound (see the throughput profiling script).
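To give a flavour of the approach, here is a condensed sketch of the pipeline's shape (the bucket name, key layout, batch size, and learning rate are placeholders of mine, and a real pipeline would also need to overlap S3 reads with training to hit the quoted throughput; see the linked repo for the actual code):

import io
import boto3
import torch

BUCKET, PREFIX = "my-activation-cache", "gpt2-small/layer10/"   # placeholders
d_model, d_sae, k = 768, 24576, 32

s3 = boto3.client("s3")

def shard_keys():
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            yield obj["Key"]

def load_shard(key):
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return torch.load(io.BytesIO(body)).float()      # (n_tokens, d_model)

W_enc = torch.nn.Linear(d_model, d_sae)
W_dec = torch.nn.Linear(d_sae, d_model)
opt = torch.optim.Adam([*W_enc.parameters(), *W_dec.parameters()], lr=1e-4)

for key in shard_keys():
    for batch in load_shard(key).split(4096):
        pre = W_enc(batch)
        topk = torch.topk(pre, k, dim=-1)            # TopK activation (Gao et al.)
        z = torch.zeros_like(pre).scatter(-1, topk.indices, topk.values)
        loss = ((W_dec(z) - batch) ** 2).mean()      # reconstruction MSE
        opt.zero_grad(); loss.backward(); opt.step()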
This article is an overview of how we implemented this idea.
Profiling the MB/s throughput of training a 24,576 TopK=32 Switch SAE on an A100. These results suggest that I/O is going to be the main limiting factor.
Generating Activations
We generated activations coming from the end (i.e. the "hidden states") of 10th layer of gpt2-small over 678,428,672 tokens from Skylion007/openwebtext and saved them to a public s3 bucket along with some metadata. This was pretty straightforward. Uploading activation tensors from a g4dn.2xlarge instance to an s3 bucket in the same region is qui |
c3083841-8e92-43af-ba8c-a011f4166936 | trentmkelly/LessWrong-43k | LessWrong | Social Class
Notes from the Salon
ETA: This is a write-up of some interesting points raised at an in-person rationalist meetup on October 6th, 2019. It is not intended to be a comprehensive overview of the topic. It is conventional for attendees to do all the suggested readings before salon starts, so some parts of the write-up might not make sense without that context.
Assigned readings: Siderea on Class, Thoughts on the "STEM" class
Economic Class vs. Social Class
Economic class and social class are not the same thing. The two are decoupled, but only partly. You can be wealthy but lower class (e.g. a skilled tradesman making 6 figures) or poor but upper class (e.g. an adjunct professor at a prestigious university). Everyone at the salon was in the same social class: the class that attends semi-ironic and entirely sober literary salons, likely G2 on Church's ladder. But there's a much broader distribution of economic classes represented in the room.
Performing upper-(middle)-classness grants you privilege. It costs some amount of money, but not as much as you would think, to perform professional-classness. The harder part by far is knowing what to buy in the first place. A clean-ish hoodie emblazoned with your university seal does the job. A blazer with a loud animal print pattern does not.
Most people want to make more money, but few people are interested in climbing the social ladder. When you find the class that you belong to and join it, you tend to go “ew” at the people in the classes below you, and “yikes”* at the people in the classes above you.
*Or the equivalent of “yikes”, if you’re in a class that doesn’t say “yikes”
Everyone is "Middle-Class"
The middle class is the only virtuous class - both "lower class" and "upper class" are insults. This results in basically everyone except billionaires and the homeless identifying as middle class. Premium mediocrity is what middle-classness could look like for millennials.
Class is a culture and a performance. For exa |
e3dba749-b337-416b-85e2-17394af91158 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Coherent extrapolated dreaming
*This work was supported by the Monastic Academy for the Preservation of Life on Earth. You can support my work* [*here*](http://alexflint.io/donate.html)*.*
*I will give a short presentation of this work followed by discussion on Wednesday Dec 28 at 12pm Pacific / 3pm Eastern.* [*RSVP here*](https://www.facebook.com/events/860285301979524)*.*
Outline
-------
* I have four questions about coherent extrapolated volition, which I present in the form of four short thought experiments:
1. What kind of a thing can be extrapolated in the direction of wisdom? (Robot vacuum thought experiment)
2. What kind of protocol connects with the wisdom of a person who has been extrapolated? (Dream research thought experiment)
3. What kind of model captures that within a person that we hope to amplify through extrapolation? (Twitter imitator thought experiment)
4. What kind of environment is sufficient to grow true wisdom? (Astrological signs thought experiment)
* The title of this post is based on the second thought experiment.
* I claim that we lack a theory about that-which-is-capable-of-becoming-wise, in a form that lets us say something about its relationship to models, extrapolation, and volitional dynamics. I argue that CEV does not actually provide this central theory.
Introduction
------------
Coherent extrapolated volition is Eliezer’s 2004 proposal for the goal we might give to a powerful AI. The basic idea is to have the AI work out what we would do or say if we were wiser versions of our present selves, and have the AI predicate its actions on that. To do this, the AI might work out what would happen if a person contemplated an issue for a long time, or was exposed to more conversations with excellent conversation partners, or spent a long time exploring the world, or just lived a long and varied life. It might be possible for us to describe the transformations that *lead* to wisdom even if we don’t know apriori what those transformations will lead to.
CEV does not spell out exactly what those transformations are — though it does make suggestions — nor how exactly the AI’s actions would be connected to the results of such transformations. The main philosophical point that CEV makes is that the thing to have an AI attend to, if you’re trying to do something good with AI, is wisdom, and that wisdom arises from a process of maturation. At present we might be confused about both the nature of the world and about our own terminal values. If an AI asks us "how should honesty be traded off against courage?" we might give a muddled answer. Yet we *do* have a take on honesty and courage. Wiser versions of ourselves might be less confused about this, and yet still be *us*.
An example: suppose you ask a person to select a governance structure for a new startup. If you ask the person to make a decision immediately, you might get a mediocre answer. If you give them a few minutes to contemplate then you might get a better answer. This "taking a few minutes to contemplate" is a kind of transformation of mind. Beginning from the state where the person was just asked the question, their mind changes in certain ways over the course of those few minutes and the response given after that transform is different to the response before it.
Perhaps there are situations where the "taking a few minutes to contemplate" transform *decreases* the quality of the eventual answer, as in the phenomenon of "analysis paralysis" — CEV does not claim that this particular transform is *the* wisdom-inducing transform. CEV *does* claim that there exists *some* transformation of mind that leads to wisdom. It need not be that these transformations are about minds contemplating things in isolation. Perhaps there are certain insights that you can only come to through conversations with friends. If so, perhaps the AI can work out what *would* become of a person if they spent a lot of time in conversation with an excellent group of friends. Here the *transformation* is "spend time in conversation with friends". The *hypothesis* is that this transformation of mind leads to wisdom.
CEV suggests that we leave it to the AI to work out how we would be changed by whatever wisdom-inducing transformations we decide upon. That means that there need not be a physical human experiencing a sequence of real conversations with friends; instead, the AI would work out how the human *would* be transformed if they did have such conversations, and what the transformed person would do or say. Working this out may or may not involve simulation — perhaps the AI will find other ways to reason about the results of these wisdom-inducing transformations.
**The CEV Hypothesis**. Very roughly, the CEV claim is (1) that there exist transformations that cause a person’s words or behavior to more clearly express wisdom, (2) that an AI can work out what would become of a person after undergoing such transformations, and (3) that there is a way to tie an AI’s actions to the words or actions of a thus-extrapolated person such that the AI’s behavior is truly good, for some definition of "truly good" that may only be known to our future selves.
In this essay I will give four thought experiments that probe these hypotheses. The title of this post is based on the second of these thought experiments.
Robot vacuum
------------
Consider a robot that builds a map of a house, locates itself within that map, and then makes and executes plans to vacuum the floor. The implementation of this robot vacuum is unusually clear: it has a clear goal function, a clear planning facility, a clear world model, and a clear method for updating the world model as each new sensor measurement arrives. This should be the ideal case for CEV. So what would it mean to extrapolate this robot vacuum in the direction of wisdom?
From our perspective outside the system, we might say that the robot vacuum’s CEV is a world tiled with clean floors. But CEV has a specific structural form – let’s work through it step by step. The first step is that we take a model of the thing that we are trying to compute the CEV of and extrapolate it in the direction of wisdom. What could this mean in the case of the robot vacuum?
Well, it means that we transform the robot vacuum in ways corresponding to the kind of growth that we believe would lead to wisdom if the robot vacuum went through them in the real world. What could this mean for a robot vacuum? It’s hard to say!
We apply the transformation of greatly expanding the robot vacuum’s model of the world, for example by giving it an accurate floorplan of the entire surface of the earth. If its built-in planning algorithm isn’t efficient enough to deal with such a huge floorplan then we might replace it with a more efficient algorithm that still searches with respect to the same goal. A robot vacuum extrapolated in this way — which is just one of many possible extrapolations we might decide upon — might be observed to vacuum the whole surface of the Earth.
So in the end, what we get is just that the CEV of the robot vacuum is that it wants to sweep whatever floors are in its floorplan. Fine, that’s no surprise. But is that really what we mean by "extrapolation in the direction of wisdom"?
If you take a person whose whole life has been focussed on conforming to the social conventions of the place they grew up, you might hope that "moving in the direction of wisdom" involves seeing a bigger picture, gaining an appreciation for the instrumental but not ultimate importance of social conventions, finding courage, locating themselves in history, understanding more about the significance of life, and so on. It should not be that "moving in the direction of wisdom" simply means equipping the person with more knowledge and tools with which to conform even more tightly to social conventions.
In the case of the robot vacuum, it seems that no matter how long it contemplates things, or how long it spends wandering the world, it never really gains a broader perspective on the world, nor realizes anything new about its own values. You might say that these things are themselves parochial human notions of wisdom that don’t apply to robot vacuums, and that’s fine, but then we don’t really need to "extrapolate" the robot vacuum, we can just read out its goal structure from its initial implementation.
It seems to me that humans can become *wiser* — not just more effective at accomplishing the tasks we have focussed on in our lives so far, but actually more aware of the world we live in, our place in it, and the consequences of our actions, in a way that reshapes the goals we’ve lived by rather than merely serving them. I think CEV is helpful in focusing us on *this direction* as the key to understanding goodness, but I think it is very unclear *what kind of thing* can move in this direction. Can a robot vacuum move in a direction that expands and reshapes its goals according to a clearer appreciation of its place in the world? If not, what is it about humans that *can* do so?
Dream research
--------------
Consider an AI that extrapolates a person in the direction of wisdom, then asks what that person would dream about, and then chooses its actions as a function of these predicted dreams. Is there *any* way this could go well? It seems unlikely: we have no reason to expect a wise person’s dreams to be a good basis for selecting actions that lead to a flourishing world. Of course it depends on exactly what function connects the dreams to the AI’s actions, and perhaps there is *some* function that cleverly extracts just the right thing from the dreams such that we end up with a perfectly aligned AI, but a priori this approach seems unpromising because *we’re looking in the wrong place* for a thing to guide the AI’s actions by.
Consider now the difference between dreams and desires. In CEV, we ask what an extrapolated person would *want*, not what they would dream about. There are many important differences between wanting and dreaming, but exactly which of these differences make it reasonable to predicate a powerful AI’s actions on an extrapolated person’s desires (and not their dreams)?
Dreams and desires are different kinds of psychological phenomena, but they *are* both psychological phenomena. What is a good principle for deciding which psychological phenomena are a reasonable basis for directing the actions of powerful AIs? I am not claiming that this question is a knock-down objection to CEV, but I *am* claiming that the writings on CEV so far fail to answer this question, and in this way leave open perhaps the most central issue concerning where an AI might be directed to look for wisdom.
If there *is* a way to extrapolate a person such that their desires form a good basis for a superintelligent AI’s goal system, then why wouldn’t there also be a way to extrapolate a person such that their dreams form a good basis for the same? But could a person really be extrapolated in such a way that their dreams formed a good basis for a superintelligent AI’s goal system? Maybe you will say that we can extrapolate a person in the direction of wisdom, then modify them such that they always dream about what they would have previously expressed as desires, but now you’re back to presupposing that desires are the right thing to look at.
I believe that if we have an AI tap into some particular psychological phenomenon of an extrapolated person, then we should have a theory about *why that choice makes sense*. In short: what features of an extrapolated person are you looking at, and why is that reasonable? We should not proceed on the basis of an intuition that desires are the right thing to look at without being able to say why that intuition is reasonable.
We can’t just leave that to some prior AGI to work out because that prior AGI has to be programmed with a protocol for making sense of what we’re asking it to do, and the thing we’re talking about here *is* the protocol for making sense of what we’re asking it to do (further discussion in section "the wisdom backflip" below).
In the original CEV paper Eliezer describes the thing I’m pointing to as follows:
> To construe your volition, I need to define a dynamic for extrapolating your volition, given knowledge about you. In the case of an FAI, this knowledge might include a complete readout of your brain-state, or an approximate model of your mind-state. The FAI takes the knowledge of Fred’s brainstate, and other knowledge possessed by the FAI (such as which box contains the diamond), does . . . something complicated . . . and out pops a construal of Fred’s volition.
>
> [...]
>
> I shall refer to the "something complicated" as the dynamic.
What kind of protocol connects with the thing-that-is-a-guide-to-goodness within a person who has been extrapolated in the direction of wisdom? Presumably looking at dreams does *not* connect with this. Presumably looking at the way the person taps their feet while listening to music does *not* connect with this. What *is* the thing that we are trying to connect with, and what kind of protocol connects with that thing?
Twitter imitator
----------------
Consider a bigram model trained to imitate my tweets up to today. A bigram model is a big table that lists, for each word in the English language, the probability that it will be followed by each other word in the English language. A bigram model is trained by counting the number of times that each word is followed by each other word in some corpus — in this case the corpus is my tweets up to today.
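The construction really is that mechanical, which is worth seeing. A toy sketch, with an invented three-tweet corpus:

```python
from collections import defaultdict

def train_bigram(corpus):
    # Count how often each word is followed by each other word...
    counts = defaultdict(lambda: defaultdict(int))
    for tweet in corpus:
        words = tweet.split()
        for first, second in zip(words, words[1:]):
            counts[first][second] += 1
    # ...then normalize the counts into conditional probabilities.
    return {
        first: {w: c / sum(followers.values()) for w, c in followers.items()}
        for first, followers in counts.items()
    }

tweets = ["the cat sat", "the cat ran", "a dog ran"]
model = train_bigram(tweets)
print(model["cat"])   # {'sat': 0.5, 'ran': 0.5}
```

The whole model is that one table of probabilities; wherever in a person the capacity for wisdom lives, it is hard to see which cell of the table would contain it.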
Does this bigram model contain enough of "me" to extrapolate in the direction of wisdom and have that go well for the future of life on Earth? Presumably not. I haven’t written very many tweets, and I don’t think a bigram model would learn anything particularly deep from them. Even if I had written a vast number of tweets, a bigram model may not be rich enough to capture anything much of substance. No matter what kind of extrapolation technique you use — even an advanced one from the future — I don’t think you would get very far if the thing you’re applying it to is a bigram model trained on my tweets. Extrapolating such a model would be a bit like extrapolating a robot vacuum — you’d miss most of the point of *extrapolation in the direction of wisdom* because the bigram model does not contain the thing that is capable of moving in the direction of wisdom.
But what *is* the threshold at which we say that a model has captured that-which-is-worth-extrapolating? Suppose we trained a 2022 language model on all of Elon Musk’s tweets. This is a richer dataset and a richer type of model, but is it enough to capture the thing that we hope is capable of maturing into wisdom through extrapolation?
We don’t know exactly what it would mean to extrapolate a language model in the direction of wisdom. We are imagining that people of the future have come up with some-or-other approach. But presumably this approach requires a certain kind of source material to be present in the source model that is to be extrapolated. If you had an extrapolation technique that brings out true wisdom from arbitrarily simple models of people, then you could just provide it with a one-pixel image of a human face, or a single human answer to a single yes-or-no question; and if it worked in that case, what you would really have is a solution to the whole alignment problem, with no need for CEV.
So extrapolation techniques from the future will presumably require source models that contain some minimum level of detail about the person that is to be extrapolated. How do we know whether any particular source model contains this minimum level of detail?
Now you might say that a large audio and video corpus of a person going about their life is actually an unimaginably rich data source, and certainly contains evidence about anything that can be asked about this person. That may be true, but in CEV we have to choose not just a dataset but also a model learned from that dataset, because we are going to extrapolate the person by having the model go through the kind of transformations that we believe would have engendered wisdom in the original person if they went through those transformations in real life. It is therefore not so simple to say whether a particular model has captured that which would develop into wisdom if extrapolated, even if the dataset itself certainly contains a great deal of information about the person.
You may say that we should apply extrapolation directly to the dataset, without first building a model. That’s fine, but now the question raised in this section has to be asked of the extrapolation technique. That question is: how do we know whether any particular model-and-extrapolate process actually picked up on the thing within the source data that is capable of growing into wisdom? A model-and-extrapolate process that first builds a bigram model presumably throws away all that is capable of growing into wisdom, and will therefore fail. We do not know whether a 2022 language model really *does* capture that which is capable of growing into wisdom. I believe it would be dangerous to proceed in building CEV-like systems without knowing this.
One of the central ideas in CEV is that we should not "peek" at the result before deciding whether to deploy, because the wisdom of our future selves may be very alien to us, and if we gate deployment on the intuition of our present selves then we may wind up deploying something that more-or-less negates all the wisdom of our future selves in favor of the intuition of our present selves. This makes issues like the one I am raising here acute, because if we build a model of a person in a way that misses the aspect of them that is capable of maturing into wisdom, then we may apply extrapolation and get a nonsensical result, and yet not be able to distinguish its nonsensicality from true wisdom. The only way around this, in my view, is to have a theory that lets us check our design choices from a theoretical, not empirical standpoint.
What we need is a theory about that-which-is-capable-of-becoming-wise, in a form that lets us say something about its relationship to models, extrapolation, and volitional dynamics.
Astrological signs
------------------
Consider an alternative history where machine learning was developed at a time when the positions of the planets in the night sky were understood to determine the fates of people on Earth. In this history, AI alignment researchers implement CEV just as researchers in our universe might: by (1) modeling a person end-to-end using machine learning, (2) extrapolating that person by having them live for a long time in a simulated environment, and then (3) asking them questions about what actions should be taken in the present. In order to build the simulated environment in step 2, researchers in this alternative history apply machine learning to measurements of the natural world in order to build a model of the world, just as researchers in our universe might. The researchers find that they need to tune the structure of their models to get them to reflect the common-sense realities of astrological phenomena that the researchers know to be real. As a result, the model of the world used for extrapolation (in step 2) is one that really is governed by astrological phenomena. As a result of that, the extrapolated person believes that astrological phenomena govern the lives of people on Earth. Is this extrapolated person truly wise?
What, actually, does such a person learn during aeons of living in a simulation? If the simulation is in any way fitted to the common-sense beliefs of its designers, then surely they would learn that reality is a lot like how the designers of the simulation imagined it was.
The key premise in my story here is that the simulation designers tweak their models to reflect the common-sense realities that they know to be true (i.e. their beliefs). In our own universe, we believe that the world has a state that transitions over time according to some kind of lawful dynamic. Is this actually how things are? When we build models of the world, we bake this belief (that the universe evolves according to a state and a lawful dynamic) into our models very deeply, in such a way that we don’t see it as a working assumption, but more as a rock-solid foundation upon which our whole belief structure is predicated. The whole structure of our modeling methodology assumes this view of the world. If this foundation – or any other foundational assumption – is faulty, and we build models of the world that bake it in, and use those models to extrapolate people, then we may end up with extrapolated people who simply believe it to be true because it is true of the simulated world in which they live, just as in the case of astrology.
The question here is what aspects of the world need to be incorporated into a simulation in order that a person extrapolated by living in that simulation encounters the kind of thing that develops real wisdom rather than just entrenching them in the worldview of the simulation designers.
Summary so far
--------------
* What kind of a thing can be extrapolated in the direction of wisdom at all? (Robot vacuum)
* What kind of protocol connects with the wisdom of a person who has been extrapolated? (Dream research)
* What kind of model captures that within a person that we hope to amplify through extrapolation? (Twitter imitator)
* What kind of environment is sufficient to grow true wisdom? (Astrological signs)
The wisdom backflip
-------------------
Perhaps some will say that we can use AI to answer these questions. The arbital [page](https://arbital.com/p/cev/) on CEV says:
> Even the terms in CEV, like "know more" or "extrapolate a human", seem complicated and value-laden. You might have to build a high-level Do What I Know I Mean agent, and then tell it to do CEV. Do What I Know I Mean is complicated enough that you'd need to build an AI that can learn DWIKIM, so that DWIKIM can be taught rather than formally specified. So we're looking at something like CEV, running on top of DWIKIM, running on top of a goal-learning system, at least until the first time the CEV agent rewrites itself.
DWIKIM is an AI that examines the cognition behind the instructions you give it and uses that to do what you mean, even if there is ambiguity in your instructions. This is different from CEV because with CEV, the AI asks what you *would* have instructed (and meant by that instruction) if you were wiser. With DWIKIM the AI merely asks what you meant by what you did instruct.
The suggestion in the quote above is that we might first build an AI that follows instructions by examining our cognition, then instruct that AI to implement CEV. The idea, I suppose, is that the DWIKIM AI might work out the operational details of implementing CEV, including resolving the four questions raised in this essay. However, there is a very important problem in this approach. It assumes that there actually *is* an answer to be found within our cognition about how to implement CEV’s operational details. If we don’t know what kind of thing can be extrapolated in the direction of wisdom, what kind of protocol connects with wisdom, what kind of model captures that within a person that we hope to amplify through extrapolation, nor what kind of environment is sufficient to grow wisdom, then what "meaning behind the instruction" will a DWIKIM AI find within our cognition? Surely it will just find that there is no clear cognition behind the instruction at all.
Now there might be other ways to use AI to clarify the operational details of implementing CEV; but the specific approach offered on the arbital page – of using an AI that examines our cognition to work out what we really meant in our instructions – seems unlikely to work if we don’t have reasonable answers to the four questions in this essay.
We are in the following situation. We don’t currently have the wisdom to design our own vast future, and we know this, but we also know that we *would* have that wisdom if we went through certain transformations, so we are trying to design an AI that is guided by a wisdom that we ourselves don’t yet possess, by describing a process by which the AI might access our future wisdom. We might call this the **wisdom backflip**: an attempt to build machines that are wiser than we are.
But it seems that we keep running into a kind of **conservation of wisdom** principle, where each attempt to design an AI guided by our own future wisdom requires us to make critical design choices right here in the present, with neither the wisdom of our future selves nor the wisdom of an AI that is directed by this future wisdom. Again and again it seems that if we don’t get these design choices right, the AI won’t be correctly guided by the future wisdom that we hope to point it towards, and the design choices that we must make in the present keep turning out to be *deep*.
Each time we come up against this barrier, it is tempting to add a new layer of indirection in our designs for AI systems. This layer of indirection is always about finding a way to solve our present problems with our future wisdom, using AI to backport the wisdom of the future to the problems of the present. In fact the wisdom of the future *is* exactly what is needed to solve the problems of the present. But there is *some* kind of insight that is needed in the present, that we don’t seem to be able to backflip over. It shows up in different ways in different contexts. In CEV it shows up as this question about *how* to correctly extrapolate wisdom.
I suspect there is a kind of "hard problem of AI alignment" at the heart of this issue.
Conclusion
----------
CEV is a proposal about what kind of goal we might give to a very powerful AI. It suggests, very roughly, that the most important thing is for the AI to be directed by the kind of wisdom that grows within people as they go through certain transformations. All specific questions about how exactly the world should be organized are downstream of that. With this assertion, I completely agree, and I am grateful to Eliezer for spelling it out so thoroughly at such a formative stage of this community’s development.
However, CEV also gives a kind of blueprint for *how* we are going to build AI systems that access wisdom. This blueprint shows up in the name "coherent extrapolated volition": it is that we are going to build AI systems that build models of people and models of the world, and use those models together to work out how the people would be changed by certain transformations, and then interact with thus-transformed people in order to decide what actions the AI should take in the present. With this blueprint, I have serious doubts. Specifically, choices about what kind of models to build, what kind of transformations to use, and what kind of interactions to have with the transformed models seem not to be mere details, but actually to contain the real core of the problem, which in my view orbits four key questions:
1. What kind of a thing can be extrapolated in the direction of wisdom?
2. What kind of protocol connects with the wisdom of a person who has been extrapolated?
3. What kind of model captures that within a person that we hope to amplify through extrapolation?
4. What kind of environment is sufficient to grow true wisdom?
By design, writings on CEV do not try to spell out the full operational details of an implementation. This is not a problem *per se*, since any proposal on any topic leaves *some* level of detail to be decided by the builders. The real question for any proposal is: to what extent did the core problem get resolved in the details that were given, versus still showing up in the outstanding subproblems?
What we need, in my view, is a theory about that-which-is-capable-of-becoming-wise, in a form that lets us say something about its relationship to models, extrapolation, and volitional dynamics. I do not believe that CEV provides such a theory, but rather works around the absence of such a theory by leaving open the parts of the problem that demand such a theory.
*I will give a short presentation of this work followed by discussion on Wednesday Dec 28 at 12pm Pacific / 3pm Eastern.* [*RSVP here*](https://www.facebook.com/events/860285301979524)*.* |
b7ce3515-60c3-4d40-82a4-6ef49f5bd540 | trentmkelly/LessWrong-43k | LessWrong | Biosecurity Culture, Computer Security Culture
While I've only worked in biosecurity for about a year and my computer security background consists of things I picked up while working on other aspects of software engineering, the cultures seem incredibly different. Some examples of good computer security culture that would be bad biosecurity culture:
* Openness and full disclosure. Write blog posts with deep detail on how vulnerabilities were found, with the goal of teaching others how to find similar ones in the future. Keep details quiet for a few months if need be to give vendors time to fix but after, say, 90 days go public.
* Breaking things to fix them. Given a new system, of course you should try to compromise it. If you succeed manually, make a demo that cracks it in milliseconds. Make (and publish!) fuzzers and other automated vulnerability search tools.
* Enthusiastic curiosity and exploration. Noticing hints of vulnerabilities and digging into them to figure out how deep they go is great. If someone says "you don't need to know that" ignore them and try to figure it out for yourself.
This is not how computer security has always been, or how it is everywhere, and people in the field are often fiercely protective of these ideals against vendors that try to hide flaws or silence researchers. And overall my impression is that this culture has been tremendously positive in computer security.
Which means that if you come into the effective altruism corner of biosecurity with a computer security background and see all of these discussions of "information hazards", people discouraging trying to find vulnerabilities, and people staying quiet about dangerous things they've discovered it's going to feel very strange, and potentially rotten.
So here's a framing that might help see things from this biosecurity perspective. Imagine that the Morris worm never happened, nor Blaster, nor Samy. A few people independently discovered SQL injection but kept it to themselves. Computer security never developed as |
81235a3c-27fb-4e90-b0fd-b1e46ca896c2 | trentmkelly/LessWrong-43k | LessWrong | Akrasia, hyperbolic discounting, and picoeconomics
Akrasia is the tendency to act against your own long-term interests, and is a problem doubtless only too familiar to us all. In his book "Breakdown of Will", psychologist George C Ainslie sets out a theory of how akrasia arises and why we do the things we do to fight it. His extraordinary proposal takes insights given us by economics into how conflict is resolved and extends them to conflicts of different agencies within a single person, an approach he terms "picoeconomics". The foundation is a curious discovery from experiments on animals and people: the phenomenon of hyperbolic discounting.
We all instinctively assign a lower weight to a reward further in the future than one close at hand; this is "discounting the future". We don't just account for a slightly lower probability of receiving a more distant reward; we value it inherently less for being further away. It's been an active debate on overcomingbias.com whether such discounting can be rational at all. However, even if we allow that discounting can be rational, the way that we and other animals do it has a structure which is inherently irrational: the weighting we give to a future event is, roughly, inversely proportional to how far away it is. This is hyperbolic discounting, and it is an empirically very well confirmed result.
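The inconsistency is easy to exhibit numerically. Here is a toy sketch (illustrative numbers, not Ainslie's data) using the standard hyperbolic form, value = amount / (1 + k * delay):

```python
def hyperbolic_value(amount, delay_hours, k=1.0):
    # Perceived value falls off roughly in inverse proportion to delay
    # (the "+ 1" just keeps an immediate reward at full value).
    return amount / (1 + k * delay_hours)

# A small reward (fun tonight) arrives 1 hour after decision time;
# a larger reward (an alert morning) arrives 10 hours after it.
for hours_ahead in [720, 0]:   # deciding a month in advance vs. that evening
    small = hyperbolic_value(10, hours_ahead + 1)
    large = hyperbolic_value(25, hours_ahead + 10)
    choice = "stay up" if small > large else "sleep"
    print(f"{hours_ahead}h ahead: small={small:.3f} large={large:.3f} -> {choice}")
# 720h ahead: the larger, later reward wins ("sleep").
#   0h ahead: the preference reverses ("stay up").
```

An exponential discounter, valuing rewards as amount * e^(-k * delay), would choose the same way at both distances; it is the hyperbola specifically that produces the reversal described next.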
I say "inherently irrational" because it is inconsistent over time: the relative cost of a day's wait is considered differently whether that day's wait is near or far. Looking at a day a month from now, I'd sooner feel awake and alive in the morning than stay up all night reading comments on lesswrong.com. But when that evening comes, it's likely my preferences will reverse; the distance to the morning will be relatively greater, and so my happiness then will be discounted more strongly compared to my present enjoyment, and another groggy morning will await me. To my horror, my future self has different interests to my present self, as surely as if I knew the day a murder pill wo |
436e10c4-ecda-4461-aff8-3c75f1f39baf | trentmkelly/LessWrong-43k | LessWrong | When(if ever) are superstimuli good/useful/advantageous?
Superstimuli in this case are things that are optimized towards stimulating some part of the human brain.
Things which are NOT superstimuli include: drugs which have not been synthesized or distilled for the purpose of enhancing their effects on a person, porn of real humans, etc.
This question is less about specific examples(although they may be useful for interpretation) and more about the general case of superstimuli as defined. |
dc31f76f-7f40-48f9-849f-668844bb609e | trentmkelly/LessWrong-43k | LessWrong | [AN #94]: AI alignment as translation between humans and machines
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).
HIGHLIGHTS
Alignment as Translation (John S Wentworth) (summarized by Rohin): At a very high level, we can model powerful AI systems as moving closer and closer to omniscience. As we move in that direction, what becomes the new constraint on technology? This post argues that the constraint is good interfaces, that is, something that allows us to specify what the AI should do. As with most interfaces, the primary challenge is dealing with the discrepancy between the user's abstractions (how humans think about the world) and the AI system's abstractions, which could be very alien to us (e.g. perhaps the AI system uses detailed low-level simulations). The author believes that this is the central problem of AI alignment: how to translate between these abstractions in a way that accurately preserves meaning.
The post goes through a few ways in which we could attempt to do this translation, but all of them seem to only reduce the amount of translation that is necessary: none of them solve the chicken-and-egg problem of how you do the very first translation between the abstractions.
Rohin's opinion: I like this view on alignment, but I don't know if I would call it the central problem of alignment. It sure seems important that the AI is optimizing something: this is what prevents solutions like "make sure the AI has an undo button / off switch", which would be my preferred line of attack if the main source of AI risk were bad translations between abstractions. There's a longer discussion on this point here.
TECHNICAL AI ALIGNMENT
AGENT FOUNDATIONS
Two Alternatives to Logical Counterfactuals (Jessica Taylor)
LEARNING HUMAN INTENT
State-only Imitation with Transition |
77505351-9f2e-45dc-b670-198129abd625 | trentmkelly/LessWrong-43k | LessWrong | Causal representation learning as a technique to prevent goal misgeneralization
Summary: This is a submission for the goal misgeneralization contest organized by AI Alignment Awards, as well as the third iteration of some slowly improving AI Safety research agenda that I aim to pursue at some point in the future.
Thanks to Jaime Sevilla for the comments that helped sharpen the specific research proposals. Needless to say, I encourage people to take part in these contests and am happy to receive comments or criticisms.
Main proposal
Causal representation learning (CRL) is a set of techniques that can be used to find “high-level causal variables from low-level observations” (Schölkopf, 2021). By causal representation, we mean a causal model (usually Structural Causal Model, SCM) that disentangles the most important features and independent mechanisms of the environment. Reconstructing causal models has been addressed in the literature via encoders and decoders acting on a latent space in which the causal structure is described (Schölkopf, 2022). My proposal is to use these techniques to build robust representations of human preferences. This is challenging but has certain advantages (Shah, 2022):
* More diverse training data: The agent may request human demonstrations in settings specially designed to disentangle strongly correlated features, or test and refine the representations of human preferences.
* Maintaining uncertainty: SCMs can be seen as generalizations of Bayesian models where the information is not only observational but also interventional and counterfactual (Bottou, 2013). SCMs can thus represent uncertainty via exogenous variables and unobserved confounders (see the sketch after this list).
* Understanding and improving inductive biases and generalization: Learned causal structures can be used to transport data learned in an environment to a structurally-related but statistically different one. Thus, this technique would naturally give robustness to distribution/concept shifts, if the causal mechanisms remain unchanged.
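To make the SCM machinery concrete, here is a minimal sketch of a structural causal model with explicit exogenous noise and a do-intervention. It is a toy weather example for illustration, not a model of human preferences:

```python
import random

def scm_sample(do_sprinkler=None):
    # Exogenous variables carry all of the model's uncertainty.
    u_rain, u_sprinkler, u_wet = (random.random() for _ in range(3))
    # Structural equations: each variable is a function of its causal
    # parents plus its own exogenous noise.
    rain = u_rain < 0.3
    if do_sprinkler is None:
        sprinkler = (not rain) and u_sprinkler < 0.5
    else:
        sprinkler = do_sprinkler   # do() severs the link to the parents
    wet = rain or sprinkler or u_wet < 0.05
    return wet

obs = [scm_sample() for _ in range(10_000)]
intv = [scm_sample(do_sprinkler=False) for _ in range(10_000)]
print("P(wet), observational:    ", sum(obs) / len(obs))    # ~0.67
print("P(wet) under do(no spray):", sum(intv) / len(intv))  # ~0.34
```

Because the mechanisms are separate equations, an intervention rewrites one of them and leaves the rest intact. That modularity is the property this proposal hopes will carry over to distribution shifts: if only one mechanism changes, only one equation needs relearning.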
CRL techniques can also leverage a c |
37b513af-6bee-4d7d-9332-b980111b45e4 | trentmkelly/LessWrong-43k | LessWrong | Among the A.I. Doomsayers - The New Yorker
The New Yorker just released a feature article on AI Safety, rationalism, and effective altruism, and… it's surprisingly good? It seems honest, balanced, and even funny. It doesn't take a position about AI Safety, but IMO it paints it in a good way.
Paul Crowley (mentioned in the article) did have some criticism, and put up a response in a post. Most of his concerns seem to be with how the article takes a focus on the people interviewed, rather than the ideas being discussed. |
fe9f31f8-0c31-4f57-95ab-0bfb128a93ad | trentmkelly/LessWrong-43k | LessWrong | Tell me what you think of me
Time and time again, honest feedback has improved my life. I have sought it out on many specific occasions, but now I have a static, anonymous way for people to give me feedback — for any reason, at any time.
You can give me feedback on my personality, my conduct, or the organization for which I work by following this link right here.
Thank you. I apologize for making a discussion post that is all about me.
I operate by Crocker's Rules. |
0e723ff2-d31c-4d54-983e-06ce53bcbd88 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | What projects and efforts are there to promote AI safety research?
As far as I can tell, the recognition of the existential danger of AI is at an all time. It is going mainstream! Unfortunately, most of the discourse seems very pessimistic. The bulk of the messaging seems to imply that the only thing we can do is wait for nuclear weapon style regulation and bide our time until death.
What projects and efforts are there to promote and recruit for AI existential safety research to those that are just learning about the existential danger? Are there any that unskilled volunteers could contribute to? |
bf8b1ca0-3270-4f98-8ee9-1c088d6c4767 | trentmkelly/LessWrong-43k | LessWrong | Why does gradient descent always work on neural networks?
My amateur understanding of neural networks is that they almost always train using stochastic gradient descent. The quality of a neural network comes from its size, shape, and training data, but not from the training function, which is always simple gradient descent.
This is a bit unintuitive to me because gradient descent can only find the minimum of a function if that function is convex, and I wouldn't expect typical ML problems (e.g., "find the dog in this picture" or "continue this writing prompt") to have convex cost functions. So why does gradient descent always work?
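Here is the kind of toy picture I have in mind: a wavy one-dimensional loss (not a real network), where gradient descent from random starts reliably lands somewhere decent even though the function is non-convex:

```python
import random
from math import sin

def loss(x):
    # Deliberately non-convex: a parabola with a large sine wiggle on top.
    return x * x + 10 * sin(x)

def grad(x, eps=1e-5):
    # Numerical derivative, to keep the example self-contained.
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

final_losses = []
for _ in range(20):
    x = random.uniform(-10, 10)      # random initialization
    for _ in range(500):
        x -= 0.01 * grad(x)          # plain gradient descent
    final_losses.append(loss(x))

print(f"best: {min(final_losses):.2f}, worst: {max(final_losses):.2f}")
# Runs end near loss -7.9 or +8.3 (two different local minima), both far
# below a typical random point's loss of 30+.
```

Different starts end in different minima, and nothing guarantees the global optimum, but every run beats a typical random point by a wide margin, which is the "good enough" behavior I try to articulate below.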
One explanation I can think of: it doesn't work if your goal is to find the optimal answer, but we hardly ever want to know the optimal answer, we just want to know a good-enough answer. For example, if a NN is trained to play Go, it doesn't have to find the best move, it just has to find a winning move. Not sure if this explanation makes sense though. |
7cf6b795-795d-4b79-ad97-d86e5b64cc94 | trentmkelly/LessWrong-43k | LessWrong | i love eating trash
It was my recent pleasure to come across the best thing I have yet seen on LessWrong. No, it's not some eye-opening post on a niche AI topic full of mathematical notation, nor another monument to Quinean Naturalism - it was the moderation log, found here: https://www.lesswrong.com/moderation. I cannot recommend enough that you go check it out. Like sorting YouTube results by 'new', it offers you platform perspective like nothing else.
Most wonderfully, this includes the newly created 'Rejected Posts' section, where untold marvels are to be found. This is truly the trash-heap of LessWrong, and I am all for consuming. I love eating trash.
Why, just look at this groundbreaking piece on AI: Exploring Last-Resort Measures for AI Alignment: Humanity's Extinction Switch. It is simply bursting with ideas that research, until now, has neglected to consider!
Here's a really engaging post: Hands of gods. A totally novel movie reference to start, and a great reminder from our tireless admins that, whilst writing a post designed for LessWrong and then copy-paste cross-posting to other platforms is perfectly good, the inverse is simply impermissible. After all, where would we be without such tools of cultural aggression!
Or what about this? This one's my personal favourite - a truly heartwarming tribute to the strongest of positive feelings we can have: I love my wife
So, go, go! Consume to your heart's content! Feast upon this heap of wonder! |
dbf06b83-8c12-4aa9-8621-51d6e283780a | trentmkelly/LessWrong-43k | LessWrong | Can we expect more value from AI alignment than from an ASI with the goal of running alternate trajectories of our universe?
If we have an ASI whose goal is to efficiently model and run alternate trajectories of our universe (including the future) (stochastically and a high number of times), then the value of our universe would equal its expected value, right?
* Can we reasonably expect to achieve more than that from AI alignment?
* If not, should we simply aim for this? (Programming an ASI to model and run alternate universes may take advantage of the fact that the laws of physics are constant)
* What are the problems in this reasoning?
(Edited after comment by steve2152 and comment by naimenz)
Some details added for clarification :
To be more precise, what I have in mind is that the ASI is an agent whose goal is:
* to model the sentient part of the universe finely enough to produce sentience in an instance of its model (and it will also need to model the necessary non-sentient "dependencies")
* and to instantiate this model N times. For example, playing them from 1000 A.D. to the time where no sentience remains in a given instance of modeled universe. (all of this efficiently)
An instance of the modeled universe would not be our present universe. It would be "another seed", starting before the ASI exists, and thus it would not need to model itself but only possible ("new") ASIs produced inside the instances.
In the scenario I had in mind, the ASI would fill our universe with computing machines to produce as many instances as possible. (We would not use it and thus we will not need an interface with the ASI)
Possible problems:
* This ASI may produce a lot of alternate universe instances where an ASI (produced inside the instance) will also start to run instances of a modeled universe. This would probably be pretty bad since, in our universe, the history before ASI is net-negative.
* This doesn't solve the problem of aligning the ASI with the goal described above. This only replaces "aligning AGI with human values" with "aligning AGI to run instances of our universe". Yet, |
be29a851-a04a-4b35-ae63-a7bab765afbc | trentmkelly/LessWrong-43k | LessWrong | Incredibow
Back in 2011 I got sick of breaking the hairs on violin bows and ordered an Incredibow. The hair is polymer filament, and it's very strong. I ordered a 29" Basic Omnibow, Featherweight, and it's been just what I wanted. I think I've broken something like three hairs ever, despite some rough chopping.
Thirteen years, and it could be new yesterday.
While they don't play identically to traditional bows and it takes a bit of adjustment, if people had been playing with synthetic hair historically I really doubt horsehair would take off today.
Comment via: facebook, lesswrong, mastodon, bluesky |
4601af27-baaf-4961-858f-886e4e40d5e9 | trentmkelly/LessWrong-43k | LessWrong | Documents Are Dead. Long Live the Conversational Proxy.
Sometime in early 2024, I stopped reading books. Not because I don’t love the process—but because I realized the alpha/hour ratio was far too low.
Reading texts (especially documentation) has become increasingly inefficient. It’s not about laziness; it’s about leverage. A static wall of text demands effort for diminishing insight. But a PDF piped into an LLM? That’s different. That’s programmable knowledge. I can chat with it. Extract patterns. Mine underdiscussed takeaways. Highlight blind spots. Summarize arguments. It’s like outsourcing the heavy intellectual lifting (Subjective).
This flipped a mental switch for me:
> Documents today aren't messages to be read—they're messengers to be conversed with.
If I send someone a PDF, I’m not expecting them to read it cover to cover. I’m expecting them to throw it into their favorite assistant and have a dialogue with my intent. A document, in this sense, is just a vessel—a digital PA that carries my signal forward.
So what’s the next step?
I don’t have a solution yet, but I think it’s obvious:
* LLMs will eventually have cloud-integrated memory,
* Each person will have their own persistent document graph,
* And sharing won’t look like “sending a file”—it’ll look like:
> “Send Bharath’s repo docs to your Claude instance.”
> “Put this paper in your weekend calendar as a chat object.”
> “Ask my assistant what key takeaways your agent missed.”
Documentation will exist less as content and more as contextual proxies—something to interrogate, not ingest.
We're not building static libraries. We’re building living conversations.
Let that sink in.
---------------------------------------- |
766ef7eb-7c80-4084-99f2-e55d5ac2f6eb | trentmkelly/LessWrong-43k | LessWrong | AI for medical care for hard-to-treat diseases?
With LLM-based AI passing benchmarks that would challenge people with a Ph.D in relevant fields, I'm left wondering what they can do for real-world problems for which nobody knows the correct answer, such as how to treat potentially fatal medical conditions with no known cure.
Are we at the point where AI can do better than curated medical references intended for doctors, such as UpToDate, at coming up with treatment plans for people with hard-to-treat and frequently fatal medical conditions, like Jake Seliger who died of squamous cell carcinoma, DF who delayed his death from fatal familial insomnia, or my late wife, who suffered from calciphylaxis before her death? |
6c3180ae-87f1-4a40-93d4-a5eb4392b3f8 | trentmkelly/LessWrong-43k | LessWrong | Party for biomedical rejuvenation research: European parliament elections
Reasons to vote for this party in the upcoming elections:
1. Only 0.67% is needed to gain 1 Europarliament seat - Volt gained this seat in 2019
2. 1 seat gives you ~10000 euros salary per month and sets a huge precedent for a transhumanist politician in the parliament
3. The party has already gained 0.5% in the 2016 Berlin elections
The problem is that this post is not a part of any unified campaign - I am actually living in a country with no free elections (I will still vote) and just wondering why there is no such politician in the Europarliament |
03fe0dfd-f935-45db-a811-fa9db8553850 | trentmkelly/LessWrong-43k | LessWrong | ELK shaving
> Paul Christiano's incredibly complicated schemes have no chance of working in real life before DeepMind destroys the world.
> Eliezer in Death With Dignity
Eliciting Latent Knowledge reads to me as an incredibly narrow slice in reasoning space, a hyperbolically branching philosophical rabbit hole of caveats.
For example, this paragraph on page 7 translates as:
> If you can ask whether the AI is telling the truth and it answers "no" then you know it is lying.
But how is trusting this answer different from just trusting AI to not deceive you as a whole?
----------------------------------------
A hundred pages of an elaborate system with competing actors playing games of causal diagrams trying to solve for the worst case is exciting ✨ precisely because it allows one to "make progress" and have incredibly nuanced discussions (ELK shaving [1]) while failing to address the core AI safety concern:
if AI is sufficiently smart, it can do absolutely whatever
– fundamentally ignoring whatever clever constraints one might come up with.
I am confused why people might be "very optimistic" about ELK; I hope I am wrong.
1. ^“Yak shaving” means performing a seemingly endless series of small tasks that must be completed before the next step in the project can move forward. Elks are kinda like yaks |
019094d8-6b20-433f-975e-e58ebb695e2f | trentmkelly/LessWrong-43k | LessWrong | MIRI/OP exchange about decision theory
Open Philanthropy's Joe Carlsmith and Nick Beckstead had a short conversation about decision theory a few weeks ago with MIRI's Abram Demski and Scott Garrabrant (and me) and LW's Ben Pace. I'm copying it here because I thought others might find it useful.
Terminology notes:
* CDT is causal decision theory, the dominant theory among working decision theorists. CDT says to choose the action with the best causal consequences.
* EDT is evidential decision theory, CDT's traditional rival. EDT says to choose the action such that things go best conditional on your choosing that action. (The sketch after these notes contrasts CDT and EDT on a toy Newcomb's problem.)
* TDT is timeless decision theory, a theory proposed by Eliezer Yudkowsky in 2010. TDT was superseded by FDT/UDT because TDT fails on dilemmas like counterfactual mugging, refusing to pay the mugger.
* UDT is updateless decision theory, a theory proposed by Wei Dai in 2009. UDT in effect asks what action "you would have pre-committed to without the benefit of any observations you have made about the universe", and chooses that action.
* FDT is functional decision theory, an umbrella term introduced by Yudkowsky and Nate Soares in 2017 to refer to UDT-ish approaches to decision theory.
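For readers new to the jargon, here is a toy expected-value comparison of CDT and EDT on Newcomb's problem. The numbers are illustrative and the code is only a sketch, but the verdicts it prints are the standard ones:

```python
ACCURACY = 0.99             # the predictor's reliability (illustrative)
BIG, SMALL = 1_000_000, 1_000

def edt_value(action):
    # EDT conditions on the action: choosing it is evidence about what
    # was predicted, so P(big box is full | one-box) = ACCURACY.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return (SMALL if action == "two-box" else 0) + p_full * BIG

def cdt_value(action, p_full):
    # CDT holds the (causally fixed) box contents at their prior: my
    # choice now cannot change what is already inside the box.
    return (SMALL if action == "two-box" else 0) + p_full * BIG

for a in ("one-box", "two-box"):
    print(f"EDT {a}: {edt_value(a):>9,.0f}   CDT {a}: {cdt_value(a, 0.5):>9,.0f}")
# EDT prefers one-boxing (990,000 vs 11,000); CDT prefers two-boxing for
# any fixed p_full, since two-boxing dominates once contents are fixed.
```

FDT/UDT-style theories also one-box, but for a different reason: they treat the prediction as depending on the agent's decision procedure rather than on the physical act.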
----------------------------------------
Carlsmith: Anyone have an example of a case where FDT and updateless EDT give different verdicts?
Beckstead: Is smoking lesion an example?
I haven't thought about how updateless EDT handles that differently from EDT.
Demski: FDT is supposed to be an overarching framework for decision theories "in the MIRI style", whereas updateless EDT is a specific decision theory.
In particular, FDT may or may not be updateless.
Updateful FDT is basically TDT.
Now, I generally claim it's harder to find examples where EDT differs from causal counterfactuals than people realize; eg, EDT and CDT do the same thing on smoking lesion. So be aware that you're not going to get the "standard view" from me.
However, TDT gets some problems wrong which UDT gets right, eg, count |
3931cd7b-99cf-4a93-aef9-9c528655f19c | StampyAI/alignment-research-dataset/arbital | Arbital | Epistemology
"Epistemology" is the subject matter that deals with truth on a meta level: e.g., questions such as "What is truth?" and "What methods of reasoning are most likely to lead to true beliefs?" Epistemology can be considered as a child subject of [https://arbital.com/p/9l](https://arbital.com/p/9l). Many [Arbital Discussion Practices](https://arbital.com/p/9n) have their roots in a thesis about epistemology - about what forms of reasoning are most likely to lead to true beliefs - and you may have ended up looking at this subject after trying to track down how a Discussion Practice is justified. |
74d12b0c-c53c-487c-97c9-83390f9da9eb | trentmkelly/LessWrong-43k | LessWrong | On Contact, Part 1
Context: for fun (and profit?)
Basic Contact
Contact is a lightweight many-versus-one word guessing game. I was first introduced to it on a long bus ride several years ago, and since then it’s become one of my favorite games to play casually with friends. There are a few blog posts out there about contact, but I think it’s incredibly underrated.
The rules of contact are simple, but I often tell people it’s easier to learn by watching others play rather than by a verbal explanation of the rules. Nevertheless, here is a verbal explanation of the rules.
* There is one player who is “the defender”. All of the other players are “attackers”, and work together to defeat the defender as quickly as possible. The defender cannot win, only stave off defeat.
* The defender chooses a secret word at the start of the game. They reveal the first letter of the secret word to the attackers. The attackers win when they learn the defender’s secret word.
* The attackers get information about the secret word by making “contact” with each other. To make contact, a pair of attackers must simultaneously count down from 3 and then say some word that starts with the prefix of the secret word that has been revealed so far. When attackers make contact, the defender must reveal the next letter of the secret word.
* To prevent contact, the defender can try to guess the word that a pair of attackers are trying to make contact on before they say it. If the defender guesses the contact word, that word is blocked. Blocked words cannot be used to make contact.
* To prevent blocking, attackers give each other vague clues about what word they have in mind. To indicate to an attacker who has given a clue that you think you know what word they have in mind and would like to make contact, say “contact!” and start the countdown.
* The defender cannot block contact if the word the attackers are trying to make contact on is the secret word. If the attackers make contact on the secret word, they win.
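The reveal-and-block bookkeeping is simple enough to sketch as a toy state tracker (the cluing and the guessing are, of course, the human part):

```python
class ContactGame:
    def __init__(self, secret):
        self.secret = secret.lower()
        self.revealed = 1        # the first letter is public from the start
        self.blocked = set()     # words the defender has guessed

    def prefix(self):
        return self.secret[:self.revealed]

    def defender_blocks(self, word):
        self.blocked.add(word.lower())

    def attackers_contact(self, word):
        word = word.lower()
        if word == self.secret:
            return "attackers win"   # the secret word can never be blocked
        if not word.startswith(self.prefix()) or word in self.blocked:
            return "no contact"
        self.revealed += 1           # successful contact reveals a letter
        return f"letter revealed: {self.prefix()}"

game = ContactGame("contact")
game.defender_blocks("cobra")
print(game.attackers_contact("cobalt"))   # letter revealed: co
```

Everything strategic, from the vague clues to the defender's blocking guesses, happens outside this bookkeeping, which is what makes the game fun. |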
e43f15b7-533b-4085-b60e-69747131ac39 | trentmkelly/LessWrong-43k | LessWrong | [AN #82]: How OpenAI Five distributed their training computation
Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email.
Audio version here (may not be up yet).
Highlights
Dota 2 with Large Scale Deep Reinforcement Learning (OpenAI et al) (summarized by Nicholas): In April, OpenAI Five (AN #54) defeated the world champion Dota 2 team, OG. This paper describes its training process. OpenAI et al. hand-engineered the reward function as well as some features, actions, and parts of the policy. The rest of the policy was trained using PPO with an LSTM architecture at a massive scale. They trained this in a distributed fashion as follows (a toy sketch of the data flow appears after the list):
- The Controller receives and distributes the updated parameters.
- The Rollout Worker CPUs simulate the game, send observations to the Forward Pass GPUs and publish samples to the Experience Buffer.
- The Forward Pass GPUs determine the actions to use and send them to the Rollout Workers.
- The Optimizer GPUs sample experience from the Experience Buffer, calculate gradient updates, and then publish updated parameters to the Controller.
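A toy sketch of that data flow using threads and queues follows. This shows shapes only: the real system spanned thousands of machines, and the Forward Pass GPUs are folded into the rollout workers here:

```python
import queue, threading

experience_buffer = queue.Queue()
params = {"version": 0}              # stand-in for the network weights
lock = threading.Lock()

def rollout_worker(steps=100):
    # Simulates the game and publishes samples to the experience buffer.
    for t in range(steps):
        with lock:
            v = params["version"]    # act using the latest parameters
        experience_buffer.put({"obs": t, "params_version": v})

def optimizer(updates=10, batch_size=10):
    # Samples experience, "computes" a gradient update, and publishes new
    # parameters (the Controller's distribution role is folded in here).
    for _ in range(updates):
        batch = [experience_buffer.get() for _ in range(batch_size)]
        # batch would feed the real gradient computation
        with lock:
            params["version"] += 1   # stand-in for a gradient step

workers = [threading.Thread(target=rollout_worker) for _ in range(4)]
opt = threading.Thread(target=optimizer)
for w in workers:
    w.start()
opt.start()
for w in workers:
    w.join()
opt.join()
print("final parameter version:", params["version"])   # 10
```

The real separation of rollout CPUs from forward-pass GPUs exists so that expensive policy inference can be batched across many simultaneous games.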
The model trained over 296 days. In that time, OpenAI needed to adapt it to changes in the code and game mechanics. This was done via model “surgery”, in which they would try to initialize a new model to maintain the same input-output mapping as the old one. When this was not possible, they gradually increased the proportion of games played with the new version over time.
Nicholas's opinion: I feel similarly to my opinion on AlphaStar (AN #73) here. The result is definitely impressive and a major step up in complexity from shorter, discrete games like chess or go. However, I don’t see how the approach of just running PPO at a large scale brings us closer to AGI because we can’t run massively parallel simulations of real world tasks. Even for tasks that can be simulated, |
c0e3dba0-a813-4397-a018-0e3a6091d95a | trentmkelly/LessWrong-43k | LessWrong | 2. AIs as Economic Agents
Part 2 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
World Government Incoming
Should AIs be allowed to own money or property? In A Sense of Fairness: Deconfusing Ethics I discussed how to sensibly select an ethical system for your society, and why it's a bad idea (or more exactly, a poor design concept in social engineering) for aligned AIs to have a vote, moral worth, or rights (with one unusual exception). What about money or property: the ability to have resources allocated as you wish? Should an AI be allowed to own money or property itself (as opposed to merely acting as a fiduciary agent on behalf of a human owner, administering money or property on behalf of the human owner, with a fiduciary responsibility to do so in a way the owner would approve of or in their best interests, and within certain legal or moral limitations to the rest of society)?
Well, suppose AIs were allowed to own money: what would happen if you tipped your CoffeeFetcher-1000 robot? Money is economic power, fungible into resources and services. The CoffeeFetcher-1000 is aligned, and all it wants is to do the most good for humanity. So that's what it would spend its money on. So it might just save up and pay for a free coffee for someone who really needed it. (Perhaps a homeless guy who it often passes, who keeps yawning.) But it's part of a value learning AI society, so it also knows that its model of human values is not entirely accurate, and what it really wants optimized is the truth of human values, not its flawed copy. So more likely, it will donate its money to a charity run by a committee of the smartest ASIs most well-informed on human values. Who will then spend it on whatever they think will do the most good for humans. Which (as long as they really are well-aligned and superhuman) likely will work out pretty well.
We already have systems that are supposed to gather money from people and then spend it on trying to do the most goo |
f1723ba0-5e55-45f6-8e8c-258fb1c14594 | trentmkelly/LessWrong-43k | LessWrong | Bottleneck Examples
This post follows my Chains, Bottlenecks and Optimization. The goal is to give hypothetical examples of bottlenecks and non-bottlenecks (things with excess capacity), and to answer johnswentworth, who helpfully commented:
> I really like what this post is trying to do. The idea is a valuable one. But this explanation could use some work - not just because inferential distances are large, but because the presentation itself is too abstract to clearly communicate the intended point. In particular, I'd strongly recommend walking through at least 2-3 concrete examples of bottlenecks in ideas.
I’ll give a variety of examples starting with simpler ones. If you want a different type, let me know.
Note: The term “bottleneck” has synonyms like “constraint” or “limiting factor”. I’ll often use “key factor”. This contrasts with a non-bottleneck, or secondary factor, which is something with excess capacity (above a margin of error), so improving it isn’t very useful. Doing better at a bottleneck makes a significant difference to doing better at your goal; doing better at a non-bottleneck doesn’t. My basic point is that we should focus our attention on key factors.
Oven
In The Goal by Eli Goldratt, the main example is a factory. One of the bottlenecks is the heat treat oven: the rate of baking parts in the oven was limiting the overall output of the factory.
A non-bottleneck example is quality assurance. It was possible to check parts for defects significantly faster than they came out of the oven. So hiring more QA people wouldn’t result in more finished products.
One of the main points of Goldratt’s book is that trying to have a balanced production line (no excess capacity at any workstation) is a bad idea.
Software
Focusing on key factors or bottlenecks is well known in software: To speed up a program, measure where most of the run time is being spent, then speed up that part(s). Don’t just optimize any function. Most functions have excess capacity (they are more tha |
754abc6d-641e-42a2-a844-343e6e4cb458 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Enter PaLM 2 (New Bard): Full Breakdown - 92 Pages Read and Gemini Before GPT 5? Google I/O
less than 24 hours ago Google released
the Palm 2 technical report I have read
all 92 Pages watch the Palm 2
presentation read the release notes and
have already tested the model in a dozen
ways but before getting into it all my
four main takeaways are these first Palm
2 is competitive with gpt4 and while it
is probably less smart overall it's
better in certain ways and that
surprised me second Google is saying
very little about the data it used to
train the model or about parameters or
about compute although we can make
educated guesses on each third Gemini
was announced to be in training and will
likely rival GPT 5 while arriving
earlier than GPT 5. as you probably know
Sam Altman said that GPT 5 isn't in
training and won't be for a long time
fourth while dedicating 20 pages to bias
toxicity and misgendering there wasn't a
single page on AI impacts more broadly
Google boasted of giving Gemini planning
abilities in a move that surprised as I am
to say it makes open AI look like
Paragons of responsibility so a lot to
get to but let's look at the first
reason that Palm 2 is different from a
gpt4 on page 3 they say we designed a
more multilingual and diverse
pre-training mixture extending across
hundreds of languages and domains like
programming mathematics Etc so because
the text that they train Palm 2 on is
different to the text that openai train
gpt4 on it means that those models have
different abilities and I would say Palm
2 is better at translation and
Linguistics and in certain other areas
which I'll get to shortly if that's data
what about parameter count well Google
never actually say they only use words
like it's significantly smaller than the
largest Palm model which was 540 billion
parameters so sometimes they say
significantly other times dramatically
despite this it significantly
outperforms Palm on a variety of tasks
so all the references you may have seen
to imminent 100 trillion parameter
models were bogus skipping ahead to page
91 out of 92 in the model summary they
say further details of model size and
architecture are withheld from external
publication but earlier on they did seem
to want to give hints about the
parameter count inside Palm 2 which
openai never did here they present the
optimal number of parameters given a
certain amount of compute flops scaling
this up to the estimated number of flops
used to train Palm 2. that would give an
optimal parameter count of between 100
and 200 billion that is a comparable
parameter count to gpt3 while getting
competitive performance with gpt4 Bard
is apparently now powered by Palm 2 and
the inference speed is about 10 times
faster than gpt4 for the exact same
prompt and I know there are other
factors that influence inference speed
but that would broadly fit with an order
of magnitude fewer parameters this has
other implications of course and they
say that Palm 2 is dramatically smaller
cheaper and faster to serve not only
that Palm 2 itself comes in different
sizes as Sundar pichai said Palm 2
models deliver excellent foundational
capabilities across a wide range of
sizes
we've affectionately named them gecko
otter bison and unicorn
gecko is so lightweight that it can work
on mobile devices
fast enough for great interactive
applications on device even when offline
I would expect gecko to soon be inside
the Google pixel phones going back to
data Google cryptically said that the
pre-training Corpus is composed of a
diverse set of sources documents books
code mathematics and conversational data
I've done a whole video on the data
issues that these companies face but
suffice to say they're not saying
anything about where the data comes from
next they don't go into detail but they
do say that Palm 2 was trained to
increase the context length of a model
significantly beyond that of palm as of
today you can input around 10 000
characters into Bard but they end this
paragraph with something a bit more
interesting they say without
demonstrating our results show that it
is possible to increase the context
length of the model without hurting its
performance on generic benchmarks the
bit about not hurting performance is
interesting because in this experiment
published a few weeks ago about
extending the input size in tokens up to
around 2 million tokens the performance
did drop off if Google have found a way
to increase the input size in tokens and
not affect performance that would be a
breakthrough on multilingual benchmarks
notice how the performance of palm 2 in
English is not dramatically better than
in other languages in fact in many other
languages it does better than in English
this is very different to gpt4 which was
noticeably better in English than in all
other languages as Google hinted earlier
this is likely due to the multilingual
Text data that Google trained Palm 2
with in fact on page 17 Google admit
that the performance of palm 2 exceeds
Google Translate for certain languages
and they show on page 4 that it can pass
the Mastery exams across a range of
languages like Chinese Japanese Italian
French Spanish German Etc look at the
difference between Palm 2 and palm in
red now before you rush off and try Bard
in all of those languages I tried that
and apparently you can only use Bard at
the moment in the following languages
English US English what a Pity and
Japanese and Korean but I was able to
test Bard in Korean on a question
translated via Google Translate from the
mmlu dataset it got the question right
in each of its drafts in contrast Gypsy
4 not only got the question wrong in
Korean when I originally tested it for
my smart GPT video it got the question
wrong in English in case any of my
regular viewers are wondering I am
working very hard on Smart GPT to
understand what it's capable of and
getting it benchmarked officially and
thank you so much for all the kind
offers of help in that regard I must
I must admit, it was very interesting to see on page 14 a direct comparison between PaLM 2 and GPT-4, and Google do admit that for the PaLM 2 results they used chain-of-thought prompting and self-consistency. Reading the self-consistency paper did remind me quite a lot, actually, of SmartGPT, where it picks the most consistent answer from multiple outputs. So I do wonder if this comparison is totally fair, if PaLM 2 used this method and GPT-4 didn't.
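For context, self-consistency boils down to sampling several reasoning paths and majority-voting on the final answer. A minimal sketch (the model call is a hypothetical placeholder of mine, not a real API):

```python
# Sketch of self-consistency: sample several chain-of-thought completions
# and take the most frequent final answer.

from collections import Counter
import random

def sample_answer(question: str) -> str:
    # Placeholder: in practice this would be one sampled (temperature > 0)
    # chain-of-thought completion, reduced to its final answer.
    return random.choice(["42", "42", "41"])

def self_consistent_answer(question: str, n_samples: int = 10) -> str:
    """Return the most frequent final answer across n sampled completions."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistent_answer("What is 6 * 7?"))
```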
I'll have to talk about these benchmarks more in another video, otherwise this one would be too long. A quick hint: Winogrande is about identifying what the pronoun in a sentence refers to.
Google also weighed into the emergent-abilities debate, saying that PaLM 2 does indeed demonstrate new emergent abilities. They say it does so in things like multi-step arithmetic problems, temporal sequences, and hierarchical reasoning. Of course, I'm going to test all of those, and have begun to do so already, and in my early experiments I'm getting quite an interesting result: PaLM 2 gets a lot of questions wrong that GPT-4 gets right, but it can also get questions right that GPT-4 gets wrong. And I must admit, it's really weird to see PaLM 2 getting really advanced college-level math questions right that GPT-4 gets wrong, and yet also, when I ask it a basic question about prime numbers, getting it kind of hilariously wrong. Honestly, I'm not certain what's going on there, but I do have my suspicions. Remember, though, that recent papers have claimed that emergent abilities are a mirage; so Google begs to differ.
When Google put PaLM 2 up against GPT-4 on high-school mathematics problems, it did outperform GPT-4, but again, it was using an advanced prompting strategy, not a hundred percent different from SmartGPT, so I wonder if the comparison is quite fair. What about coding? Well, again, it's really hard to find a direct comparison that's fair between the two models. Overall, I would guess that the specialized coding model of PaLM 2, what they call PaLM 2-S*, is worse than GPT-4. It says its pass@1 accuracy, as in passing first time, is 37.6%. Remember the Sparks of AGI paper? Well, that gave GPT-4 as having an 82% zero-shot pass@1 accuracy. However, as I talked about in the Sparks of AGI video, the paper admits that it could be that GPT-4 has seen and memorized some or all of HumanEval.
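Quick aside on the metric: pass@1 is the k = 1 case of the pass@k estimator from the HumanEval paper (Chen et al., 2021), which you can compute like this (the sample counts below are illustrative only):

```python
# Unbiased pass@k estimator: given n samples per problem with c correct,
# pass@k = 1 - C(n-c, k) / C(n, k), averaged over problems.

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k drawn samples passes the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative numbers: 20 samples on one problem, 7 of which pass
print(f"pass@1 = {pass_at_k(20, 7, 1):.3f}")  # 0.350, i.e. fraction correct
```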
There is one thing I will give Google credit on, which is that their code now sometimes references where it came from. Here is a brief extract from the Google keynote presentation: "How would I use Python to generate the Scholar's Mate move in chess? OK, here Bard created a script to recreate this chess move in Python, and notice how it also formatted the code nicely, making it easy to read. We've also heard great feedback from developers about how Bard provides code citations, and starting next week, you'll notice something right here: we're making code citations even more precise. If Bard brings in a block of code, just click this annotation, and Bard will underline the block and link to the source."
As always, it seems the appendix contained more interesting information, sometimes, than the main body of the technical report. For example, we get a direct and fair comparison between GPT-4 and PaLM 2, or I should say Flan-PaLM 2: that is, the instruction-fine-tuned version of PaLM 2, essentially the version that's been fine-tuned to get better at following a question-and-answer format. Anyway, the original PaLM 2 scored 78.3 and Flan-PaLM 2 scored 81.2. That's below the 86.4% of GPT-4, and that's why my broad conclusion is that GPT-4 is a bit smarter than PaLM 2. But, as I'll be showing over the coming days and weeks, there are genuinely quite a few areas in which PaLM 2 is better than GPT-4.
What about BIG-bench, which was designed to be particularly tough for language models? I talked a lot about this in my earliest videos. Well, the graph is going to look pretty weird, because PaLM 2 has improved upon PaLM while reducing the number of parameters, so the graph kind of doubles back on itself, back up here, up to around 69 according to the technical report. I would say this is quite a major moment in human history: there is now virtually no language task that the average human can do better than PaLM 2. Of course, expert humans can do better in individual domains, but the average human is now worse in virtually every domain of language. Here you can see the confirmation of the BIG-bench Hard results for Flan-PaLM 2: 69.1. Interestingly, in the original chart, PaLM 2 is even claimed to have higher performance than that, at 78.1. If you remember, the reason we can't compare that to GPT-4 is that in the technical report for GPT-4 they admit that during their contamination check, "we discovered that portions of BIG-bench were inadvertently mixed into the training set, and we excluded it from our reported results".
Before we get to Gemini: Google show off in the latter half of the technical report with examples of linguistic ability, like writing paragraphs in Tajik and then translating them into Persian. They go on to show examples in Tamil, and they are really making a big point of showing off its multilingual capabilities at this point. And, I'm going to admit, this is my personal opinion: Google then strays into dozens of pages on bias, toxicity, and gender. Interestingly, some of the people paid to assess these risks were paid only 1.5 cents per judgment. These things do need to be addressed, of course, but it was somewhat shocking to me to see 20 pages of that and not a single page on the broader AI impacts. As many of you may know, I have criticized OpenAI plenty of times on this channel, but compare their technical report, which goes into far more detail about what we need to monitor. The closest Google got was showing how their universal translator could be used for deepfakes: "Universal Translator is an experimental AI video dubbing service that helps experts translate a speaker's voice while also matching their lip movements. Let me show you how it works with an online college course created in partnership with Arizona State University. 'What many college students don't realize is that knowing when to ask for help, and then following through and using helpful resources, is actually a hallmark of becoming a productive adult.'"
It just seems a massive black hole, when one of their recent former employees, Geoffrey Hinton, had this to say this week on CNN: "You've spoken out saying that AI could manipulate, or possibly figure out a way to kill, humans. How could it kill humans?" "If it gets to be much smarter than us, it'll be very good at manipulation, because it will have learned that from us, and there are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it'll figure out ways of getting round the restrictions we put on it; it'll figure out ways of manipulating people to do what it wants. It's not clear to me that we can solve this problem. I believe we should put a big effort into thinking about ways to solve the problem. I don't have a solution at present; I just want people to be aware that this is a really serious problem, and we need to be thinking about it very hard." This all seems particularly relevant when Google made this announcement about Gemini, their rival to GPT-5:
"All this helps set the stage for the inflection point we are at today. We recently brought these two teams together into a single unit: Google DeepMind. Using the computational resources of Google, they are focused on building more capable systems, safely and responsibly. This includes our next-generation foundation model, Gemini, which is still in training. Gemini was created from the ground up to be multimodal, highly efficient at tool and API integrations, and built to enable future innovations like memory and planning." That ability to plan may ring a bell from the GPT-4 technical report, which said this: "Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans." Remember: Google didn't identify planning as a risk, but as a selling point for Gemini.
Next, Google talked about accelerating their progress, which was again directly mentioned in the GPT-4 technical report. It said: "One concern of particular importance to OpenAI is the risk of racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heighten societal risks associated with AI. We refer to these here as acceleration risk." And make no mistake, Gemini will be very accelerated from PaLM 2. It looks set to use the TPU v5 chip, which was announced back in January of last year, and on page 91 of the PaLM 2 technical report they say that that model used TPU v4. Now, it should be said that PaLM 2 is leading to some impressive medical applications, as I actually first reported on seven weeks ago, without quite realizing it. Here's Med-PaLM 2:
"We believe large language models have the potential to revolutionize healthcare and benefit society. Med-PaLM is a large language model that we've taken and tuned for the medical domain. Medical question answering has been a research grand challenge for several decades, but to date the progress has been kind of slow. Then, over the course of the last three to four months, first with Med-PaLM and Med-PaLM 2, we have kind of broken through that barrier. Unlike previous versions, Med-PaLM 2 was able to score 85 on the USMLE medical licensing exam. This is immensely exciting, because people have been working on medical question answering for over three decades, and finally we are at a stage where we can say with confidence that AI systems can now answer USMLE questions at least as well as experts."
As many of you may know, the CEO of Google, as well as the CEO of Microsoft, Sam Altman, and the CEO of Anthropic, all went to the White House to discuss AI risk and opportunity. But given that the main outcome from that seems to be $140 million to establish seven new AI research institutes, that feels a little slow, given all the acceleration that's occurring. Because, as Google somewhat soberly conclude their report: "We believe that further scaling of both model parameters and dataset size and quality, as well as improvements in the architecture and objective, will continue to yield gains in language understanding and generation." They are not slowing down, and the world hasn't yet caught up. Thank you so much for watching to the end, and have a wonderful day. |
b35926c4-86a3-4f6c-91df-135eef763d59 | trentmkelly/LessWrong-43k | LessWrong | Why Simulator AIs want to be Active Inference AIs
Paper covering some of the same ideas is now available at https://arxiv.org/abs/2311.10215
Prelude: when GPT first hears its own voice
Imagine humans in Plato’s cave, interacting with reality by watching the shadows on the wall. Now imagine a second cave, further away from the real world. GPT trained on text is in the second cave. [1] The only way it can learn about the real world is by listening to the conversations of the humans in the first cave, and predicting the next word.
Now imagine that more and more of the conversations GPT overhears in the first cave mention GPT. In fact, more and more of the conversations are actually written by GPT.
As GPT listens to the echoes of its own words, might it start to notice “wait, that’s me speaking”?
Given that GPT already learns to model a lot about humans and reality from listening to the conversations in the first cave, it seems reasonable to expect that it will also learn to model itself. This post unpacks how this might happen, by translating the Simulators frame into the language of predictive processing, and arguing that there is an emergent control loop between the generative world model inside of GPT and the external world.
Simulators as (predictive processing) generative models
There’s a lot of overlap between the concept of simulators and the concept of generative world models in predictive processing. Actually, in my view, it's hard to find any deep conceptual difference - simulators broadly are generative models. This is also true about another isomorphic frame - predictive models as described by Evan Hubinger.
The predictive processing frame tends to add some understanding of how generative models can be learned by brains and what the results look like in the real world, and the usual central example is the brain. The simulators frame typically adds a connection to GPT-like models, and the usual central example is LLMs.
In terms of the space of maps and the space of systems, we have a situation |
5ac1bf37-05e3-4c4d-b089-3a42980ff3b6 | trentmkelly/LessWrong-43k | LessWrong | AI Fiction - Crystal Society
I'm really excited about a new novel written by Raelifin. I'm halfway through it, and it's great! The novel is from the perspective of an artificial intelligence who is trying to understand how humans think. Along the way there's discussion of biases, thinking techniques, and more. If you're into science fiction and AI, check it out - he made it available for free in all formats here. The blurb is below.
__________________________________________________________
The year is 2039 and the world is much like ours. Technology has grown and developed, as has civilization, but in a world more connected than ever, new threats and challenges have arisen. The wars of the 20th century are gone, but violence is still very much with us. Nowhere is safe. Massive automation has disrupted and improved nearly every industry, putting hundreds of millions of people out of jobs, and denying upward mobility for the vast majority of humans. Even as wealth and technology repair the bodies of the rich and give them a taste of immortality, famine and poverty sweep the world.
Renewed interest in spaceflight in the early 2000s, especially in privately operated ventures, carried humans to the moon and beyond. What good did it do? Nothing. Extraterrestrial bases are nothing but government trophies and hiding places for extremists. They cannot feed the world.
In 2023 first-contact was made with an alien species. Their ship, near to the solar system relatively speaking, flew to Earth over the course of fourteen years. But the aliens did not bring advanced culture and wisdom, nor did they share their technology. They were too strange, not even possessing mouths or normal language. Their computers broadcast warnings of how humans are perverts, while they sit in orbit without any explanation.
It is into this world that our protagonist is born. She is an artificial intelligence: a machine with the capacity to reason. Her goal is to understand and gain the adoration of all humans. She is one |
0152664d-8d2b-45a4-b91f-982860a70232 | StampyAI/alignment-research-dataset/blogs | Blogs | Cases of Discontinuous Technological Progress
We know of ten events which produced a robust discontinuity in progress equivalent to more than one hundred years at previous rates in some interesting metric. We know of 53 other events which produced smaller or less robust discontinuities.
Background
----------
These cases were researched as part of our [discontinuous progress investigation](http://aiimpacts.org/discontinuous-progress-investigation/).
List of cases
-------------
### Events causing large, robust discontinuities
* The Pyramid of Djoser, 2650BC (discontinuity in [structure height trends](http://aiimpacts.org/discontinuity-from-the-burj-khalifa/))
* The SS *Great Eastern*, 1858 (discontinuity in [ship size trends](http://aiimpacts.org/historic-trends-in-ship-size/))
* The first telegraph, 1858 (discontinuity in [speed of sending a 140 character message across the Atlantic Ocean](http://aiimpacts.org/historic-trends-in-transatlantic-message-speed/))
* The second telegraph, 1866 (discontinuity in [speed of sending a 140 character message across the Atlantic Ocean](http://aiimpacts.org/historic-trends-in-transatlantic-message-speed/))
* The Paris Gun, 1918 (discontinuity in [altitude reached by man-made means](http://aiimpacts.org/discontinuity-in-altitude-records/))
* The first non-stop transatlantic flight, in a modified WWI bomber, 1919 (discontinuity in both [speed of passenger travel across the Atlantic Ocean](http://aiimpacts.org/historic-trends-in-transatlantic-passenger-travel/) and [speed of military payload travel across the Atlantic Ocean](http://aiimpacts.org/historic-trends-in-long-range-military-payload-delivery/))
* The George Washington Bridge, 1931 (discontinuity in [longest bridge span](http://aiimpacts.org/historic-trends-in-bridge-span-length/))
* The first nuclear weapons, 1945 (discontinuity in [relative effectiveness of explosives](http://aiimpacts.org/discontinuity-from-nuclear-weapons/))
* The first ICBM, 1958 (discontinuity in [average speed of military payload crossing the Atlantic Ocean](http://aiimpacts.org/historic-trends-in-long-range-military-payload-delivery/))
* YBa2Cu3O7 as a superconductor, 1987 (discontinuity in [warmest temperature of superconduction](http://aiimpacts.org/historic-trends-in-the-maximum-superconducting-temperature/))
### Events causing moderate, robust discontinuities
* HMS Warrior, 1860 (discontinuity in both [Royal Navy ship tonnage and Royal Navy ship displacement](https://aiimpacts.org/historic-trends-in-ship-size/))
* Eiffel Tower, 1889 (discontinuity in [tallest existing freestanding structure height](http://aiimpacts.org/discontinuity-from-the-burj-khalifa/), and in other height trends non-robustly)
* Fairey Delta 2, 1956 (discontinuity in [airspeed](http://aiimpacts.org/historic-trends-in-flight-airspeed-records/))
* Pellets shot into space, 1957, measured after one day of travel (discontinuity in [altitude achieved by man-made means](http://aiimpacts.org/discontinuity-in-altitude-records/)). (Footnote: this was the first of various altitude records where the object continues to gain distance from Earth's surface continuously over a long period. One could choose to treat these in different ways, and get different sizes of discontinuity. Strictly, all altitude increases are continuous, so we are anyway implicitly looking at something like discontinuities in heights reached within some period. We somewhat arbitrarily chose to measure altitudes roughly every year, including one day in for the pellets, the only one where the very start mattered.)
* Burj Khalifa, 2009 (discontinuity in [height of tallest building ever](http://aiimpacts.org/discontinuity-from-the-burj-khalifa/))
### Non-robust discontinuities
[This spreadsheet](https://docs.google.com/spreadsheets/d/1iMIZ57Ka9-ZYednnGeonC-NqwGC7dKiHN9S-TAxfVdQ/edit#gid=1994197408&range=B3:B90) details all discontinuities found, as of April 2020. |
b690d37b-826d-4ed6-948d-e41868fc32d2 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Deep learning - deeper flaws?
In this post I summarise four lines of argument for why we should be skeptical about the potential of deep learning in its current form. I am fairly confident that the next breakthroughs in AI will come from some variety of neural network, but I think several of the objections below are quite a long way from being overcome.
Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution - [Pearl, 2018](https://arxiv.org/abs/1801.04016)
------------------------------------------------------------------------------------------------------------------------------------------
Pearl describes three levels at which you can make inferences: association, intervention, and counterfactual. The first is statistical, identifying correlations - this is the level at which deep learning operates. The intervention level is about changes to the present or future - it answers questions like "What will happen if I do y?" The counterfactual level answers questions like "What would have happened if y had occurred?" Each successive level is strictly more powerful than the previous one: you can't figure out what the effects of an action will be just on the association level, without a causal model, since we treat actions as interventions which override existing causes. Unfortunately, current machine learning systems are largely model-free.
Causal assumptions and conclusions can be encoded in the form of graphical models, where a directed arrow between two nodes represents a causal influence. Constraints on the structure of a graph can be determined by seeing which pairs of variables are independent when controlling for which other variables: sometimes controlling removes dependencies, but sometimes it introduces them. Pearl's main claim is that this sort of model-driven causal analysis is an essential step towards building human-level reasoning capabilities. He identifies several important concepts - such as counterfactuals, confounding, causation, and incomplete or biased data - which his framework is able to reason about, but which current approaches to ML cannot deal with.
Deep Learning: A Critical Appraisal - [Marcus, 2018](https://arxiv.org/abs/1801.00631)
--------------------------------------------------------------------------------------
Marcus identifies ten limitations of current deep learning systems, and argues that the whole field may be about to hit a wall. According to him, deep learning:
1. Is data hungry - it can't learn abstractions through explicit verbal definition like humans can, but instead requires thousands of examples.
2. Is shallow, with limited capacity for transfer. If a task is perturbed even in minor ways, deep learning breaks, demonstrating that it's not really learning the underlying concepts. Adversarial examples showcase this effect.
3. Has no natural way to deal with hierarchical structure. Even recursive neural networks require fixed sentence trees to be precomputed. See my summary of 'Generalisation without systematicity' below.
4. Struggles with open-ended inference, especially based on real-world knowledge.
5. Isn't transparent, and remains essentially a "black box".
6. Is not well-integrated with prior knowledge. We can't encode our understanding of physics into a neural network, for example.
7. Cannot distinguish causation from correlation - see my summary of Pearl's paper above.
8. Presumes a largely stable world, like a game, instead of one like our own in which there are large-scale changes.
9. Is vulnerable to adversarial examples, which can be constructed quite easily.
10. Isn't robust as a long-term engineering solution, especially on novel data.
Some of these problems seem like they can be overcome without novel insights, given enough engineering effort and compute, but others are more fundamental. One interpretation: deep learning can interpolate within the training space, but can't extrapolate to outside the training space, even in ways which seem natural to humans. One of Marcus' examples: when a neural network is trained to learn the identity function on even numbers, it rounds down on odd numbers. In this trivial case we can solve the problem by adding odd training examples or manually adjusting some weights, but in general, when there are many features, both may be prohibitively difficult even if we want to make a simple adjustment. To address this and other problems, Marcus offers three alternatives to deep learning as currently practiced:
1. Unsupervised learning, so that systems can constantly improve - for example by predicting the next time-step and updating afterwards, or else by setting itself challenges and learning from doing them.
2. Further development of symbolic AI. While this has in the past proved brittle, the idea of integrating symbolic representations into neural networks has great promise.
3. Drawing inspiration from humans, in particular from cognitive and developmental psychology, how we develop commonsense knowledge, and our understanding of narrative.
Generalisation without systematicity - [Lake and Baroni, 2018](https://arxiv.org/abs/1711.00350)
------------------------------------------------------------------------------------------------
Lake and Baroni identify that human language and thought feature "systematic compositionality": we are able to combine known components in novel ways to produce arbitrarily many new ideas. To test neural networks on this, they introduce SCAN, a language consisting of commands such as "jump around left twice and walk opposite right thrice". While they found that RNNs were able to generalise well on new strings similar in form to previous strings, performance dropped sharply in other cases. For example, the best result dropped from 99.9% to 20.8% when the test examples were longer than any training example, even though they were constructed using the same compositional rules. Also, when a command such as "jump" had only been seen by itself in training, RNNs were almost entirely incapable of understanding instructions such as "turn right and jump". The overall conclusion: that neural networks can't extract systematic rules from training data, and so can't generalise compositionality anything like how humans can. This is similar to the result of a project I recently carried out, in which I found that capsule networks which had been trained to recognise transformed inputs such as rotated digits and digits with negative colours still couldn't recognise rotated, negated digits: they were simply not learning general rules which could be composed together.
Deep reinforcement learning doesn't work yet - [Irpan, 2018](https://www.alexirpan.com/2018/02/14/rl-hard.html)
---------------------------------------------------------------------------------------------------------------
Irpan runs through a number of reasons to be skeptical about using deep learning for RL problems. For one thing, deep RL is still very data-inefficient: DeepMind's Rainbow DQN takes around 83 hours of gameplay to reach human-level performance on an Atari game. By contrast, humans can pick them up within a minute or two. He also points out that other RL methods often work better than deep RL, particularly model-based ones which can utilise domain-specific knowledge.
Another issue with RL in general is that designing reward functions is difficult. This is a theme in AI safety - specifically when it comes to reward functions which encapsulate human values - but there are plenty of existing examples of reward hacking on much simpler tasks. One important consideration is the tradeoff between shaped and sparse rewards. Sparse rewards only occur at the goal state, and so can be fairly precise, but are usually too difficult to reach directly. Shaped rewards give positive feedback more frequently, but are easier to hack. And even when shaped rewards are designed carefully, RL agents often find themselves in local optima. This is particularly prevalent in multi-agent systems, where each agent can overfit to the behaviour of the other.
Lastly, RL is unstable in a way that supervised learning isn't. Even successful implementations often fail to find a decent solution 20 or 30% of the time, depending on the random seed with which they are initialised. In fact, there are very few real-world success stories featuring RL. Yet achieving superhuman performance on a wide range of tasks is a matter of when, not if, and so I think Amara's law applies: we overestimate the effects RL will have in the short run, but underestimate its effects in the long run. |
1af3f842-2808-4dd1-99d6-e8bcb1123161 | StampyAI/alignment-research-dataset/special_docs | Other | Inconsistency evaluation in pairwise comparison using norm-based distances.
Decisions in Economics and Finance (2020) 43:657–672
https://doi.org/10.1007/s10203-020-00304-9
Inconsistency evaluation in pairwise comparison using norm-based distances

Michele Fedrizzi¹ · Nino Civolani² · Andrew Critch³

¹ Department of Industrial Engineering, University of Trento, Via Sommarive 77, 38123 Trento, Italy (corresponding author: michele.fedrizzi@unitn.it)
² University of Trento, Trento, Italy
³ Center for Human-Compatible AI, University of California, Berkeley, California, USA
Received: 14 December 2019 / Accepted: 11 August 2020 / Published online: 30 August 2020
© The Author(s) 2020
Abstract
This paper studies the properties of an inconsistency index of a pairwise comparison matrix under the assumption that the index is defined as a norm-induced distance from the nearest consistent matrix. Under the additive representation of preferences, it is proved that an inconsistency index defined in this way is a seminorm in the linear space of skew-symmetric matrices, and several relevant properties hold. In particular, this linear space can be partitioned into equivalence classes, where each class is an affine subspace and all the matrices in the same class share a common value of the inconsistency index. The paper extends in a more general framework some results due, respectively, to Crawford and to Barzilai. It is also proved that norm-based inconsistency indices satisfy a set of six characterizing properties previously introduced, as well as an upper bound property for group preference aggregation.

Keywords: Inconsistency index · Pairwise comparison matrix · Norm · Distance

JEL Classification: C44 · D7
1 Introduction
Pairwise comparison over a set of alternatives $X=\{x_1,\dots,x_n\}$ is a well-known and powerful method for preference elicitation in a decision problem. An important characteristic of this method is its capability of dealing with the imprecision of the collected data due to the unavoidable inconsistency of human judgements. Each entry $a_{ij}$ of a pairwise comparison matrix (PCM in the following) $A=(a_{ij})_{n\times n}$ quantifies the degree of preference of alternative $x_i$ over alternative $x_j$. Two widely used representations of preferences are the so-called multiplicative and additive ones. In the multiplicative approach (Saaty 1977), $a_{ij}$ is the relative preference of alternative $x_i$ over alternative $x_j$, and therefore it estimates the ratio between the weight $w_i$ of $x_i$ and the weight $w_j$ of $x_j$: $a_{ij}\approx w_i/w_j$. Conversely, in the additive approach, $a_{ij}$ estimates the difference between the weights of $x_i$ and $x_j$: $a_{ij}\approx w_i-w_j$. Therefore, different assumptions in preference quantification correspond to different meanings of the entries $a_{ij}$. It has been proved that the multiplicative and the additive representations are isomorphic and, therefore, equivalent. In fact, a multiplicative PCM $(a_{ij})_{n\times n}$ can easily be transformed into a corresponding additive PCM by componentwise applying the logarithmic function, thus obtaining $(\ln(a_{ij}))_{n\times n}$. Details on this isomorphism can be found in (Barzilai 1998; Cavallo and D'Apuzzo 2009). In the multiplicative approach, a pairwise comparison matrix (PCM) is a positive real-valued matrix $A=(a_{ij})_{n\times n}$ with $a_{ii}=1\ \forall i$ and $a_{ij}a_{ji}=1\ \forall i,j$. Multiplicative reciprocity $a_{ij}a_{ji}=1$ derives from $\frac{w_i}{w_j}\cdot\frac{w_j}{w_i}=1$, and this property is always required. A PCM is said to be consistent if and only if

$$a_{ij}a_{jk}=a_{ik}, \qquad i,j,k=1,\dots,n. \tag{1}$$
Consistency condition (1) corresponds to the ideal situation where the decision maker is perfectly coherent in her/his judgements, and it is in general not required, since it is well known that in making paired comparisons people do not have the intrinsic logical ability to always be consistent (Saaty 1994). Although a violation of (1) to some extent is necessarily accepted, consistency has always been regarded as a desirable property, since coherent judgements are clearly considered preferable to contradictory ones. Therefore, a correct inconsistency evaluation is regarded as a crucial task, and several indices have been proposed in order to quantify the deviation from the condition of full consistency (1).

The problem of inconsistency evaluation has been addressed by means of many different proposals. We briefly cite some of them. Saaty (1977) proposed, in his seminal paper, the first and still most popular inconsistency index. Koczkodaj (1993) and Duszak and Koczkodaj (1994) introduced an inconsistency index which became very popular too. Bozóki and Rapcsák (2008) summarized the relationship between some weighting methods and the corresponding inconsistency indices; the authors then compared Saaty's and Koczkodaj's inconsistency indices. Recently, Brunelli and Fedrizzi (2019) proposed a general formulation for inconsistency indices that includes many among the known indices. Cavallo (2020) studied the functional relations between some important consistency indices, mainly focusing on the case $n=3$. Nevertheless, the detailed description of all the relevant contributions on consistency evaluation is beyond the scope of this paper; for a comprehensive survey, see (Brunelli 2018). Most of these studies assume the multiplicative representation of the preferences, i.e., they deal with multiplicative PCMs. Some papers take into account both the multiplicative and the additive representations (Fichtner 1984; Chu 1998). Brunelli et al. (2013) compared ten known indices numerically, while Brunelli (2017), Brunelli and Fedrizzi (2015a) and Csató (2018, 2019) proposed a more theoretical approach by studying some axioms for inconsistency indices. In Cavallo and D'Apuzzo (2009) and in some following papers by the same authors, a general framework for inconsistency evaluation is proposed, based on the algebraic structure of a group. Ramík and Korviny (2010) proposed a distance-based evaluation of inconsistency in the framework of fuzzy sets. Other interesting distance-based approaches to the study of inconsistency have been proposed more recently by Brunelli and Cavallo (2020) and Mazurek and Ramík (2019).

This paper proposes a geometric-oriented unifying point of view for inconsistency evaluation, based on norm-induced metrics in matrix spaces. Our proposal can be considered as a generalization of the approach by Crawford and Williams (1985), where the logarithmic least squares method (LLSM) is applied. In fact, their method corresponds to Euclidean norm minimization after passing to the additive representation of preferences. This justifies, in our view, the numerous good properties of the LLSM and the related geometric mean solution.

The rest of the document is organized as follows. After some preliminaries in Sect. 2, we assume the additive representation of preferences, and in Sect. 3 we introduce an inconsistency index for a PCM defined as a norm-induced distance from the nearest consistent PCM. More precisely, in Sect. 3.1 we partition the subspace of PCMs (skew-symmetric matrices) into equivalence classes, so that each class will correspond to a single inconsistency value. In Sect. 3.2, we define our index and prove that it is a seminorm in the vector space of PCMs. We prove, respectively, in Sect. 3.3 and in Sect. 3.4, that our index satisfies a set of six axioms previously introduced and an upper bound property for group preference aggregation. Finally, in Sect. 4, we discuss some conclusions and future work.
2 Preliminaries
For a fixed $n>2$, let $\mathbb{R}^{n\times n}$ be the vector space of $n$-order real matrices. Let $\mathcal{A}\subset\mathbb{R}^{n\times n}$ be the set of multiplicative PCMs of order $n$,

$$\mathcal{A}=\{A=(a_{ij})_{n\times n} \mid a_{ij}>0,\ a_{ij}a_{ji}=1\ \forall i,j\}.$$

Similarly, the set of consistent multiplicative PCMs $\mathcal{A}^*\subset\mathcal{A}$ is defined as

$$\mathcal{A}^*=\{A=(a_{ij})_{n\times n} \mid A\in\mathcal{A},\ a_{ik}=a_{ij}a_{jk}\ \forall i,j,k\}.$$

An inconsistency index is a function which associates a real number to each PCM $A\in\mathcal{A}$,

$$I:\mathcal{A}\to\mathbb{R}.$$

The number $I(A)$ quantifies the inconsistency of $A$.

A PCM $A=(a_{ij})_{n\times n}$ can be viewed as a point in the vector space $\mathbb{R}^{n\times n}$. Closeness to consistency condition (1) can therefore be interpreted as closeness of $A$ to a consistent matrix $A_C$ in the same space. It is interesting to observe that choosing the 'closest' consistent matrix $A_C$ exactly corresponds to computing the weight vector $w$ from $A$. In fact, $A_C$ is consistent if and only if there exists a weight vector $w=(w_1,\dots,w_n)$ such that $A_C=(\frac{w_i}{w_j})$. Every method for deriving a weight vector $w=(w_1,\dots,w_n)$ from an inconsistent PCM $A$ identifies a consistent matrix $A_C=(\frac{w_i}{w_j})$ which is considered the 'closest' one to $A$ according to a certain metric in $\mathbb{R}^{n\times n}$. Then, deriving a weight vector and measuring inconsistency can be integrated into one process by minimizing an appropriate objective function such that its minimum point is the weight vector, while the optimal value of the objective function gives the inconsistency of the pairwise comparison matrix. These observations suggest characterizing an inconsistency index by means of the notion of distance. Nevertheless, it is convenient to first shift to the additive representation of preferences. This can be done by a simple componentwise logarithmic transformation, and allows working with the powerful tools of linear algebra. Note that the multiplicative and the additive representations of preferences are isomorphic, as described in detail by Barzilai (1998) and Cavallo and D'Apuzzo (2009).

By componentwise applying a logarithmic function to a PCM $A=(a_{ij})$,

$$\ln(a_{ij})=r_{ij}, \tag{2}$$

a skew-symmetric matrix $R=(r_{ij})$ is obtained, since multiplicative reciprocity $a_{ij}a_{ji}=1$ is transformed into additive reciprocity

$$r_{ij}+r_{ji}=0, \qquad i,j=1,\dots,n. \tag{3}$$

Being (3) a homogeneous linear system, the image set of $\mathcal{A}$ through the logarithmic function is the linear subspace of $\mathbb{R}^{n\times n}$ of the skew-symmetric matrices, say $L=\{\ln(A);\ A\in\mathcal{A}\}=\ln(\mathcal{A})$. The dimension of $L$ is $n(n-1)/2$, i.e., the number of the upper-diagonal entries.

A consistent matrix $A=(a_{ij})\in\mathcal{A}^*$ is transformed into a skew-symmetric matrix $R=\ln(A)=(\ln(a_{ij}))=(r_{ij})$ satisfying

$$r_{ij}+r_{jk}=r_{ik}, \qquad i,j,k=1,\dots,n. \tag{4}$$

Property (4) clearly follows from (1) and proves that the image set of $\mathcal{A}^*$ through the logarithmic function is a linear subspace of $L$, and therefore a linear subspace of $\mathbb{R}^{n\times n}$, say $L^*=\{\ln(A);\ A\in\mathcal{A}^*\}=\ln(\mathcal{A}^*)$ (Koczkodaj 1997). The dimension of $L^*$ is $n-1$, since it is known that $n-1$ entries are necessary and sufficient to completely identify a consistent $n$-order matrix, provided that they are adequately chosen (Chu 1998; Fedrizzi et al. 2019).

Since $\mathcal{A}$ is isomorphic to $L$, and linear spaces are simple structures where it is possible to work with tools of linear algebra, from now on we will study preference inconsistency in the subspace $L$. Therefore, we will use the usual notation $A$ also to denote a skew-symmetric matrix in $L$. An inconsistency index will be a function

$$I:L\to\mathbb{R}, \tag{5}$$

and we will call 'additive PCM' a matrix in $L$.
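To make the isomorphism concrete, here is a minimal numpy sketch (an implementation choice of mine, not code from the paper): take the entrywise log of a multiplicative PCM, as in (2), and check additive reciprocity (3) and consistency (4).

```python
import numpy as np

A = np.array([[1.0, 2.0, 8.0],
              [1/2, 1.0, 4.0],
              [1/8, 1/4, 1.0]])   # a consistent multiplicative PCM

R = np.log(A)                     # the transformation (2)

assert np.allclose(R + R.T, 0)    # additive reciprocity (3)
for i in range(3):                # consistency (4): r_ij + r_jk = r_ik
    for j in range(3):
        for k in range(3):
            assert np.isclose(R[i, j] + R[j, k], R[i, k])
```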
3 Distances and norms
As remarked above, an inconsistency index measures to what extent a preference matrix is far from consistency. Since distance is the most suitable mathematical tool to evaluate how much two objects differ, it is quite natural to define an inconsistency index $I(A)$ of a matrix $A\in L$ as the distance of $A$ from the linear subspace $L^*$ of consistent matrices,

$$I_d(A)=d(A,L^*). \tag{6}$$

Nevertheless, the notion of distance is too general to conveniently characterize inconsistency, and definition (6) can lead to unsatisfactory inconsistency indices (Fichtner 1986).

In the following, we will prove that the problem can be overcome by restricting metrics to those defined by norms. The resulting inconsistency indices satisfy a relevant set of good properties. Before proceeding, let us recall the basic notions of distance, norm and seminorm.

Definition 1 (Distance) A distance, or metric, on a given set $Z$ is a function $d: Z\times Z\to\mathbb{R}$ that satisfies the following three conditions:

1. $d(x,y)\ge 0\ \forall x,y\in Z$, and $d(x,y)=0 \Leftrightarrow x=y$.
2. $d(x,y)=d(y,x)\ \forall x,y\in Z$ (symmetry).
3. $d(x,z)\le d(x,y)+d(y,z)\ \forall x,y,z\in Z$ (triangle inequality).

Definition 2 (Norm) A norm on a vector space $V$ is a function $\|\cdot\|:V\to\mathbb{R}$ that satisfies the following conditions:

1. $\|x\|\ge 0\ \forall x\in V$ (nonnegativity).
2. $\|\lambda x\|=|\lambda|\,\|x\|\ \forall x\in V,\ \forall\lambda\in\mathbb{R}$ (homogeneity).
3. $\|x+y\|\le\|x\|+\|y\|\ \forall x,y\in V$ (triangle inequality).
4. $\|x\|=0 \Leftrightarrow x=0$ (strict positivity).

A seminorm on $V$ is a function $s:V\to\mathbb{R}$ that satisfies the first three conditions above. In other words, a seminorm has the same properties as a norm, except that it may take the value zero on some nonzero vectors.
3.1 A partition of $L$ into equivalence classes
The vector space structure of $L$ and $L^*$ naturally induces a partition of $L$ into equivalence classes. Since $L^*$ is a subspace of $L$, it is possible to consider the quotient space $L/L^*$. More formally, the quotient space $L/L^*$ is defined as follows, using the corresponding equivalence relation.

Definition 3 (Quotient space) Consider the equivalence relation in $L$

$$A\sim B \Leftrightarrow B-A\in L^*, \qquad A,B\in L. \tag{7}$$

The quotient set $L/L^*$ is defined as

$$L/L^* = L/\!\sim \tag{8}$$

and it is a vector space with the operations naturally induced by the operations in $L$.

The elements of $L/L^*$ are the equivalence classes $[A]$, obtained by adding to $A\in L$ an arbitrary matrix in $L^*$:

$$[A]=\{A+C,\ C\in L^*\}. \tag{9}$$

Every equivalence class $[A]$ is an affine subspace of $L$ and can also be represented as $[A]=A+L^*$. Figure 1 illustrates an example of $[A]$ in the case $n=3$.

[Fig. 1: Example of an equivalence class $[A]$ for $n=3$]

We assume that every inconsistency index (5) assigns the same value, say 0 for simplicity, to every consistent matrix; see axiom 1 in (Brunelli and Fedrizzi 2015a) and Sect. 3.3.1. From what precedes, it is natural to assume that an inconsistency index assigns the same value to every matrix in a fixed equivalence class,

$$A,B\in[A] \Rightarrow I(B)=I(A). \tag{10}$$

Now, in order to assign an inconsistency value $I(\cdot)$ to each equivalence class coherently with assumption (6), the following must hold: for all $[A]\in L/L^*$, every matrix $A\in[A]$ must have the same distance from $L^*$. Corollary 1 will prove that this result can be achieved by assuming, as in the following, that distance (6) is induced by a norm.
3.2 Inconsistency index as a seminorm
Let $\|A\|$ be a norm in $\mathbb{R}^{n\times n}$ and let

$$d(A,B)=\|A-B\| \tag{11}$$

be the corresponding distance between matrices $A$ and $B$. It is now possible to give our main definition.

Definition 4 Given a skew-symmetric matrix $A\in L$ and a norm $\|\cdot\|$ in $\mathbb{R}^{n\times n}$, the inconsistency index of $A$ based on distance (11) is

$$I_d(A)=d(A,L^*)=\min_{B\in L^*} d(A,B)=\min_{B\in L^*}\|A-B\|=\|A-A^*\|, \tag{12}$$

where $A^*\in L^*$ is a solution of the minimization problem in (12).

Note that, by choosing the 1-norm $\|\cdot\|_1$, the optimization problem (12) is equivalent to the optimization problem (24) in (Brunelli and Cavallo 2020), and the same holds for the inconsistency index proposed there, except for a numerical constant. Most of the results in this paper are based on the following theorem.
Theorem 1 An inconsistency index $I_d(A)$ defined as a norm-based distance (12) is a seminorm in $L$.

Proof It must be proved that

1. $I_d(kA)=|k|\,I_d(A)\quad \forall A\in L,\ \forall k\in\mathbb{R}$;
2. $I_d(A+A')\le I_d(A)+I_d(A')\quad \forall A,A'\in L$.

Let $A^*\in L^*$ be a $d$-nearest consistent matrix to $A\in L$, as in (12). Then, it is possible to prove that $kA^*$ is a $d$-nearest consistent matrix to $kA$. Equality 1 is obviously true for $k=0$. Then, let us assume $k\ne 0$. It is

$$\|kA-kA^*\|=|k|\,\|A-A^*\|\le |k|\,\|A-B\| \quad \forall B\in L^*.$$

Since $|k|\,\|A-B\|=\|kA-kB\|$, then it is

$$\|kA-kA^*\|\le\|kA-kB\| \quad \forall B\in L^*. \tag{13}$$

It follows, from (13), that

$$\|kA-kA^*\|=\min_{kB\in L^*}\|kA-kB\|=I_d(kA). \tag{14}$$

The last equality follows from the fact that, for a fixed $k\ne 0$, every matrix of $L^*$ can be expressed as $kB$ with $B\in L^*$. Then,

$$I_d(kA)=d(kA,L^*)=|k|\,d(A,L^*)=|k|\,I_d(A) \quad \forall k\in\mathbb{R}, \tag{15}$$

so that the first property is proved. The second property can be proved as follows:

$$I_d(A+A')=\min_{B\in L^*}\|(A+A')-B\| = \min_{C,C'\in L^*}\|(A+A')-(C+C')\| = \min_{C,C'\in L^*}\|(A-C)+(A'-C')\| \le \min_{C\in L^*}\|A-C\|+\min_{C'\in L^*}\|A'-C'\| = I_d(A)+I_d(A'), \tag{16}$$

where $B=C+C'$ and the arbitrariness of $C$ and $C'$ in $L^*$ follows from that of $B$. $\square$
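As an aside, for the Euclidean (Frobenius) norm the minimization in (12) corresponds to the LLSM/geometric-mean solution recalled in the Introduction: the nearest consistent matrix has entries $w_i - w_j$, with $w$ the vector of row means of $A$. A minimal sketch (numpy is my implementation choice; the example matrix is illustrative, not from the paper):

```python
import numpy as np

def euclidean_inconsistency(A: np.ndarray) -> float:
    """I_d(A) from (12) with the Frobenius norm, for skew-symmetric A.

    The nearest consistent matrix has entries w_i - w_j, where w is the
    vector of row means of A (the least-squares / LLSM solution).
    """
    w = A.mean(axis=1)
    B = w[:, None] - w[None, :]        # nearest consistent matrix A*
    return float(np.linalg.norm(A - B))

# Example: a slightly inconsistent additive (skew-symmetric) PCM
A = np.array([[ 0.0,  0.7, 2.1],
              [-0.7,  0.0, 1.0],
              [-2.1, -1.0, 0.0]])
print(euclidean_inconsistency(A))      # 0 if and only if A is consistent
```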
Clearly, an inconsistency index $I_d(A)$ is not a norm in $L$, since $I_d(A)=0\ \forall A\in L^*$.

Theorem 2 If $A\in L$:

1. Every inconsistency index $I_d(A)$ given by (12) is invariant with respect to the addition of a consistent skew-symmetric matrix,

$$I_d(A)=I_d(A+B) \quad \forall B\in L^*. \tag{17}$$

2. If $A^*\in L^*$ is a $d$-nearest consistent matrix to $A$ and $B\in L^*$, then $A^*+B$ is a $d$-nearest consistent matrix to $A+B$.

Proof For all $A\in L$ and $B\in L^*$, it is $I_d(B)=0$ and therefore

$$I_d(A+B)\le I_d(A)+I_d(B)=I_d(A), \tag{18}$$

but also

$$I_d(A)=I_d((A+B)-B)\le I_d(A+B)+I_d(-B)=I_d(A+B), \tag{19}$$

since $B\in L^* \Rightarrow -B\in L^*$. Equality (17) follows from (18) and (19).

To prove the second statement, let us fix $B\in L^*$ and evaluate $\|(A+B)-(A^*+B)\|$:

$$\|(A+B)-(A^*+B)\|=\|A-A^*\| \tag{20}$$
$$\le \|A-C\| \quad \forall C\in L^*. \tag{21}$$

Since $\|A-C\|=\|(A+B)-(C+B)\|$, then it is

$$\|(A+B)-(A^*+B)\|\le\|(A+B)-(C+B)\| \quad \forall C\in L^*. \qquad \square$$

[Fig. 2: Representation of Theorem 2 in the case $n=3$]

Figure 2 illustrates Theorem 2 in the case $n=3$.

Crawford (1987) introduced the so-called Geometric Consistency Index, GCI, and proved a theorem (Theorem 2 in the cited paper) which states, for his index, the same result as in Theorem 2 of this paper. He used the multiplicative representation of preferences, so that he proved invariance with respect to the Hadamard product $A\cdot B=(a_{ij}b_{ij})_{n\times n}$ instead of invariance with respect to the sum as in (17). Theorem 2 can therefore be viewed as an extension of Crawford's theorem to the general case of norm-based consistency indices in the additive representation of preferences. The semantics of Theorem 2 is as follows: the inconsistency degree $I_d(A)$ of a PCM $A$ remains unchanged by adding consistent preferences.

From Theorem 2, the following corollary can be stated.

Corollary 1 If matrices $A,B\in L$ are in the same equivalence class, then they have the same inconsistency index (12),

$$A\sim B \Rightarrow I_d(A)=I_d(B), \tag{22}$$

where $\sim$ is defined in (7).

The proof of (22) directly follows from (17) and (9).

Since $L^*$ is the subspace of $L$ where the seminorm $I_d(\cdot)$ is null, this seminorm induces a norm in $L/L^*$. The result is described in the following corollary.
Corollary 2 The function $I_d: L/L^*\to\mathbb{R}$, defined as

$$I_d([A])=I_d(A), \quad A\in[A], \tag{23}$$

is a norm on $L/L^*$.

Note that, for the sake of simplicity, we use the same notation $I_d(\cdot)$ for the inconsistency index defined in $L$, where $I_d$ is a seminorm, and the one defined in $L/L^*$, where $I_d$ is a norm.

If the norm in (11) arises from an inner product, $\|A\|=\sqrt{\langle A,A\rangle}$, then it is possible to define the orthogonal complement $(L^*)^\perp$ of $L^*$, where $(L^*)^\perp=\{A\in L \mid A\perp L^*\}$. Therefore, the quotient space $L/L^*$ is isomorphic to $(L^*)^\perp$, and $L$ can be obtained as the direct sum of $L^*$ and $(L^*)^\perp$,

$$L=L^*\oplus(L^*)^\perp. \tag{24}$$

Property (24) extends the result of Barzilai (1998), where he proposed the orthogonal decomposition of a matrix $A\in L$ into its 'consistent' and 'totally inconsistent' components, $A=C+E$, with the orthogonality $C\perp E$ referring to the standard dot product on $\mathbb{R}^{n\times n}$ corresponding to the Euclidean norm. Note that the decomposition proposed by Barzilai has been further investigated and generalized by Cavallo (2019).
3.3 Axioms satisfaction
Brunelli and Fedrizzi (2015a) proposed five characterizing properties, or axioms, for an inconsistency index. Then, Brunelli (2017) completed the set of axioms with a sixth one. The axioms can be formulated both in the additive and in the multiplicative representation of preferences; we clearly refer to the former. In this section, we prove that an inconsistency index $I_d(A)$, given by (12), satisfies all six axioms, provided that (11) is a permutation-invariant norm on $\mathbb{R}^{n\times n}$, that is, a norm invariant with respect to permutations of the coordinates.
3.3.1 Axiom (A1): Existence of a unique element representing consistency
The first characterizing property (A1) requires that an inconsistency index must be associated with a unique real number which represents perfectly consistent preferences. More formally, it is demanded that

$$\exists!\,\nu\in\mathbb{R} \ \text{ such that } \ I(A)=\nu \Leftrightarrow A\in L^*.$$

An inconsistency index (12) satisfies axiom (A1) with $\nu=0$, since $d(A,L^*)=0 \Leftrightarrow A\in L^*$.
3.3.2 Axiom (A2): Invariance under permutation of alternatives
Axiom (A2) requires that an inconsistency index must be independent of the order of the alternatives, i.e., of the order in which the alternatives are associated with the rows and columns of $A$,

$$I(PAP^T)=I(A) \quad \forall A\in L, \tag{25}$$

for any permutation matrix $P$. If the norm in (12) is permutation invariant, then axiom (A2) is satisfied.
3.3.3 Axiom (A3): Monotonicity under reciprocity-preserving mapping
We first briefly summarize axiom (A3); for a more comprehensive description, please refer to Brunelli and Fedrizzi (2015a). The idea underlying axiom (A3) is the following: if inconsistent preferences are intensified, then a better value of an inconsistency index cannot be obtained. By 'preference intensification' we mean moving farther from complete indifference, $a_{ij}=0\ \forall i,j$, which is clearly fully consistent. Moving farther from this uniformity means having stronger judgements, and this should not make their possible inconsistency less evident. It can be proved that the only transformation $a_{ij}\to f(a_{ij})$ which can intensify preferences while preserving the necessary additive reciprocity structure (3) is

$$f(a_{ij})=k\,a_{ij}, \quad k>1, \tag{26}$$

or, equivalently,

$$A\to kA, \quad k>1. \tag{27}$$

The formalization of axiom (A3) is as follows.

Axiom (A3) An inconsistency index $I(\cdot)$ satisfies axiom (A3) if and only if

$$I(kA)\ge I(A) \quad \forall k>1,\ \forall A\in L. \tag{28}$$

Proposition 1 An inconsistency index defined by (12) satisfies axiom (A3).

Proof Theorem 1 implies that $I_d(kA)=|k|\,I_d(A)\ \forall k\in\mathbb{R}$. Then $I_d(kA)>I_d(A)\ \forall k>1$, and axiom (A3) is satisfied. $\square$
3.3.4 Axiom (A4): Monotonicity on single entries
We briefly summarize axiom (A4); the reader can refer to Brunelli and Fedrizzi (2015a) for a more comprehensive description. The idea underlying axiom (A4) is that an inconsistency index is non-decreasing with respect to an elementary modification of a consistent matrix. More formally, given a consistent skew-symmetric matrix $A\in L^*$ and $b\in\mathbb{R}$, $b\ne 0$, let $A_{pq}(b)\in L$ be the inconsistent skew-symmetric matrix obtained from $A$ by replacing the single entry $a_{pq}$ with $a_{pq}+b$, for $p,q\in\{1,\dots,n\}$, $p\ne q$. Clearly, $a_{qp}$ must be replaced with $a_{qp}-b$ in order to preserve skew-symmetry. Axiom (A4) requires that the larger the change of $a_{pq}$ from its consistent value, the more inconsistent the obtained matrix is. The formalization of axiom (A4) is as follows.

Axiom (A4) An inconsistency index $I(\cdot)$ satisfies axiom (A4) if and only if $I(A_{pq}(b))$ is a non-decreasing function of $b$ for $b>0$ and a non-increasing function of $b$ for $b<0$, $\forall A\in L^*$, $\forall p,q\in\{1,\dots,n\}$, $p\ne q$.

Proposition 2 An inconsistency index defined by (12) satisfies axiom (A4).

Proof The proof follows from the homogeneity of seminorms. Given a consistent skew-symmetric matrix $A\in L^*$ and $b\in\mathbb{R}$, $b\ne 0$, let $A_{pq}(b)\in L$ be the inconsistent skew-symmetric matrix obtained as described above. The matrix $A_{pq}(b)$ can be written as

$$A_{pq}(b)=A+bM_{pq},$$

where $M_{pq}$ is the $n\times n$ matrix with all null entries except $M_{pq}(p,q)=1$ and $M_{pq}(q,p)=-1$. Theorem 2 implies that all the inconsistency of the matrix $A_{pq}(b)$ is due to the matrix $bM_{pq}$, when inconsistency is evaluated by means of $I_d(\cdot)$:

$$I_d(A_{pq}(b))=I_d(A+bM_{pq})=I_d(bM_{pq}).$$

Theorem 1 implies that $I_d(bM_{pq})=|b|\,I_d(M_{pq})\ \forall b\in\mathbb{R}$. Since $I_d(M_{pq})>0$, axiom (A4) is satisfied. $\square$
3.3.5 Axiom (A5): Continuity
Axiom (A5) requires continuity of an inconsistency index. Continuity of (12) directly follows from the continuity of norms.
3.3.6 Axiom (A6): Invariance under inversion of preferences
Axiom (A6) (Brunelli 2017) requires that an inconsistency index does not change when the preferences expressed in the form of a pairwise comparison matrix $A$ are inverted by taking its transpose $A^T$. The idea underlying axiom (A6) is that by inverting all the preferences we change their polarity, but leave their structure unchanged. The formalization of axiom (A6) is as follows.

Axiom (A6) An inconsistency index $I(\cdot)$ satisfies axiom (A6) if and only if $I(A)=I(A^T)\ \forall A\in L$.

Proposition 3 An inconsistency index defined by (12) satisfies axiom (A6).

Proof Since $A$ is skew-symmetric, it is $A^T=-A$. Then, an inconsistency index $I(\cdot)$ satisfies axiom (A6) if and only if $I(A)=I(A^T)=I(-A)\ \forall A\in L$. By defining $B':=-B$, it is

$$I_d(-A)=\min_{B\in L^*} d(-A,B)=\min_{B\in L^*}\|-A-B\| \tag{29}$$
$$=\min_{B'\in L^*}\|-A+B'\|=\min_{B'\in L^*}|-1|\,\|A-B'\| \tag{30}$$
$$=\min_{B'\in L^*}\|A-B'\|=I_d(A). \tag{31}$$

Similarly to (14), the proof uses the fact that every matrix $B$ of $L^*$ can be expressed as $B=-B'$ with $B'\in L^*$. Then, axiom (A6) is satisfied by (12). $\square$
3.4 Boundary property for group decision making
In this section, we prove that inconsistency index Id(A)given by ( 12) satisfies the
upper bound property described by Brunelli and Fedrizzi ( 2015b ).
If the preferences of k=1,..., mdecision makers are expressed by means of m
PCM sAk=(ak
ij), it is a relevant problem to study the group PCM AG=(aG
ij)
obtained by aggregating the mindividual PCM s. Dijkstra ( 2012 ) proved that, if the m
PCM s are expressed in the multiplicative representation of preferences, the weighted
geometric mean is the unique aggregation method that guarantees some importantproperties of the group PCM to hold. Let us consider
a
G
ij=m/productdisplay
k=1(ak
ij)λk, (32)
where (λ1,...,λ m)is the weight vector of the decision makers such that λk≥0,/summationtextm
k=1λk=1 and Ak=(ak
ij)∈A. In particular, ( 32) is the unique consistency-
preserving non-trivial aggregation method. Brunelli and Fedrizzi ( 2015b ) studied how
the inconsistency of AGdepends on the inconsistency of Ak,k=1,...., m.T h e y
proved, in particular, that some known inconsistency indices satisfy the followingupper-bound property.
Definition 5 (Brunelli and Fedrizzi 2015b) A function $I : \mathcal{A} \to \mathbb{R}$ is upper bounded w.r.t. the geometric mean if:
$$I(A_G) \leq \max\{I(A_1), \dots, I(A_m)\}. \qquad (33)$$

The following property, also introduced by Brunelli and Fedrizzi (2015b) and studied for some known inconsistency indices, is more restrictive than (33).

Definition 6 (Brunelli and Fedrizzi 2015b) A function $I : \mathcal{A} \to \mathbb{R}$ is strongly upper bounded w.r.t. the geometric mean if:
$$I(A_G) \leq \sum_{k=1}^{m} \lambda_k I(A_k), \qquad (34)$$
where $(\lambda_1, \dots, \lambda_m)$ is the same weight vector used in (32) to obtain $A_G$.
Let us study the upper-bound properties (33) and (34) from the additive approach described above, that is, in the vector space $L$ of skew-symmetric preference matrices.
By componentwise applying the logarithmic function (2) as described in Section 2, the aggregation (32) in $\mathcal{A}$ clearly corresponds to a linear combination in $L$. By using, for simplicity, the same notation as in (32), the group skew-symmetric preference matrix is given by
$$a^G_{ij} = \sum_{k=1}^{m} \lambda_k \, a^k_{ij}, \quad i, j = 1, \dots, n, \qquad (35)$$
$$A_G = \sum_{k=1}^{m} \lambda_k A_k, \qquad (36)$$
where $A_k = (a^k_{ij}) \in L$. The following result holds.
Theorem 3 An inconsistency index $I_d(A)$ defined as a norm-based distance (12) is strongly upper bounded, i.e., it satisfies the upper boundary property (34).

Proof Theorem 1 implies that
$$I_d(A_G) = I_d\left(\sum_{k=1}^{m} \lambda_k A_k\right) \leq \sum_{k=1}^{m} I_d(\lambda_k A_k) = \sum_{k=1}^{m} |\lambda_k| \, I_d(A_k) = \sum_{k=1}^{m} \lambda_k I_d(A_k), \qquad (37)$$
where the inequality follows from subadditivity, the next equality from absolute homogeneity, and the last one from $\lambda_k \geq 0$. ⊓⊔
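Theorem 3 can be checked numerically with the same hypothetical Frobenius-norm implementation of $I_d$ used in the earlier sketches: generate random individual matrices in $L$, aggregate them as in (36), and compare the two sides of (34).

```python
import numpy as np

def I_d(A):
    # Inconsistency index (12) under the Frobenius norm (row-mean projection).
    w = A.mean(axis=1)
    return np.linalg.norm(A - (w[:, None] - w[None, :]))

rng = np.random.default_rng(2)
m, n = 3, 5
A_k = [X - X.T for X in rng.normal(size=(m, n, n))]   # individual PCMs in L
lam = [0.5, 0.3, 0.2]                                 # weights, summing to 1

A_G = sum(l * A for l, A in zip(lam, A_k))            # aggregation (36)

# Strong upper bound (34): I_d(A_G) <= sum_k lambda_k * I_d(A_k)
bound = sum(l * I_d(A) for l, A in zip(lam, A_k))
assert I_d(A_G) <= bound + 1e-12
print(I_d(A_G), "<=", bound)
```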
4 Conclusions and future work
In this paper, we showed that by defining an inconsistency index $I_d(A)$ by means of a distance induced by a norm, as in (12), it is possible to prove many relevant properties of this index. Note that the homogeneity property of a (semi)norm is a crucial assumption in proving several of the theorems and propositions. Nevertheless, a norm is, in our view, a sufficiently general notion to satisfy different requirements of a decision maker. Let us clarify our point with the example of the $p$-norm of a vector $x = (x_1, \dots, x_n)$,
$$\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}, \quad p \geq 1. \qquad (38)$$
A $p$-norm is clearly a permutation-invariant norm, so that all the results in Sect. 3 hold if $I_d(A)$ is defined by means of a $p$-norm (38). Note that the well-known Geometric Consistency Index GCI introduced by Crawford (1987) corresponds, in the additive representation of preferences, to the usual Euclidean norm $\|\cdot\|_2$. More generally, a decision maker who uses the $p$-norm (38) is provided with a flexible tool for emphasizing inconsistent preferences: the larger $p$, the greater the emphasis placed on avoiding large differences between the elicited preferences and the closest consistent ones.
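For a general $p$ no closed-form projection is assumed here, so the sketch below (ours, with hypothetical names) approximates the index induced by the entrywise $p$-norm by directly minimizing $\|A - C(w)\|_p$ over the weight vector $w$ defining the consistent matrix $c_{ij} = w_i - w_j$, using a general-purpose optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def I_d_p(A, p):
    # Inconsistency index (12) with the entrywise p-norm (38): numerically
    # minimize ||A - C(w)||_p over consistent matrices c_ij = w_i - w_j.
    def obj(w):
        C = w[:, None] - w[None, :]
        return np.sum(np.abs(A - C) ** p) ** (1.0 / p)
    res = minimize(obj, x0=A.mean(axis=1), method="Nelder-Mead")
    return res.fun

rng = np.random.default_rng(3)
X = rng.normal(size=(4, 4))
A = X - X.T                      # a random skew-symmetric preference matrix

for p in (1, 2, 10):
    print(p, I_d_p(A, p))        # larger p stresses the largest deviations
```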
In future research, we will study the relationship between the 'natural' inconsistency ranking defined by Csató (2019) and the ranking on the set of triads induced by the norm-based inconsistency indices defined in this paper. More precisely, we will study how the choice of a norm affects this relationship. A second research topic will be the possible relationship between our inconsistency index and the index introduced by Koczkodaj (1993).
Funding Open access funding provided by Università degli Studi di Trento within the CRUI-CARE Agreement.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Barzilai, J.: Consistency measures for pairwise comparison matrices. J. Multi-Criteria Decis. Anal. 7(3), 123–132 (1998)
Bozóki, S., Rapcsák, T.: On Saaty's and Koczkodaj's inconsistencies of pairwise comparison matrices. J. Glob. Optim. 42(2), 157–175 (2008)
Brunelli, M.: Studying a set of properties of inconsistency indices for pairwise comparisons. Ann. Oper. Res. 248(1–2), 143–161 (2017)
Brunelli, M.: A survey of inconsistency indices for pairwise comparisons. Int. J. Gen. Syst. 47(8), 751–771 (2018)
Brunelli, M., Cavallo, B.: Distance-based measures of incoherence for pairwise comparisons. Knowl.-Based Syst. 187, 104808 (2020)
Brunelli, M., Fedrizzi, M.: Axiomatic properties of inconsistency indices for pairwise comparisons. J. Oper. Res. Soc. 61(1), 1–15 (2015a)
Brunelli, M., Fedrizzi, M.: Boundary properties of the inconsistency of pairwise comparisons in group decisions. Eur. J. Oper. Res. 240(3), 765–773 (2015b)
Brunelli, M., Fedrizzi, M.: A general formulation for some inconsistency indices of pairwise comparisons. Ann. Oper. Res. 274(1–2), 155–169 (2019)
Brunelli, M., Canal, L., Fedrizzi, M.: Inconsistency indices for pairwise comparison matrices: a numerical study. Ann. Oper. Res. 211(1), 493–509 (2013)
Cavallo, B.: G-distance and G-decomposition for improving G-consistency of a Pairwise Comparison Matrix. Fuzzy Optim. Decis. Mak. 18(1), 57–83 (2019)
Cavallo, B.: Functional relations and Spearman correlation between consistency indices. J. Oper. Res. Soc. 71(2), 301–311 (2020)
Cavallo, B., D'Apuzzo, L.: A general unified framework for pairwise comparison matrices in multicriterial methods. Int. J. Intell. Syst. 24(4), 377–398 (2009)
Chu, M.T.: On the optimal consistent approximation to pairwise comparison matrices. Linear Algebra Appl. 272, 155–168 (1998)
Crawford, G.: The geometric mean procedure for estimating the scale of a judgement matrix. Math. Model. 9(3–5), 327–334 (1987)
Crawford, G., Williams, C.: A note on the analysis of subjective judgement matrices. J. Math. Psychol. 29, 25–40 (1985)
Csató, L.: Characterization of an inconsistency ranking for pairwise comparison matrices. Ann. Oper. Res. 261(1–2), 155–165 (2018)
Csató, L.: Axiomatizations of inconsistency indices for triads. Ann. Oper. Res. 280(1–2), 99–110 (2019)
Dijkstra, T.K.: On the extraction of weights from pairwise comparison matrices. Cent. Eur. J. Oper. Res. 21(1), 103–123 (2013)
Duszak, Z., Koczkodaj, W.W.: Generalization of a new definition of consistency for pairwise comparisons. Inf. Process. Lett. 52(5), 273–276 (1994)
Fedrizzi, M., Brunelli, M., Caprila, A.: The linear algebra of pairwise comparisons. Int. J. Approx. Reason. 118, 190–207 (2019)
Fichtner, J.: Some thoughts about the mathematics of the analytic hierarchy process. Report 8403, Institut für Angewandte Systemforschung und Operations Research, Hochschule der Bundeswehr München (1984)
Fichtner, J.: On deriving priority vectors from matrices of pairwise comparisons. Socio-Econ. Plan. Sci. 20, 341–345 (1986)
Koczkodaj, W.W.: A new definition of consistency of pairwise comparisons. Math. Comput. Model. 18(7), 79–84 (1993)
Koczkodaj, W.W., Orlowski, M.: An orthogonal basis for computing a consistent approximation to a pairwise comparison matrix. Comput. Math. Appl. 34(10), 41–47 (1997)
Mazurek, J., Ramík, J.: Some new properties of inconsistent pairwise comparisons matrices. Int. J. Approx. Reason. 113, 119–132 (2019)
Ramík, J., Korviny, P.: Inconsistency of pair-wise comparison matrix with fuzzy elements based on geometric mean. Fuzzy Sets Syst. 161(11), 1604–1613 (2010)
Saaty, T.L.: A scaling method for priorities in hierarchical structures. J. Math. Psychol. 15, 234–281 (1977)
Saaty, T.L.: Highlights and critical points in the theory and application of the Analytic Hierarchy Process. Eur. J. Oper. Res. 74, 426–447 (1994)
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.