Source: trentmkelly/LessWrong-43k (LessWrong)

Looking for a post about vibing and banter
I'm looking for a post that describes the difference between staying in our head, which is something nerds tend to do, and bantering and "vibing", feeling your whole body but thus having less control over your actions.
If someone knows it I would really appreciate a link!
I recently started reading it from my RSS feed, I'm fairly sure it was on LessWrong, but now I can't find it with Google or in my history.

---

Not owning our beliefs
Julian Baggini argues that we might be more willing to judge our beliefs objectively if we avoid thinking of them as "our own". I hadn't thought before about explicitly distancing myself from my beliefs in this sense.

---

Looking for an alignment tutor
Hey, this is me. I’d like to understand AI X-risk better. Is anyone interested in being my “alignment tutor”, for maybe 1 h per week, or 1 h every two weeks? I’m happy to pay.
Fields I want to understand better:
* Anything related to prosaic AI alignment/existential ML safety
* Failure stories/threat models
Fields I’m not interested in (right now):
* agent foundations
* decision theory
* other very mathsy stuff that’s not related to ML
My level of understanding:
* I have a decent knowledge of ML/deep learning (I’m in the last year of my PhD)
* I haven’t done the AGI Safety Fundamentals course, but I just skimmed it, and I think I had independently read essentially all the core readings (which means I probably have also read many things not on the curriculum). I’d say I have a relatively deep understanding of a majority (but not all) of the content in this curriculum.
* Similarly for the AGI Safety Fundamentals 201, excluding the tracks
Example questions I wrestled with recently, and I might have brought up during the tutoring:
* It seems to me that our current level of outer alignment tools (RLHF + easy augmentation) is enough to solve the outer alignment problem sufficiently well that humans don’t end up dead or disempowered (conditional on slow takeoff); and then we can solve further outer alignment problems as they come up, with iteration and regulation. So I basically think that the core of the alignment problem, at the moment, is inner alignment + deceptive alignment. What am I missing? (I read Christiano’s “Another Outer Alignment Failure Story”, but I still have this question.)
* I understand that a reward maximiser would wire-head (take control over the reward provision mechanism), but I don’t see why training an RL agent would necessarily end up in a reward-maximising agent? Turntrout’s Reward is Not the Optimisation Target shed some clarity on this, but I definitely have remaining questions.
* Is the failure mode describe i

---

The Strangest Thing An AI Could Tell You
Human beings are all crazy. And if you tap on our brains just a little, we get so crazy that even other humans notice. Anosognosics are one of my favorite examples of this; people with right-hemisphere damage whose left arms become paralyzed, and who deny that their left arms are paralyzed, coming up with excuses whenever they're asked why they can't move their arms.
A truly wonderful form of brain damage - it disables your ability to notice or accept the brain damage. If you're told outright that your arm is paralyzed, you'll deny it. All the marvelous excuse-generating rationalization faculties of the brain will be mobilized to mask the damage from your own sight. As Yvain summarized:
> After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".
I find it disturbing that the brain has such a simple macro for absolute denial that it can be invoked as a side effect of paralysis. That a single whack on the brain can both disable a left-side motor function, and disable our ability to recognize or accept the disability. Other forms of brain damage also seem to both cause insanity and disallow recognition of that insanity - for example, when people insist that their friends have been replaced by exact duplicates after damage to face-recognizing areas.
And it really makes you wonder...
...what if we all have some form of brain damage in common, so that none of us notice some simple and obvious fact? As blatant, perhaps, as our left arms being paralyz

---

"Fully" acausal trade
Acausal trade happens when two agents manage to reach a deal with each other, despite not being able to interact causally (and, in some cases, not being sure the other one exists). Consider, for example, the prisoner's dilemma played against another copy of yourself, either in the next room or the next universe.
But those two situations are subtly different. If my copy is in the next room, then we will interact after we've reached our decision; if they're in the next universe, then we won't.
It might seem like a small difference, but my simple way of breaking acausal trade succeeds in the "next universe" situation, but fails in the "next room" situation.
So it would be good to distinguish the two cases. Since the terminology is well established, I'll call the "next universe" situation - where there are no interactions between the futures of the agents - "fully" acausal trade.

---

Source: StampyAI/alignment-research-dataset/arbital (Arbital)

A quick econ FAQ for AI/ML folks concerned about technological unemployment
This is a FAQ aimed at a very rapid introduction to key standard economic concepts to professionals in AI and machine learning who have become concerned with the potential economic impacts of their work in the field.
It takes a strong intuitive understanding of "comparative advantage", "aggregate demand", and several other core concepts, to talk sensibly about a declining labor force participation rate and what ought to be done about it. There are things economists disagree about, and things that economists *don't* disagree about, and the latter set of concepts are indispensable for talking sensibly about technological unemployment. I won't belabor this point any further.
An extremely compressed summary of some of the concepts introduced here:
- **Comparative advantage** says that even if Alice is less efficient (less productive) at all of her tasks than is Bob, Alice and Bob still both benefit from being able to trade with one another.
- If Alice is *relatively* twice as good at growing apples as she is at growing oranges, and Bob is relatively twice as good at growing oranges as he is at growing apples, Alice and Bob can both benefit by trading her apples for his oranges, even if Bob can grow 10 times as many apples per unit of labor as Alice.
- If Wisconsin is producing cheese and trading it to Ohio, and then Michigan becomes much better at producing cheese, this can harm the economy of Wisconsin. It should *not* be possible for *Wisconsin* to be harmed by trading with Michigan unless something weird is going on.
- The **lump of labour fallacy** is the standard term for the (appealing but incorrect) reasoning which says that if you automate away 95% of the jobs, only 5% of the people remain employed.
- Humanity did in fact automate away 95% of all labor during the industrial revolution - 98% of the population used to be farmers, and now it's 3% - and yet more than 5% of the people still have jobs.
- The standard explanation for why we aren't all already unemployed revolves around **complementary inputs:** if each sausage-in-a-bun requires 1 sausage and 1 bun, and it previously took 20 units of labor to make 10 sausages and 10 units of labor to make 10 sausage buns, and then sausage-making productivity doubles, the classically predicted new equilibrium is 15 sausages in 15 buns.
- The **broken window fallacy** is the standard term for the (appealing but incorrect) reasoning which says that if you heave a rock through somebody's window, they have to pay the glazier to repair the window, and the glazier buys bread from the baker, and so production thrives and everybody is better off.
- Under normal conditions the economy should be bounded by the available supply of glass and similar goods, and destroying goods in one place diverts them from somewhere else. Modern economies *can* get into wedged states where glaziers are sitting idle; and the economy is then bounded by a different quantity, the amount of flowing money available to animate trades, or **aggregate demand.** Heaving a rock through a window doesn't increase this quantity either, but there's one weird trick that can, and it works in both theory and practice.
- When a good has a relatively low-sloped **supply curve** (function for supply versus price), subsidizing that good will tend to increase its price much more than it increases the quantity supplied.
- If you make it easier for people to take out loans to go to universities, but it's not very easy to build lots of new universities, the result is that people end up with a lot of student loan debt.
- Only 350 orthodontists are allowed to graduate every year in the USA. These orthodontists can only make braces for a limited number of children. Nothing you do with insurance premiums and subsidies can cause more children to have braces than that, unless you are somehow causing the supply of orthodontists to not be flat with respect to price.
- "Inequality" in a bad sense usually results from **rent extraction.** Under some conditions, extractible rents just go up a corresponding amount when people's incomes rise.
- This is one potential thing that might go wrong with the basic income. It might possibly just cause the Ferguson Police Department to issue more tickets, unless you fix the Ferguson Police Department first.
- Reading articles written by actual poor people in the US gives the strong impression that one's top priority for helping them ought to be outlawing towing fees.
- Intellectual property protection is technically rent and can easily go out of control.
- Economies of scale in dealing with regulators -- being able to afford lawyers -- can effectively protect big companies from competition and enable them to extract monopoly rents.
- **Labor market flexibility** (aka the ability of people to move around and match up with available jobs) is a critical factor in employment. This can be hindered by e.g. restrictions on building more housing in cities with high labor demand.
- We could equally view this as due to major employers continuing to situate new jobs in the cities with already-skyrocketing housing prices. (One observes that, often, these housing costs are relatively far less burdensome on the high-level decision makers who decide where to put the company.)
- **Effective marginal tax rates** are how much benefit accrues to somebody who earns 1 more dollar at their current wage level. Not just due to income taxes and payroll taxes, but also due to phasing-out of benefits, it's possible for somebody who earns +$1 to be effectively +$0.20 better off, or even for this number to go negative.
- Current estimates of effective marginal tax rates in low income brackets are usually in the range of 70% and on many 'cliffs' higher than 100%.
- Alternatives to minimum wage include **wage subsidies.**
- We can consider the minimum wage as a subsidy on wages, paid for by a marginal tax on employers who hire people at less than the minimum wage. If society thinks this subsidy is a good idea, it should almost certainly not be paid by that particular tax.
- Alternatives to labor taxes include **land value tax** and **consumption taxes.**
- If you tax labor, you'll discourage labor. Taxing the implicit rents on land doesn't cause there to be less land, so it's a less distortive tax.
- If you expect AI to create a class of people who are having a hard time selling their labor, among the things you should do to not make those lives any worse is to tax (and regulate) their transactions as little as possible.
- **Sticky prices** of labor, especially **downward sticky** wages, cause small amounts of deflation to do far more damage to economies than small amounts of inflation.
- This is one reason why modern economics suggests trying to stabilize an economy at 2% inflation rather than 0% inflation; small amounts of inflation help prices adjust to new realities over time.
- There's an up-and-coming view, not yet fully adopted, which reframes monetary policy as being about the total amount of flowing money, and says that people lose their jobs when there isn't enough flowing money to pay their wages. This view suggests that the best help monetary policy can give to employment is **targeting a price level** of **per-capita NGDP.**
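The effective-marginal-tax-rate arithmetic from the summary above can be made concrete with a toy benefit schedule. All numbers here are illustrative assumptions of mine, not any actual tax code or benefit program:

```python
# Toy schedule: a 20% tax on earnings, plus a benefit that phases out at
# 50 cents per extra dollar earned. Illustrative numbers only.
def net_income(gross):
    tax = 0.20 * gross
    benefit = max(0.0, 8000 - 0.50 * gross)   # fully phased out at $16,000
    return gross - tax + benefit

def effective_marginal_tax_rate(gross, delta=1.0):
    """Share of one extra earned dollar that the worker does NOT keep."""
    return 1 - (net_income(gross + delta) - net_income(gross)) / delta

# In the phase-out range, tax (20%) plus phase-out (50%) stack to a 70%
# effective rate: the worker keeps only 30 cents of each extra dollar.
assert abs(effective_marginal_tax_rate(10_000) - 0.70) < 1e-9

# A benefit "cliff" can push the rate past 100%: earning one extra dollar
# makes the worker worse off overall.
def net_income_cliff(gross):
    benefit = 3000 if gross <= 15_000 else 0   # vanishes all at once
    return gross - 0.20 * gross + benefit

assert net_income_cliff(15_001) < net_income_cliff(15_000)
```

The same function shows the rate falling back to the plain 20% tax once the benefit is fully phased out.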
Many readers will also have ideas about:
- Regulatory burdens on employers decreasing employment.
- Regulatory capture by large corporations influencing regulators to make it harder for new entrants to compete with them.
- Regulatory compliance costs generally destroying wealth.
There is not universal agreement on how large these problems are, nor whether they should be top priorities. But standard economics does not regard these ideas as *naive,* so I won't discuss them any further in this FAQ. My purpose here is to introduce relatively more technical concepts, that are more likely to be new to the reader, and that counterargue views widely regarded by professional economists as being misguided on technical grounds.
This document is currently incomplete. **Warning:** as of March 2017 this is a work in progress and has not yet been reviewed for accuracy by relevant specialists.
# Comparative advantage, and the mercantilist fallacy
**Ricardo's Law of Comparative Advantage** says that parties (individuals, groups, countries) can both benefit from trading with each other, even when one party has an **absolute advantage** over the other in every factor of production.
Suppose:
- Alice can produce 20 apples per unit of labor, and 10 oranges per unit of labor.
- Bob can produce 50 apples per unit of labor, and 100 oranges per unit of labor.
- Alice and Bob both have 10 units of labor to use.
If Alice wants an equal number of apples and oranges, she's better off producing 200 apples with all of her labor, and then trading 100 of the apples to Bob for 100 oranges.
Bob in turn is better off producing 100 oranges with 1 unit of labor, and trading them to Alice for 100 apples, than Bob would be if he used 2 units of labor to produce 100 apples directly.
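The Alice-and-Bob numbers above can be checked in a few lines. This is a minimal sketch; the variable names are mine, and "autarky" compares trade against Alice splitting her own labor to get equal apples and oranges:

```python
ALICE = {"apples": 20, "oranges": 10}   # output per unit of labor
BOB = {"apples": 50, "oranges": 100}
LABOR = 10                              # units of labor each

# Autarky: Alice wants equal numbers of apples and oranges on her own.
# Spending x units of labor on apples: 20*x = 10*(10 - x)  =>  x = 10/3.
x = ALICE["oranges"] * LABOR / (ALICE["apples"] + ALICE["oranges"])
autarky_each = ALICE["apples"] * x          # ~66.7 apples and ~66.7 oranges

# Trade: Alice grows 200 apples, then swaps 100 of them for 100 oranges.
trade_each = ALICE["apples"] * LABOR - 100  # 100 apples kept, 100 oranges in

assert trade_each > autarky_each            # Alice is better off trading

# Bob: producing 100 oranges costs him 1 unit of labor; growing 100
# apples himself would cost 2 units. The trade saves him a unit of labor.
bob_cost_oranges = 100 / BOB["oranges"]     # 1.0 units of labor
bob_cost_apples = 100 / BOB["apples"]       # 2.0 units of labor
assert bob_cost_oranges < bob_cost_apples   # Bob is better off trading too
```

Note that both parties gain even though Bob has an absolute advantage in both crops, which is the whole point of Ricardo's Law.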
Economists usually regard this idea as refuting some widespread intuitions about **mercantilism,** which says that countries are better off the more they sell to other countries, and worse off the more they buy from other countries.
Another possibly-not-intuitive consequence of comparative advantage is that, except for *sticky prices* (see below), the exchange rates on a country's currency have little long-run effect on its trade.
Suppose:
- Germany buys 100,000 dollars from the US for 100,000 euros;
- Germany then uses these 100,000 dollars to buy 100 air conditioners from the US;
- People in the US use their 100,000 euros to buy 10 cars from Germany.
One day the European Central Bank decides to print enough new euros to halve the value of the currency. Then:
- People in Germany buy 100,000 dollars from people in the US, using 200,000 euros.
- People in Germany buy 100 air conditioners from the US for $1000 each.
- People in the US buy 10 cars from Germany, each of which now costs 20,000 euros.
Ricardo's Law thus suggests that we can view the US as shipping 100 air conditioners to Germany in exchange for 10 cars, with the currency exchange rates being something of an epiphenomenon. (In practice this is modified by "sticky prices" in that some people already own euros or dollars, or bonds denominated in euros or dollars, and there are existing sales contracts that fix prices for a term. But effects like that tend to wash out over the longer run.)
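Under the FAQ's simplifying assumption that prices fully adjust (no stickiness), the neutrality of the exchange rate can be checked directly. The constants and function below are my own sketch of the scenario:

```python
AC_PRICE_USD = 1_000      # $100,000 buys 100 air conditioners
CAR_PRICE_EUR = 10_000    # 100,000 euros buys 10 cars (pre-devaluation)

def real_trade(eur_per_usd):
    """Goods actually exchanged, given an exchange rate of euros per dollar.

    Germany buys $100,000 (paying 100_000 * eur_per_usd euros) and spends
    it on air conditioners; the US spends those euros on German cars,
    whose euro price scales up with the devaluation.
    """
    usd = 100_000
    air_conditioners = usd / AC_PRICE_USD
    euros = usd * eur_per_usd
    cars = euros / (CAR_PRICE_EUR * eur_per_usd)
    return air_conditioners, cars

# Halving the euro's value changes every nominal price, but not the real
# trade: 100 air conditioners still move one way, 10 cars the other.
assert real_trade(1.0) == real_trade(2.0) == (100.0, 10.0)
```

The devaluation cancels out of the `cars` calculation algebraically, which is the "epiphenomenon" point in the text.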
The concept of comparative advantage is important for discussing technological unemployment because it conveys the general idea that when we draw a line around a group or region, like "{Alice, Bob, and Carol}" or "Wisconsin", this collective entity is not harmed by being able to trade with someone else who is more productive than them. (To say nothing of getting their *own* advanced technology.)
This isn't to say that technological advances can never hurt Wisconsin's economy. Maybe Wisconsin is producing cheese and trading it to Ohio in exchange for Ohio's coal. One day, Michigan becomes much more efficient than Wisconsin at producing cheese, causing Ohio to start trading with Michigan instead. This can hurt Wisconsin's economy, quite a lot in fact.
The conclusion from Ricardo's Law is rather that Wisconsin can't be harmed by the fact of Wisconsin *itself* being allowed to trade with Michigan. What hurts Wisconsin in the above scenario is that *Ohio* is allowed to trade with Michigan. Wisconsin can only make itself even worse off by choosing to cut off its own trade with Michigan.
If Wisconsin closes its borders and tries to get along in its own little world that doesn't include those awful new cheese-making machines, Ricardo's Law says that Wisconsin as a whole *should* become worse-off thereby. If this stops being true, then some unusual phenomenon must be at work, one that violates the broad axioms under which Ricardo's Law is a theorem.
An intuition I'll be trying to pump throughout this FAQ is that the state of affairs over most of human history, in which technological automation made most people better off without causing them to be permanently unemployed, is a very normal state of affairs from the standpoint of economic theory. If we're currently looking at lasting unemployment caused by automation, this is a *surprising* state of affairs.
And: Even if something has changed and something weird is going on, the reason for it won't be that Ricardo's Law suddenly stopped being a theorem. Whatever the truth turns out to be, Ricardo's Law will still be a relatively better lens than mercantilism through which to understand it.
# Complementary inputs, and the lump of labour fallacy
Broadly speaking, the "**lump of labour fallacy**" is reasoning as if there's a limited amount of work to be done in the world, and a limited number of jobs to go around, and every time one of those jobs is automated away, one more person ends up unemployed.
Suppose that:
- People like to eat sausages in buns.
- Each sausage-in-a-bun requires 1 sausage and 1 bun as an input, and a trivial amount of labor to assemble.
- At present, it requires 1 unit of labor to make a bun, and 2 units of labor to make a sausage.
- The resulting equilibrium involves 10 sausages-in-a-bun being manufactured by 30 people expending 30 units of labor.
So then the critical question becomes: if we invent a new kind of sausage-making machine that doubles productivity and makes it possible to produce 2 sausages using 2 units of labor, do we...
- End up with 10 sausages-in-a-bun produced by 20 people, and 10 unemployed former sausage-makers?
- End up with 15 sausages-in-a-bun produced by 30 people?
Ending up in the second equilibrium, maybe not right away, is widely regarded as the core idea behind how we could start with a world where 98% of the population were farmers, improve agricultural productivity by a factor of 100 between then and now, and end up with a world in which 3% of the people are farmers and the other 95% of the population are *not* unemployed.
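The sausage-and-bun arithmetic is a one-liner, under the FAQ's assumptions of fixed 1:1 proportions and full employment of labor (the function name is mine):

```python
def sausages_in_buns(total_labor, sausage_cost, bun_cost):
    """Output when each sausage-in-a-bun needs exactly 1 sausage + 1 bun
    (perfect complements) and all available labor is employed."""
    return total_labor / (sausage_cost + bun_cost)

# Before: 2 units of labor per sausage, 1 per bun, 30 workers.
assert sausages_in_buns(30, sausage_cost=2, bun_cost=1) == 10

# After sausage productivity doubles (1 unit of labor per sausage), the
# same 30 workers support 15 sausages in 15 buns -- not 10 sausages plus
# 10 unemployed sausage-makers.
assert sausages_in_buns(30, sausage_cost=1, bun_cost=1) == 15
```

The labor freed up by the productivity gain is absorbed by the complementary input (buns), which is the classical answer to the lump-of-labour reasoning.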
One way of looking at this is that, over the course of human history so far, *everything* has behaved like it is complementary. When agricultural productivity skyrocketed, a bunch of people went into making shoes and clothes and tables and houses. The end result was that more people had food, more people had more nice clothes and furniture, and *not* that most of the population ended up unemployed. The "food" input to a human became cheaper, and labor went into making more of the "clothes" and "furniture" inputs to humans. Then again later, when manufacturing productivity skyrocketed, more people went into services.
Now it could be that this phenomenon is now breaking down for any number of possible reasons. But among those reasons, it seems relatively implausible that we have reached *the end of demand* and that all the people now have enough of every kind of good and service that they want. There are way too many poor and unhappy people still in the world for that to be true!
Okay, so we haven't reached the end of human wants. But maybe some people in the modern world can no longer produce *anything* that other people want...?
I'd like to encourage you to regard this idea with some surprise. Especially the notion that this "unnecessariat" exists today, before we can reliably build a robot that puts away the dishes from a dishwasher without breaking them. AI is making progress, but we haven't exactly reached par-human perception and motor skills, yet. Are there really people in the world who can do *nothing* that *anybody* else with money wants?
Heck, why can't those people trade with each other? Why can't people in this supposed unnecessariat at least help each other, and get their own economy going, even if nobody else wants to trade with them? And doesn't Ricardo's Law say that this state of affairs ought to provide a floor on the group's minimum economic activity, since in theory the group can only benefit further by trading with the outside world...?
Why *isn't* it an economic theorem that there's never any unemployment?
I would encourage you to stare at this argument and try to feel surprised that unemployment exists. It's not that theorems get to overrule reality; if a valid theorem fails to describe reality, that just says one of the axioms must be empirically false. But we do know that *more* must be going on in the existence of unemployment, than there being a limited lump of jobs and not enough jobs for all the people.
- If you are a neoliberal, you are likely to think about minimum wages, payroll taxes, occupational licensing, and regulatory burdens. If the state imposes a minimum cost and minimum danger on hiring someone, people may not be able to produce anything that is worth the price and risk to their employer.
- If you are Tyler Cowen, you will worry that some people in the labor pool are actively destructive; that lawsuits have made it harder for employers to use word-of-mouth to distinguish destructive people; and that people who have been unemployed for a while end up in a global pool of people who on *average* produce "zero marginal product".
- If you are of a more leftist persuasion, you may worry that the billionaires are running out of real wants, and that the people who *do* want things don't have enough money to buy them.
- If you are a Georgist, you are liable to observe that "unemployed" people can't build houses for each other and can't farm food for each other, because the state has decided that these people own zero land; so they aren't allowed to get started anywhere.
And so on. There are more than enough possible hypotheses to explain why unemployment can possibly exist. Still, it's worth noting that there was *less* lasting unemployment in, say, 1850. What changed between then and now... could be all the regulations saying you can't just go build somebody a house. Or it could be increased inequality. Or it could be housing prices preventing people from moving to California where there are jobs.
But it's relatively much stranger to postulate that the *only* problem is that we're running out of jobs in the global lump of labor as AI advances eat away at the lump.
It could indeed be that existing AI, and to a much larger degree, ordinary computer programming, has gone a long way towards devaluing many jobs that can be done without a college degree.
Labor force participation is in fact dropping under current conditions, whatever those are. It's possible and maybe probable that, all else being equal, AI conquering more perceptual and motor tasks will make more people permanently unemployed.
It's reasonable for people in AI to feel concerned about this and want to make things better.
But it's unlikely that the issue is the end of human want, or the running-out of the lump of labor. The problem may not be that some people have *intrinsically* nothing left to contribute to any remaining human want; but rather, that the modern world has put obstacles in the way of people being allowed to contribute.
Regardless: However you look at the situation, and whatever solution you propose, please don't talk about AI or trade or automation "destroying jobs." This will cause any economists listening to scream in silent horror that they cannot give voice.
# The broken window fallacy, and aggregate demand
## Surprisingly, recessions exist
In theory, when we double the productivity of sausage-makers, we should get 15 sausages-in-buns instead of 10. We should not get 10 unemployed sausage-makers; or, if we temporarily get 10 unemployed sausage-makers, that state of affairs shouldn't last longer than it takes sausage-makers to become willing to look for work at the booming bun factories with HELP WANTED signs plastered all over their windows.
And yet there seem to be these occasions, like the "Great Depression" or the "Great Recession", when there are silent factories and people standing idle, or the modern equivalent. The unemployed sausage-makers are willing to look for new jobs, but there aren't tons of HELP WANTED signs to let them get started on retraining. The bun factories are actually making fewer buns, because now the unemployed sausage-makers aren't buying sausages-in-a-bun anymore.
One will observe, however, that these historical occasions seem to come on suddenly, and *not* due to sudden brilliant inventions causing a huge jump in productivity. They happen at around the same time that there's trouble in the financial system--banks blowing up.
Furthermore, the number of idle factories and amount of idle labor seems to be far in excess of what could reasonably be accounted for by sausage-makers needing to change jobs. And even people with what might seem like very useful skills can't find anywhere to put them to use. None of the previous reasons we've considered, for why unemployment could possibly exist, seem to apply to how *that* much unemployment could exist in the Great Depression. And then a decade later all those people had jobs again, too; so it wasn't something intrinsic to the people or the jobs that caused them to be unemployed for so long.
That is weird and astonishing! If your brain doesn't currently think that is weird and astonishing, please try at least briefly to get yourself into the state of mind where it is.
## The broken window fallacy
As my entry point into explaining this astonishing paradox, I'm going to take an unusual angle and start with the **broken window fallacy.**
The broken window fallacy was originally pointed out in 1850 by Frederic Bastiat, in a now-famous essay, "[That Which Is Seen, And That Which Is Not Seen](http://bastiat.org/en/twisatwins.html)". Bastiat begins with the parable of a child who has accidentally thrown a rock through somebody's window:
> Have you ever witnessed the anger of the good shopkeeper, when his careless son happened to break a square of glass? If you have been present at such a scene, you will most assuredly bear witness to the fact, that every one of the spectators, were there even thirty of them, by common consent apparently, offered the unfortunate owner this invariable consolation — "It is an ill wind that blows nobody good. Everybody must live, and what would become of the glaziers if panes of glass were never broken?"
And doesn't the glazier, in turn, spend money at the baker, and the shoemaker, and the baker and shoemaker spend money at the mill and tannery? Doesn't all of society end up benefiting from this broken window?
> Now, this form of condolence contains an entire theory, which it will be well to show up in this simple case, seeing that it is precisely the same as that which, unhappily, regulates the greater part of our economical institutions.
>
> Suppose it cost six francs to repair the damage, and you say that the accident brings six francs to the glazier's trade — that it encourages that trade to the amount of six francs — I grant it; I have not a word to say against it; you reason justly. The glazier comes, performs his task, receives his six francs, rubs his hands, and, in his heart, blesses the careless child. All this is that which is seen.
>
> But if, on the other hand, you come to the conclusion, as is too often the case, that it is a good thing to break windows, that it causes money to circulate, and that the encouragement of industry in general will be the result of it, you will oblige me to call out, "Stop there! your theory is confined to that which is seen; it takes no account of that which is not seen."
>
> It is not seen that as our shopkeeper has spent six francs upon one thing, he cannot spend them upon another. It is not seen that if he had not had a window to replace, he would, perhaps, have replaced his old shoes, or added another book to his library. In short, he would have employed his six francs in some way, which this accident has prevented.
Bastiat's point was widely accepted as persuasive; or at least, it was accepted by economists.
Let us nonetheless notice that Bastiat's original essay is worded in a bit of an odd way. Why focus on the fact that when the shopkeeper has spent six francs in one place, he therefore can't spend six francs somewhere else? If the limiting factor is just money, couldn't we make all of society better off by adding more money?
You would ordinarily expect a healthy economy to be limited by its resources, by the available sand and heat for making glass. If the glazier replaces one window, he isn't replacing another. Or if some other seller provides the glazier with more sand and coal, to make more glass than he otherwise would've, then that's coal which isn't going to the smithy.
Of course what Bastiat really meant is that sending resources to repair a broken window calls away those resources from somewhere else. This is *symbolized* by the six francs being spent in one place rather than another.
That's how a market economy works, after all: when goods flow in one direction, money flows in the other direction. Six francs flow from the shopkeeper to the glazier; a glass window travels in the opposite direction.
So Bastiat talks about a change in a flow of francs, as a shorthand for talking about what really matters, a corresponding change in the opposite flow of goods.
Now ask: What if the economy is, somehow, Greatly Depressed? What if the glazier was standing idle until the window was broken, and isn't being called away from any other job? What if there's a ton of coal in an empty lot, waiting for someone to purchase it, and the reason nobody is purchasing it is that nobody seems to have any money?
Well, in this case, it still doesn't help to go around breaking windows. And now Bastiat's original wording happens to *precisely* describe the resulting problem: the shopkeeper will spend six francs on the broken window, and therefore not spend six francs on something else.
The only way we can imagine that breaking a window will help this economy... is if the shopkeeper has a buried chest of silver; and when somebody throws a rock through his window, the shopkeeper spends some of this buried silver; and then the shopkeeper *doesn't* try later to top off this chest of buried silver again. Only in that hypothetical is there actually an additional six francs added to the economy, flowing to an otherwise idle glazier, who trades with a baker that wouldn't bake otherwise, who buys from a farmer who can afford to buy more fertilizer and grow more crops than before.
Would you agree with that statement -- with the paragraph as written above? Standard economics says it is correct.
But this suggests an astonishing implication. It seems to say that we could just *print more money* and make the shopkeeper's town better off. Printing new money is pretty much the same as digging up money from a buried chest and not replacing it later... right?
## More money, fewer problems?
"Now hold on," says the alert computer scientist, sitting bolt upright. "I know that society conditions us to think of little rectangular pieces of colored paper as super-valuable. But dollar bills or euro bills or whatever are just *symbols* for actual goods and services that are actually valuable. Thinking you can create more goods and services by creating more money is like counting your fingers, getting an answer of 10, erasing the 10 and writing down 11, and thinking you'll end up with 11 fingers. Money isn't wealth, it's a claim on wealth, and if you counterfeit $100 in your bank account, you're just stealing wealth that someone else won't get. Any theory that says you can create more wealth by creating more money, no matter how complicated the argument, will in the end turn out to be missing some row of the spreadsheet that makes the totals add up to zero."
And in a *healthy* economy, this would be correct. But remember, those idle factories are not supposed to exist in the first place. Is it so impossible that a weird condition that shouldn't exist, could be fixed by a weird tactic that shouldn't work?
"So what you're saying," says the skeptical computer scientist, "is that the two impossibilities cancel out. Like the story about the physics student who derives $E = -mc^2,$ and who gets told to try next time to make an even number of mistakes."
Well... yes, as it turns out. There's even an elegant explanation, which we'll get to, for *why* those two particular impossibilities are symmetrical and cancel each other out.
First, let's consider a more ordinary exception to the rule that creating money can't help--the reason that money exists in the first place.
Suppose that Alice, Bob, and Carol are as usual stuck on a deserted island. Alice is growing apples, Bob grows bananas, and Carol grows cucumbers. One day Alice wants a banana, Bob wants a cucumber, and Carol wants an apple; and they all go hungry because Alice doesn't have any cucumbers to trade to Bob for a banana. Also they don't have computers on their deserted island, so they can't do anything clever like write software to detect cycles in people's desired goods.
In this case the standard solution for this lack of computing power is... (drumroll) ...money!
Creating symbolic money to add to this island, where there was no money before, can indeed make the three people on the island materially better off. Inventing money causes more trades to occur than previously; trading can make people materially better off.
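To make the double-coincidence-of-wants problem concrete, here is a toy sketch in Python. The fruit assignments come from the example above; the function names and the pairwise-barter rule are my own illustration, not anything from the economics literature.

```python
# Alice, Bob, and Carol each produce one fruit and want another.
# Barter requires a "double coincidence of wants"; this cycle has none.
produces = {"Alice": "apple", "Bob": "banana", "Carol": "cucumber"}
wants    = {"Alice": "banana", "Bob": "cucumber", "Carol": "apple"}

def barter_trades(produces, wants):
    """Pairs (a, b) who can trade directly: a wants what b makes
    and b wants what a makes."""
    people = list(produces)
    return [(a, b) for i, a in enumerate(people) for b in people[i + 1:]
            if wants[a] == produces[b] and wants[b] == produces[a]]

def money_trades(produces, wants):
    """With money, anyone can buy from whoever produces the thing they
    want, so every want that somebody produces gets satisfied."""
    return [(buyer, seller) for buyer in wants for seller in produces
            if buyer != seller and wants[buyer] == produces[seller]]

assert barter_trades(produces, wants) == []      # nobody can trade directly
assert len(money_trades(produces, wants)) == 3   # money animates all three trades
```

The point of the sketch: no pair on the island can trade at all under barter, yet adding a purely symbolic token unlocks all three mutually beneficial trades at once.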
When an economy has idle factories and idle workers, it's in a state where more trades could occur, but aren't occurring.
So... adding more money to that economy, to animate more trades, isn't far from what happens when we add money to an island that doesn't have money?
Now I have, indeed, pulled some wool over your eyes here. The macroeconomic theory is waaaaay more complicated than this, in much the same way modern neural nets are a tad more complicated than just the idea of gradient descent.
To give an idea of some of the complications we are blatantly skipping over: classical economics says that destroying money should not in fact slow down an economy composed of rational actors. In a rational world, if somebody sets fire to half of the money supply, everyone just halves their prices and life goes on.
The reason our world is *not* like that ideal world where the prices just go down, is partially that there are outstanding loans denominated in the current currency, and existing contracts denominated in the current currency. But *most* of the problem--according to the particular economic school I happen to subscribe to--is that people who set prices are *reluctant to lower them,* in a way that an economy full of perfectly rational actors wouldn't be.
And this is the kind of statement that causes fistfights between economists.
But there *is* widespread agreement that, for whatever reason, we don't live in the classical world of perfectly rational actors.
And that is--on the standard theory--why fractional reserve banks blowing up and destroying money and making fewer loans, cause there to be a bunch of unemployed people and fewer jobs; and nobody being able to afford to buy things because none of their own customers can afford to buy from them. In a modern economy, for every flow of goods, there's money flowing the other way. So when you destroy money or slow down its flow, this can reduce the actual number of trades.
Which means that banks blowing up and thereby reducing the flow of money, can reduce the amount of flowing goods instead of prices dropping. We are pretty sure from observation that this does in fact happen.
After we're done clipping gradients and initializing orthogonal weights and normalizing batches, in the end deep neural nets are *still* sliding down a loss gradient. Similarly, it really looks in theory and in historical observation like an economy with idle resources, especially *unemployed workers,* can be made materially wealthier by creating more money, which causes more trades to occur and brings the economy closer to operating at full capacity.
Historical observation also *seems* to bear this part out, although the near-impossibility of running real controlled experiments makes it hard to be sure. The most recent case you might have heard about was in 2013 in the United States, where the "sequester" was going to produce a sharp cut in government spending just as the US was starting to recover a little bit, not completely, from the Great Recession. Some people predicted the economy would tank again; and other people said it would be fine so long as the Federal Reserve created a correspondingly large amount of money, aka **monetary offset.** And the sequester happened, and the Federal Reserve did in fact print a bunch of money, and pretty much nothing happened to the economy. Amazing proof, right?
Okay, so that's not a large number of bits of evidence by *your* standards, you spoiled brats who get to test neural nets on 127GB of training data. But it was in fact an experimental test of two different predictions, and the economists who thought in terms of flowing money were right; and in the end, that's all we have in this life.
## A general glut?
John Maynard Keynes -- I know some of you are shuddering at the name, but he was a respected economist of his day and he came up with a clever illustration, so please bear with me -- John Maynard Keynes once came up with an illustration of what could possibly cause anything as weird as a recession, and how this could possibly be fixed by anything as weird as printing money.
Suppose that on an island, Alice and Bob and Carol are growing apples and oranges and pomegranates. Furthermore, as in the Great Depression, it is not yet the case that everyone has enough to eat.
In this situation, could there be an excess supply or "glut" of apples?
Well, if people are still willing to eat apples, then what would "a glut of apples" even mean?
We answer: Suppose Alice and Bob, who both grow apples, have such a large apple harvest that people on the island feel kind of saturated on apples, and now prefer to eat other, scarcer fruits *if available*. In this case we'd see the relative trading value of apples dropping, compared to other fruits. Maybe 1 apple used to be worth 1 orange, and now somebody would only trade 1 orange for 2 apples. This would signal Alice and Bob to shift more of their effort to growing oranges and pomegranates in the future.
We could perhaps call this a "glut" of apples, meaning a *relative* excess of supply of apples, versus the supply/demand balance of other fruits, compared to previously.
Observe that to say that apples are now cheaper in terms of oranges, is equally to say that oranges have become more expensive in terms of apples. Then is it possible to have a glut of *every kind of fruit at once?*
How (asked Keynes) could that possibly be? So long as Alice and Bob and Carol aren't stuffed full, so long as they would still *prefer* to eat more fruit, how could that possibly be? You can't have *each* fruit being ultra-cheap as denominated in all the other fruits.
Then what are we to think if we see Alice and Bob and Carol collectively cutting back on growing apples and oranges *and* pomegranates?
How is it possible for a whole economy to be suffering from a problem of "excess supply," a situation where factories in general are going idle, without there being a corresponding surge of demand somewhere else?
The answer (said Keynes) is that the only way this "general glut" can happen, is if there is some additional, invisible good that people are pursuing more than apples and oranges *and* pomegranates. What we see is the value of all three fruits falling relative to the value of this invisible good, so that people produce less of all three of them. And the way that this can happen in real life (said Keynes) is if Alice and Bob and Carol have invented a fourth, invisible good called money.
This may sound weird and abstract, so consider this even simpler real-life example drawn from the overused case of the [Capitol Hill Babysitting Co-op](https://en.wikipedia.org/wiki/Capitol_Hill_Babysitting_Co-op#Cooperative_system_and_history), a baby-sitting club which created a currency that parents could trade among themselves to pay for babysitting. The parents in this system tried to keep a reserve of co-op scrip so they could be sure of getting babysitting on demand. In fact, it turned out, people wanted to keep more scrip than the co-op made available in total. Soon there was less and less actual babysitting going on as a result, and the co-op nearly died. Like a glut of apples that is symmetrically a boom in oranges, there was a glut of babysitting and a booming demand for babysitting tokens. Only the babysitting tokens by themselves, sitting around motionless, weren't really *helping* anything.
If you add in more goods, like oranges and apples and pomegranates, the overall situation is still the same: when the economy has spare productive capacity and yet everything seems to be slowing down at once, we can see the required symmetric boom as taking place in the demand for that final good of currency... which is quite harmful to the real economy, because currency doesn't *do* anything.
It's the weird nature of currency as a worthless symbolic good, that allows recessions and Depressions to happen in the first place. Which is why the spreadsheet can in fact balance, and the two symmetrical impossibilities can cancel out, when we try to fix a recession by creating more money!
## Aggregate demand and monetary stimulus
When economists talk about this sort of thing, they often use the phrase "**aggregate demand**".
Actual human wants are, if not infinite, then certainly far higher than the total amount of stuff produced by the world economy. We don't all have mansions, at least not yet.
So what "aggregate demand" really means is "aggregate purchasing power", which in turn means the total amount of flowing money trying to buy all the goods.
The standard economic view can then be stated thus: when the purely symbolic financial system blows up, and yet in the real world people lose their jobs and factories stop operating, what's happening is an "**aggregate demand deficit**." There isn't enough purchasing power to buy all the stuff the economy could supply at full capacity.
The obvious reason this happens is that exploding banks (or nowadays, banks that get bailed out, but are less enthusiastic about making more loans afterwards) reduce the total "**money supply.**" There's less total symbolic money to flow through the trades.
Another reason aggregate demand decreases, is that people want to *hold on* to more money during a depression or a recession. They have an increased preference for larger numbers in their bank accounts, and are more reluctant to spend money. This decreases **monetary velocity.**
When all this is starting to go wrong inside a country, conventional economics says that the central bank ought to immediately swing into action and create more money, aka "**monetary stimulus.**" Or rather, the central bank is supposed to create money and use it to buy government bonds or something, and then hold onto the bonds. Later on, when monetary velocity picks up, the central bank should sell the bonds and destroy the money it previously created, to avoid creating too much inflation.
If the central bank thinks it *can't* create enough money, then a widely advocated policy says that the country's government should instead sell treasury bonds, in exchange for money, and spend that money on... pretty much anything. This is not in fact a null action in monetary terms--we are not just removing taxes and spending them elsewhere, or so most economists think. The people who buy bonds in exchange for money have bonds afterwards instead of money; and this can also satisfy their desire to hold assets. Then the money goes out the other end of the government and into circulation, where it wasn't circulating before. This is "**fiscal stimulus.**"
Whether it is ever a good idea to do "fiscal stimulus" is another one of those questions that cause economists to get into fistfights. I personally happen to side with the school that goes around saying, "What do you mean, the central bank *can't create enough money* to stabilize the flow of currency and trade? Did you run out of ones and zeroes? Why are you asking the government to increase the national debt over this? Just do your darned job!"
I expect that this business of talking about how central banks are supposed to stabilize the national flow of money, is causing half of you to raise skeptical eyebrows and ask if maybe the whole thing would somehow stabilize itself if anyone could issue their own currency instead of it being a government monopoly.
And the other half of you are trying to figure out some way to solve it using a blockchain.
But right now, nearly all countries have central banks. So long as that is in fact the way things are, conventional economic theory says central banks should try to stabilize the flow of money, and hence wages, and hence employment, by constantly tweaking the amount of money in circulation.
It's not agreed among economists which countries today might be suffering from too little aggregate demand, and working under capacity. The economists in my preferred school suspect that it is presently happening inside the European Union due to the European Central Bank being run by lunatics. Most economists think the United States is *not* currently bounded by an aggregate demand deficit, or running far under capacity.
I myself feel a bit unsure about that. Sometimes it really seems like we could all be much happier if we all just simultaneously decided to be 10% richer. Or that, say, Arizona, would benefit a *lot* from having its own currency to use on the side, until everyone there was working for everyone else. But I haven't paid enough attention to the specifics here, and you probably shouldn't listen to me.
So: If you're worried about technological unemployment due to AI advances, one of the obvious things that we should do along with any other measures we take -- that we really *really* need to do, all this abstract stuff has an enormous in-practice impact on the real economy -- is make sure that people are rich enough to buy things.
It is possible for an economy to end up in a state where most people feel poor, and can't afford to buy things from each other, and go around being sad and out of work... such that you could in fact cause everyone to perk up and trade with each other and be happy again, just by declaring that everyone has more money. (Basic income would just move money around; the idea here is that you can make things better if *everyone simultaneously* has more money.) If everyone thinks they're too poor to afford to hire one another, you can sometimes fix the problem by just having everyone be rich instead of poor. If, after great advances in automation, you saw lots of people standing around doing nothing, that would be the *first* thing we ought to try.
I'm going to temporarily stop going on about this because there's other economic concepts I want to run through and I don't want to risk boring you about this one topic. We'll pick up this thread again in a later section about "NGDP level targeting."
For now, I just want to observe that, compared to problems like reforming labor markets, it can be a lot simpler to just print more ones and zeroes at the central bank. And central banks are not entirely unwilling to hear about it. It's arguably the best choice for what's worth spending political capital to fix first.
# Gently sloping supply curves, and skyrocketing prices
By now you might be starting to imagine a rosy post-automation scenario which seems *in theory* like it shouldn't be that hard to achieve.
You could look further ahead than I think we're actually going to get in practice -- not because I don't think AI can get that far, quite the opposite really, but still -- and imagine a world where robots are far more common, and human beings... are still trading with each other.
A world where, if Alice knows how to grow food, and Bob knows how to build houses, then they're not both going around cold and hungry because agriculture and house-building can be automated.
A world where we haven't wantonly *defined* ourselves into poverty; a world where robots exist, but we've decided to imagine enough pretend symbolic money into existence for us to afford them.
A world where, to be honest, some of the dignity of work *has* been lost. Where the most common new jobs aren't as awesome as forging trucks out of molten steel with your bare hands. But people still want things, and other people do those things and get paid for them.
A world where people have at least as much self-respect, where the economy is at least as vibrant, as it was in say 1960. Because logically, any given group could always draw a line around themselves and go back to 1960s technology. And if that group then *removes* this imaginary line, starts trading with others, and adopts more technology, then life should only get better for them.
Because of Ricardo's Law of Comparative Advantage.
Right?
So... there are a number of obvious difficulties that could completely blow up this rosy scenario.
One such class of difficulties is if there's any good X that people *really* need, and that stays extremely expensive in the face of automation--if it takes months and months of babysitting to afford one unit of X.
Like, say, *housing, health care,* or *college.*
You can find various graphs showing that all of the increased productivity over the last 20 years has been sucked up by increasing housing prices. Or that all of the missing wages, as productivity rises and wages stay flat, can be accounted for by the cost to employers of health insurance. I can't recall seeing a graph like that for college costs and student debt, but it sure ain't cheap. If all of those graphs are true simultaneously (and they are), you can see why people might feel increasingly impoverished, even if their nominal wages in dollars are theoretically flat.
And if those costs, or any other costs, went on rising while a hypothetical tidal wave of automation was crashing down, that could smash any utopian vision of people living peacefully on less labor in a less costly world.
For a brief but excellent overview of the *problem*, I would recommend Slate Star's "[Considerations on Cost Disease](http://slatestarcodex.com/2017/02/09/considerations-on-cost-disease/)". It turns out, for example, that the United States spends around \$4000 per citizen on Medicaid and Medicare and insurance subsidies. \$4000 is roughly what other developed countries pay for *universal* coverage. It's not that the United States is unwilling to pay for universal coverage, but that somehow healthcare costs far more in the United States. (Without any visibly improved medical outcomes, of course.)
There are various aspects of all this that economists will still get into fistfights over, so I can't just hand you a standard analysis and solution. But there's at least one important standard economic lens through which to view the problem.
## Supply and demand are always equal
If we look at all the loaves of bread sold in a city on Monday, then, on that Monday, the number of loaves of bread sold, and the number of loaves bought, are equal to one another. Every time a loaf of bread is sold, there's one person selling the loaf and one person buying a loaf.
Supply and demand are always equal.
Imagining supply not equaling demand is like imagining a collection of nonoverlapping line segments with an odd number of endpoints.
Or at least, that's how economists define the words "supply" and "demand".
So what use is such a merely tautologous equation?
Defining the terms that way lets economists talk about **supply curves** and **demand curves** as a function of prices. Or to be snootier, **supply functions** and **demand functions.** Since supply and demand are always equal, the point where the two curves *meet* tells us the price level.
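As a sketch of how the meeting point of the two curves determines the price, here is a toy bisection search on invented linear curves. The specific numbers are mine, chosen purely for illustration; nothing about them is standard.

```python
def demand(price):
    """Buyers drop out as the price rises (invented linear curve)."""
    return max(0.0, 100.0 - 10.0 * price)

def supply(price):
    """Sellers enter as the price rises (invented linear curve)."""
    return 20.0 * price

def equilibrium_price(demand, supply, lo=0.0, hi=1000.0, tol=1e-9):
    """Bisect on price: below equilibrium, demand exceeds supply;
    above it, supply exceeds demand."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if demand(mid) > supply(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

p = equilibrium_price(demand, supply)
# Solving 100 - 10p = 20p by hand gives p = 100/30, about 3.33 francs,
# at which point quantity supplied equals quantity demanded.
assert abs(p - 100.0 / 30.0) < 1e-6
assert abs(demand(p) - supply(p)) < 1e-3
```

The tautology "supply equals demand" does the work here: it is exactly the condition that pins down which point on each curve the market actually reaches.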
"Hold on," you say. "If supply equals demand by definition, then how would we talk about the old days of the Soviet Union, where there were 1000 people who wanted toilet paper and only 400 rolls of toilet paper? And it was illegal for the store to raise the set price of toilet paper? In this case, didn't demand just directly exceed supply?"
What was seen at Soviet supermarkets was long lines of people waiting outside before the store opened, trying to get to the toilet paper before it was all gone. In this case, the *time spent in line* is by definition part of the 'price' of a roll of toilet paper. No matter how the demand curve slopes--no matter how much people want toilet paper, and how reluctant they are to give up--the time required to stand in line will increase until 600 people give up and go home. And then the monetary price of the toilet paper, *plus* the time required to stand in line, is the "price" at which the demand function equilibrates with the supply function.
This system wasn't good for the Soviet Union because the time spent in line was burned, destroyed, in a way that produced no additional goods. When the price is allowed to go up under capitalism, the buyers may indeed spend more money; but just passing that money around doesn't *destroy* wealth the way that standing in line destroys time. (The flip side is that everyone ultimately has an equal supply of time and not everyone has an equal supply of money. Poorer people *can* sometimes benefit relative to non-poor people when part of a good is repriced in time rather than money. Although of course sufficiently rich people will just hire someone else to stand in line.)
If you set up a cart selling \$20 bills for \$1 each, a line will form in front of the cart and extend until it's long enough to burn \$18.99 worth of time, as priced by the people who could otherwise earn the least money per hour. You can't *actually* sell \$20 bills to anyone who wants them at a price of \$1, and have that be the real entire price.
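The arithmetic of that queue can be sketched in a couple of lines. The \$5/hour and \$19/hour figures below are assumed for illustration; only the \$20-for-\$1 setup comes from the example.

```python
def breakeven_wait_hours(face_value, price, hourly_time_value):
    """Hours of queueing at which buying stops being worth it for
    someone who values their time at hourly_time_value per hour:
    the monetary gain divided by the cost of an hour in line."""
    return (face_value - price) / hourly_time_value

# Someone whose time is worth $5/hour will queue up to 3.8 hours
# for a $19 gain; someone at $19/hour will queue at most 1 hour.
assert breakeven_wait_hours(20.0, 1.0, 5.0) == 3.8
assert breakeven_wait_hours(20.0, 1.0, 19.0) == 1.0
```

The line equilibrates at the wait tolerated by the people with the cheapest hours, which is why the time-price ends up set by "people who could otherwise earn the least money per hour."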
A similar dynamic plays out every year when the organizers of Burning Man once again try to defy the laws of mathematics as well as capitalism in order to sell 70,000 tickets at a fixed price of \$500, into a market of way more than 70,000 people who are willing to pay significantly more than \$500. They also try really hard to outlaw resale at higher prices; if Burning Man finds out about the resale, your tickets are invalidated, and your car will be turned around at the last minute and told to go home. The resulting scramble... well, I've heard it suggested that, in this case, the real "price" of the ticket is pulling on your social connections in order to obtain Burning Man tickets for a properly nominal price of \$500, via trading favors with people who were savvy enough to sign up for the sales lottery. Thus selecting socially savvy people to attend Burning Man. I've never gone there.
## When subsidy is futile and will be assimilated
Imagine a city under siege, where there are only 3,000 loaves of bread to be had, and each loaf is selling for 2 golden crowns. Can the governor of the city ease the plight of the people by ordering the treasury to pay 1 golden crown per loaf? The people will still be hungry -- there's still only so much bread to go around -- but perhaps their financial sorrows can be eased?
No, in fact. The price of the 3,000 loaves of bread rises until all but 3,000 people drop out of the market for purchasing it. If there are 3,000 people willing to pay 2 golden crowns for a loaf of bread, and the governor steps in and says the city will cover 1 golden crown of the price of each loaf, then there will be 3,000 people willing to pay 2 golden crowns plus the 1 golden crown subsidy. The price of each loaf rises to 3 crowns, and the 3,000 people buying it are 2 golden crowns out of pocket, the same as before.
In economitalk, you'd say that the supply was **inelastic:** this supply curve is entirely horizontal with respect to price. No matter how much more people pay (the x axis), the supply of bread stays the same (the y axis).
The demand curve does slope, in this example; it slopes downward as usual with increasing price, until the demand curve crosses the flat supply curve at a price point of 2 golden crowns.
If the governor then adds a subsidy of 1 golden crown to each purchase of bread, we are in effect *shifting the demand curve to the right* along the x-axis: the demand for bread at 2 golden crowns is now the demand level we would previously have seen at a price of 1 golden crown. We have made a change of variables: Demand′(x) = Demand(x − 1 crown). Since the supply curve is flat, this just means that the two curves meet at a new price that is 1 golden crown greater than the old price.
In other words: It doesn't help any buyers to try to subsidize a good in inflexible supply. The only way subsidies can *possibly* help buyers *at all,* is if increasing the price of the good causes more of the good to exist.
And by this I don't just mean that we check whether the supply of a good is allowed to increase, and if it can increase, then subsidies are allowed to help people. When you hand out subsidies evenhandedly to anyone who wants to buy X, *the only mechanical means* by which the people receiving the checks have any more money *at all* is through the medium of increasing the supply of X. Everything else is just a change of variables and shifting the demand curve to the right.
The idea that the person getting a check for \$120 is better off because they now have \$120 is entirely an illusion. The only mechanical means of transmission by which this person has more money in the bank, at the end of the day, is whatever extent the supply of X goes up; and their bank accounts will end up at the same level as a strict function of the supply of X, irrespective of the nominal amounts listed on the checks and whether the subsidy checks are taxed. To whatever extent the supply of X is not increasing, the act of placing money into people's hands has exactly as much effect on their bank accounts as praying to a golden statue of a twenty-dollar bill (stipulating arguendo that we live in a universe where praying to golden statues of things doesn't work).
And yet you will observe that in all public political discourse that makes it onto TV, all the sober talking heads in business suits are talking as if by subsidizing people with \$120 checks we are causing their bank accounts to go up by \$120, rather than talking about how many new universities or doctors or houses the \$120 checks will cause to exist.
This is genuinely crazy. This is not a Republican-economist point or a Democratic-economist point. I know of *no* framework of economics in which cutting everyone in the market the same subsidy check makes them have more money at the end, *except* insofar as supply happens to increase and their retained money is a strict function of supply. That the sober talking heads on TV are talking otherwise is mass civilizational insanity. It's not like the conversation about global warming where at least *in principle* we could be wrong about the empirical effect of increasing carbon dioxide on future global temperatures. It's not even like evolutionary biology, where at least the enormous mountain of empirical evidence is complicated enough that you might need to spend a few days reading to understand. The public conversation about subsidies is patently illogical. It is as if the sober talking heads on TV in their business suits were having deep conversations about whether to pray to a golden statue of a \$20 bill or a \$100 bill. There *is no mechanical effect* on bank accounts of universal subsidy checks that is not entirely mediated by, and quantitatively the sole function of, the final supply level of goods.
## Skyrocketing prices
A price can only skyrocket when a gently sloping demand curve meets a gently sloping supply curve.
If the demand curve slopes sharply downward, this describes a state of affairs where, as the price of the good increases, lots of people *rapidly* become unwilling to pay and drop out of the market before the price has moved very far.
If the supply curve slopes sharply upward, the price can't increase very far. As the price goes up, the market is rapidly flooded with more sellers seeking to produce the good.
But if the city of San Francisco refuses to build more than a handful of apartments, and a large number of people in search of jobs *really* want to live in San Francisco and do *not* want to be driven out even as prices go up... then prices go *way* up. Until, however reluctantly, the least wealthy people are driven out.
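As a toy numeric sketch of this point, with all curve slopes invented for illustration: the same rightward demand shift barely moves the equilibrium price when supply responds strongly to price, and moves it enormously when supply barely responds.

```python
def equilibrium(d_intercept, d_slope, s_slope):
    """Solve d_intercept - d_slope * p = s_slope * p for the price p,
    where both curves are linear in price."""
    return d_intercept / (d_slope + s_slope)

SHIFT = 60.0  # extra demand at every price, e.g. new job-seekers arriving

# Responsive supply (steeply sloping upward): sellers flood in as price rises.
responsive = (equilibrium(100 + SHIFT, 1.0, 50.0)
              - equilibrium(100, 1.0, 50.0))

# Rigid supply (nearly flat): almost no new units appear at any price.
rigid = (equilibrium(100 + SHIFT, 1.0, 0.1)
         - equilibrium(100, 1.0, 0.1))

assert responsive < 1.5   # elastic supply: the price creeps up
assert rigid > 50.0       # near-fixed supply: the price skyrockets
```

Same demand shock, wildly different price outcomes, and the whole difference is in how much extra supply a higher price can call forth.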
"I already knew that," you say. Okay, but now consider health care and college.
In the United States there are, if I recall correctly, only 350 people allowed to become orthodontists every year. That is how many residents the orthodontic schools have decided to accept. Everyone else gets turned away.
For as long as that *deeply* entrenched system holds, it can accomplish literally nothing to try to subsidize orthodontics. You cannot cause more poor children to have braces by having the government offer to pay part of the costs of orthodontia. 350 orthodontists per year can only put braces on a limited number of children. No amount of shuffling money around or tweaking insurance regulations can allow more children to have braces than that.
Once upon a time after World War II, the US government passed a GI bill subsidizing college for military veterans. And in actual practical reality... a lot more people ended up going to college! So there must have been more colleges springing up, or existing colleges must have expanded to serve more students. That's the only possible way that more total people could have gone to college.
Later, US society decided there still weren't enough people going to college. So the US government offered to subsidize insurance on student loans, in order to allow more students to get bank loans to pay for college. And what happened that time... is that US college prices skyrocketed. So now people are leaving college with a crippling, immiserating load of student debt, and if you don't like that, don't go to college.
So: the second time a subsidy was tried, there was some increased barrier or difficulty to starting a new college. Or at least, it was hard to start a new college that the new students actually wanted to go to. We can also infer that the demand function sloped down very slowly with increasing price: lots of people *really really* wanted to go to college. Or really really wanted to go to a college with an existing reputation, instead of a new for-profit college that nobody had heard about. And existing reputable colleges were unwilling or unable to expand. We know all this, because otherwise the price couldn't have skyrocketed.
You can't *effectively* subsidize housing unless higher housing prices can cause there to be more housing.
You can't effectively subsidize education unless higher tuitions can cause new attractive universities to spring up, or old reputable universities to expand.
There's literally nothing you can do to cause more people to have more healthcare by moving around insurance premiums, unless the resulting higher prices are causing more doctors and more hospitals to exist.
All this **cost disease** isn't a simple issue for our civilization, and economists don't agree on exactly what's happening, let alone what to do about it.
On the other hand, politicians almost always *only* talk about demand -- who purchases the good, how much they pay. They talk about supply not at all. They talk about mortgage tax deductions and federally insured mortgages; restrictions on building apartments, not so much. When costs go up, they talk about the need for higher subsidies instead of asking why supply isn't already expanding. "Where are the new colleges?" they don't cry. "How do we have so few hospitals, with prices so high?" they don't say. "We must do something about the limited number of orthodontic residencies so that more children can have braces!" you will never hear any legislator say. Which is, on standard economics, a recipe for prices that never stop going up.
If you were around for the 2008 Presidential election in the US, you may remember a case where some gasoline refineries had broken down, and thus the US was experiencing a gasoline shortage with correspondingly high prices.
And there was floated a proposal to temporarily repeal some gasoline taxes...
...which would have been a *pure gift* to existing gasoline refineries. The total supply of gasoline was price-unresponsive; it physically couldn't go up until new refineries were built or repaired. The situation was almost perfectly analogous to the city under siege in which there were only 3000 loaves of bread and no more could be made.
Pretty much every academic economist in the United States, Republican and Democrat alike, agreed in unison: "Lifting the gas tax will not change prices at the pump. It won't even cause gas stations to make more money. Literally the only people who benefit from shifting this demand curve to the right are the owners of gasoline refineries."
Presidential candidate John McCain was in favor of temporarily lifting the gas tax. Presidential candidate Hillary Clinton was in favor of temporarily lifting the gas tax. In an extraordinary moment that stunned all of us who knew economics, Barack Obama came out *against* lifting the tax.
Of course McCain and Clinton almost certainly knew why the measure would be futile. And they probably weren't in the pay of gas refinery owners either. What's going on, I think, is that political journalists *believe* that voters are too stupid to understand literally any abstraction whatsoever. It doesn't matter whether or not journalists are correct about that, because so long as journalists *believe* that, they will report on any discussion of economics as a blunder in the political horse race. And politicians know that being reported on as having 'blundered' can be fatal. So McCain and Clinton didn't dare publicly oppose decreased gas taxes, even though they both knew it was stupid. Or something.
I haven't the tiniest idea what to do about that.
The **cost disease** across housing, education, and above all, medicine, seems like it could all by itself smash an otherwise pleasant outcome where everything gets cheaper as a result of automation.
I have no idea how anyone in AI or machine learning can do anything to help solve this.
Somehow or other, the political equilibrium seems to naturally forbid any politician to mention, at all, what economics would suggest as the actual key elements of the problem.
I have no good ideas on how to solve that either.
What standard economics does say is that you *can't* solve a cost disease by having the government pay more or trying to move existing subsidies around. You can't help it with a startup that makes doing medical paperwork more efficient. To make costs come down, you need to either (a) make people stop wanting surgeons, bachelors' degrees, and bedrooms; or (b) you need to somehow make more of those things exist, in a world where supply is currently not increasing even as the prices are already skyrocketing.
## Absorbed costs
At this point somebody usually points out that various studies are showing that insurance companies, or doctors, or whoever, are not making an excess profit.
There's a phenomenon here I don't fully understand, which looks to me something like this: when a good is in restricted supply and high demand, weeds start to grow on the supply chain.
This isn't just a financial phenomenon. In the US there was a recent controversy about whether medical residents, doctors-in-training, ought to be allowed to work 30-hour-shifts -- 30 hours on the job without a break -- or whether limiting them to 24-hour shifts would be safer. Trying to organize my mental understanding of this phenomenon, it seems to me that this could not happen if not for the fact that there is a huge oversupply of people who want to be doctors, compared to people who are allowed to become doctors. So if you impose this kind of horrible agony, the students don't flee, but stick around.
It's not that anyone is deliberately sitting down to think about how to torture students. It's not even that the medical school has a financial incentive to do it. My guess is that there's a noise, an entropy, a carelessness, that is present in organizations by default; and the only thing that *opposes* this entropy is if it threatens the organization's survival. So long as the organization can go on operating and making money *even if* it is torturing its medical residents, there just isn't enough counterforce to oppose the entropy that operates by default to make people's lives horrible.
Similarly with universities and the explosion of administrative costs. If we were thinking of the universities as intelligent beings trying to maximize their profits, they would be opposing administrative expansion regardless of the overall industry situation; every dollar paid to a needless administrator is a dollar out of their own pockets. But that's not actually how large organizations work; they have no such unified will. There's an entropic force that adds administrators and paperwork by default, and so long as the university's survival is not threatened by the present level of entropy, so long as the college goes on getting enough students and tuition and alumni donations to keep functioning, then there isn't the organizational will to oppose that entropy. Most individual people aren't absolutely driven to maximize their earnings and minimize their expenses, so long as they can afford their apartments and not lose their jobs. We should expect this tendency to be even greater for large organizations where no one executive suffers all the organizational inefficiencies as their personal out-of-pocket losses. Why should any particular person drive themselves mad trying to stop it, especially if other parts of the organization are unenthusiastic about supporting the effort? I know little about the empirics of this field, but what little of the literature I've read suggests that in practice, firms seem motivated to cut costs when they must do so to stay competitive -- to survive at all in the market -- more than they automatically do so out of an organizational will to save every possible dollar.
For whatever reason, whether or not the above story is anything like correct, there does seem to be some analogue of Parkinson's Law ("work expands to fill the time available for its completion") which says that in conditions of restricted supply and low competition between suppliers, costs and inconveniences and barnacles expand into the excess wiggle room so created, whether or not anyone profits thereby. Lots of medical residents want medical residencies in restricted supply, so medical residencies become horrible, expanding into the wiggle room provided by that demand. If university tuitions are skyrocketing because of supply restrictions and subsidies, then administrative costs will expand into that slack. On my hypothesis this is because of background entropic forces that no longer pose threats to organizational survival, but for whatever reason it certainly does seem to happen.
Which implies that the massive amounts of paperwork and administrative costs in the medical industry are not the cause of high prices for medicine, they are *caused by high prices for medicine.*
Restricted supply and subsidy means the prices *must* be high to equalize supply and demand. Since they have to be high, and especially since custom makes it look bad to just take out all that money as shareholder profit, there is a vast wiggle room into which will expand barnacles, hospital paperwork, insurance paperwork, high-cost secondary suppliers for goods and services, etcetera. On my hypothesis this is because when you are a supplier in restricted license and demand is high, these inefficiencies do not threaten your organizational survival and so they happen by default. But even if you don't buy that particular hypothesis for *why* barnacles expand to fill the wiggle room, they clearly do.
So there's no point in trying to fix the price of medicine by trying to eliminate all that inefficient paperwork. It will just grow back as barnacles somewhere else. It never caused the price increase in the first place. There had to be a skyrocketing price to balance the subsidized demand with the restricted supply, and something automatically filled the wiggle room created by that price.
# Rising rents
From [a speech by Winston Churchill in 1909](http://www.landvaluetax.org/current-affairs-comment/winston-churchill-said-it-all-better-then-we-can.html):
> Some years ago in London there was a toll bar on a bridge across the Thames, and all the working people who lived on the south side of the river had to pay a daily toll of one penny for going and returning from their work. The spectacle of these poor people thus mulcted of so large a proportion of their earnings offended the public conscience, and agitation was set on foot, municipal authorities were roused, and at the cost of the taxpayers, the bridge was freed and the toll removed. All those people who used the bridge were saved sixpence a week, but within a very short time rents on the south side of the river were found to have risen about sixpence a week, or the amount of the toll which had been remitted!
Of course this outcome was an *inevitable* consequence of there being a limited amount of housing on the south side of the river. The landlords *couldn't* have refused to raise those rents--they could instead have introduced a new hidden price in the time required to apply for apartments, or the social capital and pull required to get the apartments, but supply and demand must equalize. If housing was relatively unrestricted, the higher rent might later induce the construction of more buildings on the same land; but that couldn't happen instantly when the bridge toll dropped.
"Rent" has a number of different complicated definitions in economics, most of which are pretty much equivalent for our purposes. I googled around briefly and picked one that I liked:
"The essence of the conception of rent is the conception of a surplus earned by a particular part of a factor of production over and above the minimum sum necessary to induce it to do its work." (Joan Robinson.)
Land doesn't need any inducement to go on existing and supporting buildings; the supply of land has nothing to do with the price of land. So any part of the price being paid to use a building, which derives just from the price of the land underneath the building, is "rent" in the economic sense. Whatever part of the cost is necessary to induce a janitor to keep the building clean, is not "rent" in the economic sense, even if it shows up in the monthly rent payment in the colloquial sense of rent.
I expect most readers coming in will already have heard of rents and have an idea that rent-seeking is a public choice problem. For purposes of discussing what to do about automation-driven unemployment, especially analyzing notions like the basic income, we are not interested in rent as a generic public choice problem. We are worried about a particular kind of rent that increases to soak up all the benefit whenever we try to help people. Also from Churchill's speech:
> In the parish of Southwark, about 350 pounds a year was given away in doles of bread by charitable people in connection with one of the churches. As a consequence of this charity, the competition for small houses and single-room tenements is so great that rents are considerably higher in the parish!
Now imagine that instead of being given bread, they'd been given a basic income. The result wouldn't have been exactly the same--if everyone in the country was getting the basic income, competition for those particular houses would not have been as great. But you can see why this *particular* kind of rent increase is of particular concern.
## The economic definition of a quantity of rent
Consider taxi medallions in New York City, before and after Uber (illegally) busted their (legal) cartel. Now that Uber is around in 2017, there are around 650,000 rides per day (300,000 taxi rides and 350,000 Uber+Lyft) in NYC. In 2010 there were around 450,000 taxi rides per day. The difference between these numbers is due to a previous legal limit on the number of rides; only 13,605 taxi medallions were allowed to exist. These medallions cost on the order of \$1 million and were owned by holding companies that extracted huge portions of the taxi fares from the actual taxi drivers.
I expect you are probably already familiar with this overall situation, and I mention it just to exhibit the technical definition of rent.
In the non-medallion-constrained equilibrium, suppose that:
- The supply-demand equalizing price is \$5/ride.
- The 20,000 people who are willing to accept the least payment to drive cars, become drivers; the price that is so low as to induce all but 20,000 drivers to leave the market is \$5/ride.
- The 650,000 people who are willing to pay the most for rides, become riders. The price which is so high as to induce all but 650,000 people to give up on taxi rides, is \$5/ride.
We now restrict the number of taxi medallions to 13,605; an inelastic supply of taxi medallions is now required as a new factor of production for taxi rides. As a result:
- The number of rides goes down.
- The price of rides goes up.
- The 13,605 people who will accept the least payment to drive cars, become drivers; the price so low as to induce all but 13,605 drivers to leave the market is \$4/ride.
- The 450,000 people who will pay the most for rides become riders. The price so high as to induce all but 450,000 riders to give up is \$7/ride.
Then (\$7-\$4=\$3)/ride will go as **rent** to the owners of taxi medallions. This rent is derived from control of an inelastic bottleneck on the production of rides. To say that this supply is inelastic is to say that there would be no less of it, on the margins, if the price were marginally less; paying \$2/ride instead of \$3/ride to the medallion owners would not induce some medallions to give up on existing. It is in this sense that, by the definition of rent, the **rentier** is *on the margins* being paid to do nothing; whatever part of the money is "rent", is by definition not there to induce somebody to do more work and create greater supply.
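The arithmetic can be sketched in a few lines. The inverse-demand line below is just fitted to the two stylized equilibria above; nothing here is real NYC data.

```python
# Inverse demand fitted through the two stylized points above:
# 650,000 rides/day clear at $5/ride; 450,000 rides/day clear at $7/ride.
def rider_price(q):
    """Price at which exactly q riders remain in the market."""
    return 5.0 + (650_000 - q) * (7.0 - 5.0) / (650_000 - 450_000)

q_restricted = 450_000   # rides/day under the 13,605-medallion cap
driver_price = 4.0       # pay at which 13,605 drivers still drive

rent_per_ride = rider_price(q_restricted) - driver_price
print(rent_per_ride)                  # 3.0
print(rent_per_ride * q_restricted)   # 1350000.0 per day, to medallion owners
```

None of that \$3/ride does any work of inducing supply; it exists only because the bottleneck exists.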
Imagine that aliens magically agree to fund the New York state government, and New York repeals the state income tax. The people in NYC become richer; they gain more purchasing power; they are willing to pay more for their goods; their demand curve is shifted to the right.
If taxi medallions exist:
- The inelastic supply of rides stays the same.
- The consumer price of rides goes up.
- All of the resulting increase in price is captured by taxi-medallion owners, none by taxi drivers.
If taxi medallions *don't* exist:
- The price of taxi rides increases, but not by nearly as much, because supply is elastic and can rise to meet the increased purchasing power.
- The increase in price goes to the taxi drivers.
- Taxi riders get more rides.
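The incidence logic in these bullet points can be made concrete by fitting straight lines to the same stylized taxi numbers; the \$1-per-ride demand shift is invented for illustration.

```python
# Who captures a rightward demand shift (everyone willing to pay $1 more)?
# Both lines are fitted to the stylized taxi numbers above.

def rider_price(q, shift=0.0):
    """Price at which exactly q riders remain; through (650k, $5) and (450k, $7)."""
    return 5.0 + (650_000 - q) * 2.0 / 200_000 + shift

def driver_pay(q):
    """Pay needed to field drivers for q rides; through (650k, $5) and (450k, $4)."""
    return 5.0 + (q - 650_000) * 1.0 / 200_000

# With medallions: quantity is pinned at 450,000, so the shift lands
# entirely on the riders' price and becomes extra rent.
q = 450_000
print(rider_price(q, shift=1.0))             # 8.0 (was 7.0): riders pay more
print(driver_pay(q))                         # 4.0: drivers' pay unchanged
print(rider_price(q, 1.0) - driver_pay(q))   # 4.0: rent/ride, up from 3.0

# Without medallions: quantity expands until rider price meets driver pay.
q = 650_000
while rider_price(q, shift=1.0) > driver_pay(q):
    q += 1_000
print(q)              # 717000: more rides
print(driver_pay(q))  # ~5.33: the price increase goes to drivers
```

Same demand shift, opposite incidence: with the inelastic bottleneck the medallion owners absorb the whole increase; without it, drivers earn more and riders get more rides.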
### Rent-collectors don't always look like idle rich
Although this essay is supposed to mostly *not* be about justice and morality, economic rents are an obvious flashpoint for concern about unfair deserts and social parasites. So I note parenthetically that rent passing through a person's hand doesn't mean that person corresponds to the stereotype of an idle rentier lying back and being lazy.
In many cases, the metaphorical taxi cab drivers also own the metaphorical medallions. In these cases part of their pay is the amount that they get as metaphorical taxi drivers for their elastic supply of labor, and part of their pay is the pure rent on controlling an inelastic supply of medallions. But all of it *looks to them* like they are being paid to do work.
This is what it is like to be an orthodontist in the USA when only 350 orthodontists are allowed to graduate per year. You are still running around all day putting on braces, but that is not where *most* of the money flowing into your office is really coming from. As an orthodontist you almost certainly won't have the slightest understanding of that; of course you deserve your salary, you work hard all day long!
One also observes that *somebody* has to be an orthodontist; somebody has to fill out the ranks of those 350 people and make the braces. An orthodontist is only personally guilty of socially destructive behavior if they personally make some choice that supports the limit on graduating 350 orthodontists per year.
Furthermore: The people who actually collect the enormous cash prizes are often dead and buried by the time the present day comes around.
My landlord owns a house in Berkeley. It was an expensive house at the time he bought it; the possible increase in future rents was already baked into the price. He had to take out a loan to buy it. He has to make ongoing payments on those loans. His ownership of the piece of land his house is standing on, is idly generating free rents; but my landlord is not seeing any of that money. He does not get free profits and an easy life.
In turn, the bank that gave my landlord a loan is getting some real interest on the actual investment, and some amount of rent in virtue of it being one of a limited number of entities that are legally allowed to be a bank and make loans to people like my landlord. But somebody who owns *shares* in the bank did not get those shares for free. And so on.
If Berkeley repealed all the housing restrictions tomorrow, my landlord would be screwed. That's what happened to the owners of taxi medallions in New York City, many of whom took out loans to buy the medallions, when Uber and Lyft came along.
When the original rent-seekers decades earlier talked bureaucrats into creating supply restrictions (for the good of the dear people who must be protected from bad suppliers, of course!) a new *necessary* factor of production was created. Decades later, people must take out loans to buy that factor of production; and so they make no excess profit. There are no twenty-dollar bills lying in the street; there is no easy way to become an idle rentier and never have to work again.
Observe that these people now have a *strong* incentive to keep the supply restrictions in place.
This is technically termed "**rent-seeking**." But that term sounds like we're talking about Elsevier grabbing all the academic journals and charging monopolistic rent for sitting back and doing nothing, where before journal subscriptions were cheap. Of course Elsevier is not unique; there are plenty of villainous rent-seekers trying to actively tighten supply restrictions, or building monopolistic fences around goods that are cheap to produce. But in practice, "rent-seeking" often also comes from people who aren't seeing any excess profit themselves and would be completely screwed over if the supply restriction were busted.
As it happens my landlord is a libertarian, pardon me, neoliberal. So far as I know, he has never personally voted to maintain a housing restriction. But I wouldn't see it as Elsevier-style mustache-twirling villainy if he did.
## Monopolistic rents and the Ferguson Police Department
Before the city of Ferguson became a national flashpoint due to Michael Brown being shot by the local police department, Ferguson had [32,975 outstanding arrest warrants for nonviolent offenses, in a town of 21,000 residents](http://www.huffingtonpost.com/nathan-robinson/the-shocking-finding-from-the-doj-ferguson_b_6858388.html). (By comparison Boston, with 645,000 people, issued 2,300 criminal warrants.)
Bluntly, the residents of Ferguson were being treated as cattle and milked for fines.
What happens if you try to give the residents of Ferguson a basic income?
Well, if the citizens of Ferguson were previously being milked for around as much as they can output... why wouldn't the Ferguson Police Department just issue more warrants, once the citizens can stand some more milking?
Stepping back for a moment and considering some less charged technical definitions, "**monopolistic rents**" arise when a single price-setter controls *all* of some factor of production; as if a single cartel owned all the taxi medallions in NYC. Then they can collect even higher rents, in some cases, by setting the supply lower than the maximum that factor of production allows. Maybe if you set the price of a ride at \$10, all but 300,000 taxi riders drop out of the market, who can be serviced by fewer taxi drivers, and those who remain are those willing to work for \$3.50/ride. Then the total rent you collect is (\$6.50 * 300,000), higher than the (\$3 * 450,000) you would receive if all medallions were being used.
This *is* mustache-twirling villainy: the gains from trade on 150,000 taxi rides are being destroyed in a socially negative-sum game.
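The cartel's choice, using the stylized numbers above, is a two-line comparison:

```python
# A medallion cartel comparing full utilization against deliberately
# restricted supply, with the stylized numbers from the text.
scenarios = {
    # rides/day: (price riders pay, pay drivers accept)
    450_000: (7.00, 4.00),   # every medallion in use
    300_000: (10.00, 3.50),  # supply held below what the medallions allow
}
for rides, (p_rider, p_driver) in scenarios.items():
    print(rides, (p_rider - p_driver) * rides)
# 450000 1350000.0  -- full utilization
# 300000 1950000.0  -- restricting supply collects more total rent
```

Destroying 150,000 daily trades is profitable for the price-setter, which is exactly what makes the game socially negative-sum.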
(There are non-economists who imagine that this alone is the entire business of some evil force labeled "capitalism", but let's not go there.)
Elsevier collects *monopolistic rents* on its captive journals. There are other science journals in existence, but researchers tell the university that they need *particular* journals that Elsevier controls, and the university cannot substitute other journals instead.
If you've been following along with the rest of this document, you may have previously wondered something like, "According to this overall outlook on price, if \$2,000/year is the supply-demand equalizing price on a journal, why blame Elsevier for charging that much?" The answer is that the marginal cost of production for a journal is very low; if lots of people were competing to supply a particular journal, more universities would have that journal and the price would be much lower. Since Elsevier has total control of the supply of a good that costs them very little to produce, they can make the supply curve be anything they like; and so they really are solely to blame for the point where that supply curve intersects demand.
For purposes of talking about hypothetical productivity-driven unemployment and remedies like the basic income, monopolistic rents especially matter because *they can rise to consume any increase in productivity or purchasing power.* If you have total control of a necessary factor of production, if you can at will say "no" and shut down the entire trade, you can charge whatever tariff you like on that trade. You can try to capture as much money as the trade can stand without shutting down; take nearly all of the gains from trade for yourself. If the trade becomes more gainful, you can just grab more of the gains, and all the other traders are no better off.
We could try to shoehorn the Ferguson Police Department into this view, by saying that they have a monopoly on "not being in jail" and that this is a necessary factor of production for which they can demand any price they like. But we don't actually need an economic view of the Ferguson PD; the point I mean to convey is that the Ferguson Police Department can take whatever they want from you, and the more you can afford, they more they can take.
Now consider what happens if there's *more than one* person who has sole control of one of your factors of production.
In this case, everyone predating on you faces a kind of commons problem. They all want you to survive to go on being milked, so they don't want the milking collective to take too much; but if they demand any less money themselves, that just leaves you with more money for a different milker to seize.
However this situation resolves itself, it's *really* not going to help if you try to give that person a basic income. It doesn't even help much to pry *one* of the milkers off their back, if there's at least two more poorly coordinated milkers remaining.
I sometimes go around saying, "Reading essays written by actual poor people in the US suggests that the *first* thing we could do to help them is to outlaw towing fees." But even this is just a gloss on an even worse situation: maybe it doesn't help to outlaw \$100/day towing fees so long as that just makes court costs and late payment costs go up somewhere else.
Now consider the present system of intellectual property rights. In particular, patents.
The problem with this system is not just that the US patent office goes around issuing patents on "a system that uses the letter 'e' in online communications" or whatever.
The problem is that it creates *many* parties each of whom has the theoretical ability to individually say "no" to, and shut down the production of, any good or service complicated enough to depend on more than one patent.
If you do *not* have [compulsory patent licensing with court-set fees](https://en.wikipedia.org/wiki/Compulsory_license), then why should any one patent troll--or even the holder of a rare real patent--stop short of demanding the company's entire profit?
## Bigger companies and a decline of competition
Economists generally expect there to be a story behind a monopoly rent: a **barrier to entry,** or a barrier to price competition, that prevents anybody else from strolling in and selling similar or substitutable goods more cheaply.
A number of alarming indicators suggest that developed economies, and the United States in particular, are exhibiting something like stagnant, locked-up markets: fewer startups, fewer successful startups, more goods being sold into markets with less competition. This goes along with other warning signs, like people moving between states less often; but for now let's focus on the increasing lack of competition, since those are the trends that most obviously threaten to keep prices high despite automation, or to extract any increases in income.
If you are a libertarian, pardon me, neoliberal, you will loudly observe that quite often the *barrier to entry* for decreased competition takes the form of a law, since merchants are quite good at crossing obstacles like mere mountains and rivers. Such barriers indeed commonly arise from local or national laws:
- Making it illegal to sell any similar or substitutable good.
- Making it illegal to lower the price on any similar or substitutable good.
- Imposing high fixed costs or entry costs to potential competitors, in the form of:
- The expense of regulatory compliance, barring the market to bright young entrepreneurs or anyone else without deep pockets;
- The time and delay of regulatory compliance, barring the market to anyone without deep pockets to keep going through the long delay;
- The risk of not receiving regulatory approval;
- A litigious environment in which a corporation must be large enough to support a large legal staff in order to survive.
The libertarian will then observe the existence of a "regulatory ratchet" in which the volume of law and bureaucracy seems to only increase over time within a country (or, for that matter, an individual large corporation); and suggest that this will be a force for decreased competition.
Conventional economics says that this is all obviously qualitatively correct, and that only the quantitative degree to which it is *the key* force responsible for an increasing lack of competition and dynamic turnover could reasonably be disputed.
Other classical forces producing large corporations are **economies of scale** and **network effects.** The ways in which regulatory burdens produce large corporations per se, and not just more expensive goods, are just because of the economies of scale and network effects in dealing with laws and regulators.
All fixed costs or up-front costs imply economies of scale, and advanced technology often has a fixed support cost or a large up-front cost. Larger markets in which it's easier to sell to all the consumers, likewise imply "economies of scale" in this sense: you pay a one-time cost in time and effort to set up with Amazon, and then you get access to Amazon's whole market. Whoever has the cheapest price on a standardized good might capture nearly all of Amazon's market within a nationality, which is the **winner-take-all** special case of economies of scale. To the extent that more goods are supplied through Amazon and fewer through regional malls, that much concentration of the market will emerge without regulatory forces.
(Legal monopolies a la patent rights and copyrights would be filed by libertarians under "Whether or not you think those laws are good ideas, they are certainly instances of the government being responsible for the largeness of the corporation.")
There are also other, weirder factors that might possibly be producing increasingly noncompetitive markets:
- It's been suggested that the rise of index funds and other broad-based funds, such that shareholders in one company are usually shareholders in that company's competitors, is statistically implicated in a decreased enthusiasm for price competition between those companies.
- Regulatory burdens often produce a duopoly or small-N-opoly instead of a monopoly. In these cases modern computing and networking seems to have enabled these few players to cooperate on what is, to them, a Prisoner's Dilemma, and do so without explicit price-fixing agreements that violate the letter of the law on antitrust. An airline with few competitors in a region, but more than zero competitors, will try raising its ticket prices a dollar overnight; the competitor then possibly raises their ticket prices a dollar; and if not, the original airline brings its price back down. This is apparently easier to do now that all the prices are online being updated every thirty seconds.
- 'Confusopolies' such as the mattress industry (Scott Adams's term, I don't know if there's an official alternative) arise when a small number of market players simultaneously act to reduce information clarity for all customers. E.g. by requiring every mattress to have a different name and price inside every store, in order to diminish the ability of customers to compare alternatives and select the cheapest price for the same alternative. The few players tacitly cooperate on confusion, rather than engaging in more visibly illegal price collusion.
- When an area of economic activity is already dominated by large companies for whatever reason, these large companies are often highly bureaucratic themselves. The purchasing departments of these large companies then act like their own little regulatory environments, or they prefer to deal with existing partners, or with big-name reputable suppliers, or you'd need a costly specialized sales team to glad-hand the purchasing department, etcetera. So bigness is in this sense contagious: big bureaucratic companies with highly bureaucratic purchasing departments, will thereby advantage big suppliers in that segment of the economy.
Eyeballing this landscape myself, it seems to me that there's a lot of force to the libertarian-pardon-me-neoliberal thesis which suggests that a pretty large amount of this non-competitiveness phenomenon would somehow go away if we could, e.g., diminish regulatory and litigatory burdens by a factor of 10 and reform intellectual property rights. Arguendo, regulatory barriers to entry are what initially create an equilibrium where only one, two, or a small handful of companies sell the goods you need. Then once the market is already controlled by large companies, other factors like coordinated pricing can become a problem for consumers, *and* big corporate bureaucracies become a new barrier to entry for any related companies that need to interact with the big players.
But so far as principle goes, there are forces listed that have nothing obvious to do with the government, such as index fund shareholders. There are forces creating large corporations that aren't *just* about regulatory barriers, like economies of scale. I can't think of anything offhand I've read that presents a well-researched case about the quantitative extent to which all these forces are "the problem". Eyeballing the landscape, it seems to me that there are a *lot* more entrepreneurial dogs not barking which are silenced by a law or regulatory compliance burden or threat of litigation, than are being silenced by Google and Facebook being big enormous companies; but this kind of eyeballing is not a substitute for careful investigation.
If you hear anyone mentioning that not all inequality is bad, they are, hopefully, referring to the point that some inequality comes from manufacturing economies of scale that have nothing to do with regulators, and winner-take-all markets that are being won by lowest prices or best goods. At least in isolation, and not considering knock-on effects, these inequality-producing forces are positive-sum and generally beneficial. Other inequality is produced by monopolistic rent extraction that ultimately derives from regulatory barriers or other declines in price competition, which is a negative-sum phenomenon. The good reason to be alarmed by rising inequality is if it's coming from negative-sum rent extraction driven by decreasing price competition. The statistics on dynamism suggest that this may in fact be the case in an increasingly large slice of the economy.
And again to restate the main point: this in-practice empirical trend toward more and more economic interactions being with larger and larger big corporations that stay around for longer and longer, is not just a problem because of inequality per se or a less dynamic society. It's a problem because it increases the extent to which (a) technological productivity increases are unlikely to decrease prices, and (b) any increased income is liable to be extracted in the form of higher prices elsewhere.
All the technology in San Francisco has not made computer programmers there *nearly* as much better off as one might naively expect from comparing their nominal salaries to Montana salaries. A huge class of non-computer-programmers ended up actively worse off than before Google came into their lives, *mostly* because of skyrocketing apartment rents. Today, increased productivity is being converted into gloom by housing restrictions; but tomorrow it could be decreasing competition and an increasing prevalence of effective monopolies and duopolies.
# Labor markets failing to clear
Markets are said to **clear** when all the matched buyers and sellers who would be willing to trade at a mutually agreeable price, are actually trading.
Clearing a market doesn't happen automatically. Clearing a market that wasn't previously clearing can be a huge innovation and sometimes even a profitable one. Craigslist didn't literally *clear* the market for everyone willing in principle to sell old laptops or buy old laptops, nor did Ebay, but they moved the market closer to clearing and sparked a lot of trades that didn't happen before.
If I am shouting "I'd love to sell an apple for 40 cents!" and you are shouting "I'd love to buy an apple for 40 cents!" and the two of us are separated by a gaping chasm in the Earth that prevents us from ever meeting one another, we can say this apple market has **failed to clear.**
Of course one could also say that we're merely unwilling to pay the non-monetary prices of climbing down and up the chasm. But at that rate, you might as well say that when it's illegal to sell apples for less than fifty cents, we're merely unwilling to pay the non-monetary price of risking jail. So to make the definition non-tautologous, and allow us to talk about markets sometimes *not* clearing, we shouldn't consider it mandatory to use all-embracing definitions of price.
What about sales tax? If there's a city sales tax of two cents an apple, you are effectively buying the apple for 42 cents (from you) and I am effectively selling it for 40 cents (to me). We can't sell/buy for 41 cents, even if that's a mutually agreeable price. It might seem like smuggling libertarianism in sideways to declare that things like sales taxes are not allowed to count as a legitimate part of the price of an apple. Why not say that the apple comes packaged with the extra good of complying with local laws and supporting your city government? Someone has to pay for the police that prevent you from being outright murdered, so why engage in the fantasy of an entirely taxless environment etcetera etcetera.
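A minimal sketch of the wedge arithmetic (reservation prices invented for illustration): a two-cent tax blocks exactly those trades whose total gains are smaller than the wedge.

```python
def trade_happens(buyer_max, seller_min, tax):
    """A trade can clear if some price leaves the buyer paying at most
    buyer_max (price plus tax) while the seller still receives seller_min."""
    return buyer_max - tax >= seller_min

# Buyer values the apple at 41 cents; seller needs at least 40 cents.
print(trade_happens(41, 40, tax=0))  # True: any price in [40, 41] works
print(trade_happens(41, 40, tax=2))  # False: the wedge exceeds the 1-cent surplus
```

Trades with large gains still happen and simply split the tax; it's the marginal, small-surplus trades that silently vanish.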
I'm not sure how academically standard the following stance is, but:
It seems to me that the notion of a 'clearing market' and what exactly counts as a 'price', is a flexible point of view; we can change the variable definitions and get a different but still reasonable answer. For example, there are people on Silk Road 2 who *are* willing to accept a risk of arrest as part of the price of buying and selling drugs. If you define prices as being allowed to include the risk of arrest, we can talk about how close Silk Road 2 comes to clearing that market. If we then tilt our head the other way and see the 'market' as the people who would buy and sell drugs at purely nominal prices if that were legal and carried no risk of arrest, Silk Road 2 isn't remotely close to clearing that market so defined.
For purposes of *considering technological unemployment* and what we will consider to be 'clearing the labor market', I think it makes a lot of sense to factor out literally everything from the prices that could possibly be factored out. Not just minimum wage laws, not just sales taxes, but even things like income taxes and associated paperwork. If I would trade an hour of babysitting for 10 apples, and you would sell me 10 apples for an hour of babysitting, but we aren't willing to trade if you need a business license and I have to work an extra half-hour because of income taxes, I think it makes sense to view this conceptually as a trade that could in principle clear, but isn't clearing. %%note: If you assume that we're considering markets clearing to always be good things (false: consider the market for nonconsensual sex), it would be an aggressive moral stance even from the standpoint of big-L Libertarianism for me to define a labor market as not clearing *because of income tax* and claim that this already establishes a problem that must be fixed by any means necessary.
I however am not advocating this as a moral stance in which we assume it must be desirable for the market to completely clear and that every possible means to that end must be taken. I think that defining the clearing of the labor market in the most aggressive possible way, is a *conceptually useful* way to organize our thoughts about potential consequences of technological unemployment and policy responses. %%
Because: If the labor force participation rate is already dropping like a stone, and some people are making not-outright-stupid predictions of a huge new tidal wave of automation coming in, then you should be willing to consider a *lot* of options, cast a very wide net for policies to analyze. And this includes looking at possibilities like e.g. decreasing payroll taxes or sales taxes on humans who are having trouble trading their labor for apples. So it makes sense to take a step back and look at everything that contributes to or interferes with "the labor market clearing" in the broadest possible sense: literally everyone who'd be willing to trade babysitting and apples if there were literally no other obstacles in the way.
## Labor mobility
One of the things that can prevent a labor market from clearing is if we are both willing to trade with one another, but you are at point A and I am at point B, and it is hard for either of us to move.
### Once again, skyrocketing rents
Okay, yes, I know, you're probably a little sick of hearing about it by now, but *once again:*
If I want to trade my babysitting for your apples, and you live in San Francisco, and I live in Montana, and I can't afford to move to San Francisco, we can view this as a labor market failing to clear.
This is conceptually a different problem from rents being extracted from the people who actually do live in San Francisco. We are looking here at an entirely different source of lost value, the trades that don't occur because people can't afford to live near San Francisco at all.
I've seen estimates on "how much would be gained in the US if people could afford to move to where all the new jobs are" in the range of 5-10% of US GDP, and if anything I suspect that's a severe understatement; not to mention that these gains would flow disproportionately to people who are now disadvantaged.
(No, really, those anti-housing laws are a *big damn problem.*)
### Other problems that are extra-bad problems because they also inhibit labor mobility
- Occupational licensing is extra bad because often your license in one state doesn't carry over to another state.
- No national markets in health insurance are extra bad, because you will need to redo your health insurance if you move to another state.
- State benefits such as disability or Medicaid are not portable, meaning an enormous effective income hit from crossing state lines.
### International labor mobility
For people whose circle of concern does not stop at national borders:
Estimated economic gains if all your fellow Earth humans could easily move to anywhere on Earth where jobs for humans can be found: 50% of planetary GDP.
Again, pretty high on the list of points one ought to ponder if you otherwise expect mass disemployment all over Earth.
This is one place where there's an obvious way that the AI and machine learning community in particular can try to intervene to make things better: develop better automatic language translation. Build apps for those allegedly approaching augmented reality headsets that provide subtitles for speakers, or automatically translate visible text. This will make it easier for people to relocate across national boundaries in search of jobs, in the cases where that *is* legal; and some jobs and some labor can more easily move to whatever few countries make them simultaneously welcome. Governments are not the *only* forces that ever prevent people from doing things; problems and inconveniences like language barriers also count. %%note: Be warned: nationalists will take offense that you are making it easier to trade with non-national people much poorer than themselves. They will almost certainly see it in terms of you benefiting the rich corporations inside their country, at the expense of relatively poor people inside their country whom the rich corporations would otherwise need to trade with instead. My model of their mental model is not so much that they're evil or uncaring, as that their emotions do not believe on a core level that the extremely poor people being benefited actually exist. %% %%note: Be warned: many leftists do not believe emotionally that anyone can possibly be benefiting from a trade where they are still poor at the end and the corporation is still rich at the end. Clearly the poor people are being exploited since they are still poor at the end, so clearly a bad exploitative transaction is taking place and the poor people would be better off if this did not happen. It will appear to them that your automatic translators are facilitating these evil transactions.%%
# Conclusions
Suppose tomorrow the heads of Google and Facebook and Apple and Amazon and Microsoft and Tesla and Uber and YCombinator came to me and said, "If we were to all act with one accord, is there anything we can do right now to make life better in the United States, without needing to massively reform the whole US government? You're not allowed to talk to us about anything else, just that one problem."
Then I would reply:
"You should all get together and build a new city someplace with low rents, in a state with no state-level anti-housing laws; and try to organize an understanding and commitment among the new citizens of this city to not pass any laws against building as many skyscrapers as needed. You should move as much of your companies there as you can manage, and as much of the tech industry as you can persuade to follow you, all at the same time. You should put body cameras on the police officers, and not have Mafia-run towing companies with late fees; you should insure your poorer citizens for 7 days of Uber if their cars break down, and let nothing stand in the way of cheap babysitting. You should build a university there, with serious prestige because of all the prestigious researchers you will bring there; and make sure that university is ready to take in as many students as can possibly be managed, at tuitions not far from the real cost of teaching, even if that means not having giant LED boards in the athletic centers."
"*And,*" I would continue, "*you should locate this new city in a swing state,* the largest swing state you can find such that you think this new population will be able to threaten to tilt the vote there in election years. Then your faction will not have zero political power the way it does within California and New York where your votes are worthless, and you can actually start pushing on changes like NGDP level targeting that would help with all the *other* problems."
After another moment's thought, I would add: "Your first priority on the *state* level should be unclogging the supply lines on medicine within that state; get as close as you can to outright occupational delicensing. Push price transparency laws. Build nonprofit hospitals that can use nonprofit H1B visas to bring in doctors from India and the UK: some countries produce excellent doctors that can pass the occupational licensing filter, and have supply pipelines that are less clogged. Remember, in the end, everything you do adds up to nothing for the country as a whole if it all gets sucked up in a nationally increased cost of healthcare because you are using goods in limited national supply. Now can I *please* have a word with you about--"
Honestly, if you want a story about how the tech industry managed to screw over the rest of the United States, locating all the new jobs in a region with anti-housing laws would be *number one* on my list.
We can even frame a story about how staying in the high-rent regions was selfish, a case of bad inequality in action, a decision for which one might be held morally culpable. When I asked a friend why Google didn't set up a campus outside the Bay Area so it could employ all the programmers who don't want to live near Mountain View, my friend replied that no project manager would want their project to be located outside the Bay Area because then they would be too remote from the center of political power in Mountain View. (I was surprised that court politics dominated Google to that extent, but I checked with other Googler friends and they agreed.)
And once you start looking at it from that angle, you realize that the people who decide Where The Company Shall Be Located, or where the company stays once it's established, are disproportionately people who can afford the rents. Sure, there are network effects to being in New York, there's some real benefit to the company of being there. But there are also many programmers and non-programmers who would be willing to work for less nominal money if they got to have non-cramped houses and sane commutes. Could the company grow faster that way, if it's not one of the companies that really *need* to be in New York? And I don't know the answer to that. But it's worth observing that the people who make that decision, regardless of what decision would benefit the *company* the most, are the people who personally benefit the most and suffer the least from being in New York. They personally live in apartments that aren't tiny, they personally get the city amenities, and they're not personally driving the 2-hour commutes.
Of course if I'm *not* telling a moral story and indulging in some pleasant righteous indignation, then I'd personally chalk up the observed outcome to maybe 5% personal selfishness, tops. In reality it's probably more like 95% network effects where the startup needs to be in easy car range of the venture capitalists; followed by inertial effects where the company grows but it can't move away from the city because employees have already put down roots, or it's hired people with roots. Above all, it's the Nash equilibrium where you do worse by unilaterally moving out of the high-rent region, unless everyone else you're tangled up with moves at the same time.
Nonetheless, for whatever reason it happened, it *really* screwed over the rest of the USA.
Another way in which the poor states got screwed over is that they're in the same currency region as the rich states, and their wages are anchored to the national minimum wage. Flowing money is required to pay wages; and the national minimum wage means that wages in poor states require minimum amounts of flowing money to animate. When lots of money flows to the rich states, the Federal Reserve estimates that enough money is flowing in the country as a whole, and doesn't want to create any more because then they think there would be too much inflation. Then residents of poor states can't move to the rich states *or* print more dollars *or* decrease local wages so that less flowing money can animate each job. The European Union has similar problems with enough euros flowing in Germany and not enough euros flowing in Spain or Italy or Greece.
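As a deliberately crude illustration of the flowing-money constraint (every number here is invented, and real monetary economics is far messier): a fixed regional flow of money can animate only so many jobs at a national wage floor, so a floor set for rich states caps employment in poor ones.

```python
def max_jobs(monthly_payroll_flow, hourly_wage_floor, hours_per_month=160):
    """Upper bound on full-time jobs a fixed monthly flow of money can
    support when no job may pay less than the wage floor."""
    return int(monthly_payroll_flow // (hourly_wage_floor * hours_per_month))

flow = 10_000_000  # dollars circulating through a poor region's payrolls monthly
print(max_jobs(flow, hourly_wage_floor=7.25))   # 8620 jobs at the national floor
print(max_jobs(flow, hourly_wage_floor=10.00))  # 6250 jobs at a higher floor
```

The region can't print more dollars and can't lower the floor, so the only remaining adjustment is fewer jobs, which is the eurozone problem in miniature.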
## Can basic income defeat the mysterious poverty equilibrium?
If you'll give me a moment to depart the path of standard economics, I personally have a question regarding a certain bizarre situation, regarding which economics gives no standard answer so far as I know:
Why are there still poor people in developed countries?
A thousand years ago, it used to be that 98% of the population were farmers. Between then and now, agricultural productivity went up by a factor of 100 in developed countries. If you told somebody back in 1017 CE that this would happen, they might naively imagine that there wouldn't be poor people any more. They might naively imagine that very few people, if any, would be forced to work hard from dawn to dusk. Now blink and see the paradox, the bizarre state of affairs: *Why was that prediction naive?*
*Yes,* poor people in today's developed countries have nicer shoes. They die less often. They have 20 changes of worn clothing instead of 1. Some of their houses have television sets, which was a great luxury in the 1950s and something that nobody from 1017 CE could have had at any price. *They're still poor.* Nobody from 1017 would mistake them for being 2017's rich people after five minutes of conversation. They'd be amazed that poor people in 2017 have so much stuff, sure. They would also recognize the hunted haunted looks, the debts bearing down, the desperate scrabble for work, the exhaustion and despair and the towing fees; and they would perceive that these were not the future's equivalent of a thriving, upright farmer with something to be proud of.
If a 100-fold increase in productivity did not manage to give almost everyone at least as much pride as a thriving farmer, can basic income be the last straw that breaks poverty's back? Before we can even begin to answer that, we'd need a good analysis of what the hell happened over the last thousand years that *didn't* eliminate poverty.
To my present state of personal knowledge, this looks like one of the giant inexplicable mysteries that has been staring us in the face for so long that people forget to be confused by it. By which I mean: consider how, until the early twentieth century, nobody said "Wait, what the hell is syntax and how the hell do human children learn it and go around generating sentences?" In principle, this incredibly deep scientific question could have been asked much earlier, far back in the nineteenth century, by some scientist in search of an important problem on which to found their career. *If* someone had earlier noticed what they didn't understand, and seen the incredible mystery staring them in the face, in the form of children walking around doing what everyone expected them to do and took for granted was the way that things had always been.
Why the hell doesn't a 100-fold increase in productivity eliminate lives of desperation, despair, exhaustion, hunted looks, hand-to-mouth living, and an unending fear of your car being towed?
I think the existence of a class of people that can't defend themselves from more than one milker, is probably part of the answer. It doesn't seem like nearly a complete answer. I suspect there's also some kind of weird equilibrium in which societies feel freer to destroy more wealth as people would otherwise become richer. People in 1850 didn't give themselves modern levels of regulatory burdens.
But there is *some kind* of poverty equilibrium, with restoring forces powerful enough to defeat a 100-fold improvement in productivity. I am skeptical that, after the last thousand years, a basic income will *finally* be the force that defeats poverty once and for all, especially since we don't know why poverty shrugged off all the previous assaults. I wonder if you could actually give everyone in Niger a basic income and actually have everyone in Niger be better off, rather than the village chief and the national government competing for who can seize it first, and the land rents going up, and Monsanto charging more for seeds.
Of course there are these lovely things called "experiments" that mean we can actually try things instead of just theorizing about them. And doing those with basic income still seems like a good idea. I'm just registering my worry that the restoring forces of the poverty equilibrium may not act instantly, and some of them may be dependent on regional rather than local income levels.
So you'd want to test giving the basic income to all the people in a region at once, not one person in a village (this part seems to be getting tested properly in some cases, yay).
But more importantly you'd want to watch out for the people seeming better-off at first, but then a little poorer, and then a little poorer, by the time the experiment ended 5 years later.
# Interview with AI Researchers individuallyselected_7ujun by Vael Gates

**Interview with 7ujun, on 3/18/22**

**0:00:03.4 Vael:** Alright. So jumping right in, my first question is, can you tell me about what area of AI you work on in a few sentences?

**0:00:11.1 Interviewee:** [Interviewee describes working on natural language processing research]

**0:01:20.4 Vael:** Indeed. Thanks. And then so my next question is, what are you most excited about in AI? And what are you most worried about? In other words, what are the biggest benefits or risks of AI?

**0:01:36.4 Interviewee:** The biggest risks are that there is a lot of people who don't really... Who... The biggest risk in AI is that it's a field with a lot of money and attention and social power right now, and there are a lot of people who have positions of power who... don't seriously consider what they do and the impacts of what they do. And AI models are already being used to violate people's human rights in the United States and in other countries, to commit crimes, and that's bad. [chuckle] Yeah, there's... One of the worst applications is that there has been a revival in phrenology recently, so there are a lot of police departments that they have gotten really into the idea that they can use AI analysis of video cameras to determine who is *going* to commit crimes. And this, shockingly, results in over-surveillance of minority populations and violation of human rights left and right, and it's a huge clusterfuck and the police don't care.

[pause]
**0:03:10.2 Vael:** Awesome. Well, not awesome, but. So that question was, what are you most excited about AI and what are you most worried about; biggest benefits and risks?

**0:03:19.8 Interviewee:** Gotcha. What am I most excited about? I'm excited about the opportunity to interact with computers via natural language. So one of the really interesting things about some recent research is that we've been able to move away from traditional coding interfaces for certain tasks due to the way we've been able to automate things. Probably the most prominent example of this is that there is a burgeoning online AI-generated art community, where they take pre-trained models and they write English sentences and they provide the... What the model does is it takes a English sentence input and draws a picture, and it's shockingly good and has an understanding of styles. If you want it to be... If you say in the style of Van Gogh, or high contrast, or low-poly render. You can induce visual effects by using language like that, and I think that's phenomenally cool, and it's gotten a lot of people... There's a lot of people who've gotten into using this kind of technology who otherwise wouldn't have... [who it] really wouldn't have been accessible to because of their lack of coding knowledge and understanding of AI. They couldn't have developed the algorithms that run on the backend for this on their own. Recently, yesterday I saw another blog post about how they were able to develop simple video games using GPT-3. It just wrote the code for them. I think that the ability to write a text description of something that you're interested in, which is a medium that everyone can relate to and interact with, or that most people can relate to and interact with far more than regular programming, for example, is really powerful and really awesome.

**0:05:30.8 Vael:** Yeah, I see a lot of themes of accessibility in all of these risks and benefits and work. I thought you were going to bring up Codex but yes, art generation. It's very cool.

**0:05:41.7 Interviewee:** Oh yes.

**0:05:43.8 Vael:** Yeah. So, thinking about a future AI, putting on a science fiction forecasting hat, say we're 50 plus years into the future. So at least 50 years in the future, what does that future look like?

**0:05:56.3 Interviewee:** I have absolutely no idea, and anyone who says otherwise is wrong.

**0:06:01.3 Vael:** Okay. Do you think AI will be important in it, or probably not?
[pause]

**0:06:16.6 Interviewee:** I think that's more of a sociological question than it is a technical question. The class of problems and the class of algorithms that are considered AI has changed dramatically over the past 50 years, and entire books have been written about this topic. At a basic level, my hesitancy is that I don't know what people will consider AI in 50 years. There's a very real possibility in my mind that GPT-3 will no longer be considered an AI.

**0:06:48.5 Vael:** What will be considered?

**0:06:53.3 Interviewee:** A text generation algorithm? A good example of this is simple game-playing agents, so you can write an algorithm that can play Tic-Tac-Toe perfectly or can play Connect Four or Checkers really well. Like, will beat any human. And a lot of people don't call those AIs anymore because they don't... 'Cause they're search algorithm-based. They apply a lot of computational power to look through a space of possible events, and they find the best event. And they don't really... The argument is that they don't reason, or they don't know anything about strategy. And this is often to contrast it with more recent AIs for playing games like the AlphaGo, AlphaZero models that DeepMind has produced where top level chess players certainly get beat by these algorithms just like they get beat by... have been beaten by algorithms for 20 years, but for kind of the first time people are able to study and interpret these algorithms and learn and improve their play as a human. Which is really cool. But that kind of dichotomy is often used to dismiss or remove the label of "AI" from prior work and stuff that have been considered AI at that time. And I could certainly see that happening with GPT-3, for example, because it's really, really terrible at reasoning. And if in 50 years we have chatbots that can answer knowledge-based questions the way, say, a sixth grader could, and reason about basic word problems and stuff, pass some kind of reasoning examination, then I could easily see people no longer considering GPT-3 an AI, because it's not intelligent, it's just babbling and making up words.
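The "apply a lot of computational power to look through a space of possible events" approach the interviewee describes can be made concrete for Tic-Tac-Toe. This illustrative minimax search (not from the interview) plays perfectly with no notion of strategy at all:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Score a position by exhaustive search: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, square in enumerate(board) if square == ' ']
    if not moves:
        return 0  # board full with no winner: a draw
    scores = [minimax(board[:m] + player + board[m + 1:],
                      'O' if player == 'X' else 'X') for m in moves]
    return max(scores) if player == 'X' else min(scores)

# Perfect play from the empty board: the search proves Tic-Tac-Toe is a draw.
print(minimax(' ' * 9, 'X'))  # 0
```

The algorithm never "reasons about strategy"; it enumerates every continuation and picks the best one, which is exactly the property used to argue such programs aren't AI.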
**0:08:54.4 Vael:** Right. Cool, I'm now going to go on a spiel. So people talk about the promise of AI, by which they mean a bunch of different things, but the thing that I'm most thinking about is a very general system with the capabilities to replace all current day human jobs, so like a CEO AI or a scientist AI, for example. Whether or not we choose to replace human jobs is a different question, but I usually think about this in the frame of... in 2012 we had AlexNet, deep learning revolution. You know, here we are 10 years later, we have GPT-3, like we're saying, which has some weirdly emergent capabilities: new language translation, and coding and some math and stuff, but not very well. And then we have a bunch of investment poured into this right now, so lots of young people, lots of money, lots of compute, lots of... and if we have algorithmic improvements at the same rate, and hardware improvements at the same rate, like optical or quantum, then maybe we reach very general systems or maybe we hit a ceiling and need to do a paradigm shift. But my general question is, regardless of how we get there, do you think we'll ever have these very general AI systems, like a CEO or a scientist AI? And if so, when?

[pause]
\*\*0:10:07.5 Vael:\*\* \...Oh, you\'re muted. I think. Oh, no. Oh, no. Can\'t hear you. I don\'t think anything\'s changed on my end. Okay.
\*\*0:10:25.2 Interviewee:\*\* Hello?
\*\*0:10:26.2 Vael:\*\* Yeah. Cool.
\*\*0:10:27.2 Interviewee:\*\* Okay. I think my headphones may have done something wacky. I don\'t know, I would be extremely surprised if the answer\... like I know that there are people who say that the answer is less than 10 years, and I think that\'s absurd. I would be surprised if the answer is less than 50 years, and I don\'t feel particularly confident that that will ever happen.
\*\*0:10:47.1 Vael:\*\* Okay. So it may or may not happen. Regardless, it\'s going to be longer than 50 years. Is that right?
\*\*0:10:54.1 Interviewee:\*\* Hello?
\*\*0:10:55.2 Vael:\*\* Hello, hello, hello?
\*\*0:11:00.7 Interviewee:\*\* Yes.
\*\*0:11:00.8 Vael:\*\* Okay, cool. So my question was like, all right, you don\'t know whether or not it will happen. Regardless, it will take longer than 50 years. Is that a summary?
\*\*0:11:07.3 Interviewee:\*\* Mm-hmm.
\*\*0:11:09.0 Vael:\*\* Yeah. Okay, cool. So one of my question is like, why wouldn\'t it eventually happen? I kind of like believe in the power of human ingenuity, and people following economic incentives such that\... These things are just really quite useful, or systems that can do human tasks are generally quite useful, and so I sort of think we\'ll get there eventually, unless we have a catastrophe in some way. What do you think about that?
\[pause\]
\*\*0:11:45.0 Interviewee:\*\* It\'s going to be extremely difficult to develop something that is sufficiently reliable and has an understanding of the world that is sufficiently grounded in the actual world without doing some kind of mimicking of human experiential learning. So I\'m thinking here reinforcement learning in robots that actually move around the world.
\*\*0:12:13.0 Vael:\*\* Yeah.
\*\*0:12:13.9 Interviewee:\*\* I think without something like that, it\'s going to be extremely difficult to tether the knowledge and the symbolic manipulation power that the AIs have to the actual contents of the world.
\*\*0:12:29.5 Vael:\*\* Yep.
\*\*0:12:29.9 Interviewee:\*\* And there are a lot of extremely, extremely difficult challenges in making that happen. Right now, cutting-edge RL techniques are many orders of magnitude\... Require many orders of magnitude too much data to really train in this fashion. RL is most successful when it\'s being used in like a chess context, where you\'re playing against yourself, and you can do this in parallel, and that you can\... When you can do this over and over and over again. And if you think about an actual robot crossing the street, if an attempt takes 10 seconds, and I think especially early in the learning process, that\'s an unreasonably small amount of time to estimate. But if an attempt takes 10 seconds and\... Let me pull out the calculator for a second.
\*\*0:13:28.1 Interviewee:\*\* And you need one million attempts\... then that would take you\... about a third of a year to do. And I think that both of those numbers are wrong. And I think the number of attempts is orders of magnitude too small. There\'s very, very little that we can learn via reinforcement learning in a mere one million attempts. And this is just one task. If you want something that can actually move around and interact with the world, even if you\'re using these highly optimistic, currently impractical estimates, you can\'t take four or five months to learn how to cross the street. If that\'s your paradigm, you\'re never going to be able to build\-- you\'re never going to get to stuff like managing a company. \[chuckle\]
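As an aside, the interviewee\'s back-of-the-envelope figure checks out; a minimal sketch in Python, using the conversation\'s assumed (and, as the speaker notes, optimistic) numbers of 10 seconds per attempt and one million attempts:

```python
# Sanity check of the wall-clock estimate from the conversation.
# Assumed figures (hypothetical, taken from the dialogue):
# 10 seconds per real-world attempt, one million attempts per task.
seconds_per_attempt = 10
attempts = 1_000_000

total_seconds = seconds_per_attempt * attempts
seconds_per_year = 365 * 24 * 60 * 60

years = total_seconds / seconds_per_year
print(f"{years:.2f} years")  # about 0.32 years, i.e. roughly a third of a year
```

At a thousand such tasks learned serially, real-world interaction alone would take centuries, which is the scalability point being made.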
\*\*0:14:38.0 Vael:\*\* Yeah. That makes sense. Yeah, I think\-- I think\... this makes sense to me, and I\'m like, \"Wow, our current state systems are really not very good.\" But also, I think I often view this from a lens of pretty far back. So I\'m like, 10000 years ago, humans were going around and the world was basically the same from one generation to the next, and you could expect things to be similar. And now we\'ve had the agriculture revolution and industrial revolution, in the past couple of hundred years, we have done\... We\'re kind of on an exponential curve in terms of GDP, for example, and I would expect that\... And we\'ve only been working on AI for, I don\'t know, less than 100 years. And we have\... We now have something like GPT-3, which sounds sort of reasonable, if you\'re just looking at it, and of course it\'s not very\... It\'s not, like, grounded, which is a problem.
\*\*0:15:28.3 Vael:\*\* But I sort of just expect that if we spend another\... I don\'t know, you could spend hundreds of years working on this thing, and if it continues to be economically incentivized\... This is kind of how human progress works. I just kind of expect us to advance up the tech tree to solve the software improvements, to solve the hardware improvements. Or new paradigms maybe. Even at the worst case, I guess we advance enough in neuroscience and scanning technologies to just scan human brains and make embodied agents that way or something. I just expect us to get to some capabilities like this eventually.
\*\*0:16:03.5 Interviewee:\*\* In my mind, really the fact that there\'s only so fast we can move around in the real world is a huge constraint. Even if you can learn extremely complicated and abstract things embedded in the real world as an actual robot, take my crossing street example, even if you could\... doing an attempt at\... So even if you could learn pretty much any task in a thousand iterations, some tasks take a very long time to develop. Humans don\'t learn to be CEOs of companies very quickly, and it doesn\'t seem like it\'s very shortcut-able to me. I also don\'t think CEOs of companies is perhaps the best example, but let\'s say\...
\*\*0:16:58.1 Interviewee:\*\* Let\'s say you wanna train a robot to operate a McDonalds. That\'s a very large amount of destroyed meat that you need to buy, it\'s a very large amount of time and materials to even set up the apparatus in which you could actually train a robot to perform that task. And you\'re talking about economic incentives, where is the economic incentive to burning million patties to get to the point where your robot can flip one over successfully? When we\'re talking about moving around and interacting in the real world, those interactions have costs that are financial in addition to being time-consuming. If we want to train an AI to\... Via reinforcement learning technique, which is certainly a caveat that have to add to a lot of what I\'m saying. But if we wanna train a robot to drive a car via a reinforcement learning-like technique, at some point you need to put it behind the wheel of a car and let it drive 100, 1000 cars. And you\'re going to destroy a lot of cars doing that, and you\'re probably going to kill people. So that\'s a very large disincentivizing cost.
\*\*0:18:30.1 Vael:\*\* Okay. Alright. So the idea is like\... if we\'re doing robotics, then we need to\... and the training paradigm is not, like, humans where you can kind of sit them down, and\... Humans don\'t actually crash cars, usually\... I mean, sometimes. Teenage humans crash cars sometimes. But in their training process, they don\'t usually require that many trials to learn, and they can do so kind of quickly. So I\'m like, I don\'t know. Do we expect algorithms at some point to require much less training data than current ones do? Because current ones require a huge amount of training data, but I kind of imagine we\'ll get more efficient systems as\... More efficient per data as we go along.
\*\*0:19:24.9 Interviewee:\*\* Are you saying that you think that you can sit down and explain to someone how to drive a car and they can drive it without crashing?
\*\*0:19:30.9 Vael:\*\* I think, that.. We have\... I think that if we take a human, and I\'m like, \"All right, human, I\'m going to\... I want you to learn how to drive this car. And I\'m going to sit next to you. And I\'m going to tell you what to do and what not to do, and you\'re going to drive it.\" I think they can, indeed, after practicing some period of time, which for humans, it\'s like hours. It\'s on the order of tens of hours, then they can basically sit there and not crash a car. And I kind of expect similar paradigms eventually for AI systems.
\*\*0:20:04.4 Interviewee:\*\* That seems extremely non scalable.
\*\*0:20:11.8 Vael:\*\* Uh\... Okay. You\'re like, look, if it takes tens of hours to train every AI system?
\*\*0:20:17.6 Interviewee:\*\* No, I\'m thinking mostly about the human sitting next to them giving them constant feedback actually.
\*\*0:20:28.5 Vael:\*\* But the nice thing about AI is you can copy them as soon as one person spends that many hours. You can just take that, takes the thing that\'s\-- like its new neural net, pass it onto the next one.
\[pause\]
\*\*0:20:48.0 Interviewee:\*\* \...Maybe.
\*\*0:20:50.1 Vael:\*\* And I don\'t think this has to happen anytime soon. But I do think eventually given that\... I don\'t know, I can\'t imagine humans being like, \"All right, cool. We\'re efficient enough. Let\'s just stop now. We\'ve got like GPT-3. Seems good. Or GPT-5, let\'s just stop here.\"
\*\*0:21:08.4 Interviewee:\*\* So nobody has ever taken two different robots, trained one of them in the real world to perform a task, and then transferred the algorithm over and allowed the other one to perform the same task as successfully, as far as I\'m aware.
\*\*0:21:23.6 Vael:\*\* Yep. I totally believe today\'s systems are not very good.
\*\*0:21:31.2 Interviewee:\*\* It is, I think, I think that.. Anything we can really say about this is inherently extremely speculative. I\'m certainly not saying it could never happen. I\'m just\... Sorry. I\'m certainly not saying it can\'t happen. I\'m just saying it could never happen. There we go.
\*\*0:21:45.0 Vael:\*\* Okay. All right. Okay. That makes sense. How likely do you think it is that we\'ll get very capable systems sometime ever in the future?
\*\*0:21:53.5 Interviewee:\*\* I have no idea.
\*\*0:21:56.5 Vael:\*\* \...Well, you have some idea because you know that it\... Well, okay. You said that it can\'t\... You\'re like, it\'s higher than zero.
\*\*0:22:04.9 Interviewee:\*\* Yes.
\*\*0:22:05.9 Vael:\*\* Yes. And you don\'t sound like you think it definitely will happen, so it\'s less than 100.
\*\*0:22:12.6 Interviewee:\*\* Yes.
\*\*0:22:13.8 Vael:\*\* Okay. And it\'s anywhere in that scale? I mean, slightly higher than zero and slightly less than 100.
\*\*0:22:22.5 Interviewee:\*\* That sounds like an accurate description of my current level of uncertainty.
\*\*0:22:26.8 Vael:\*\* Interesting. Man. Is it.. Hard\-- I mean like how\-- You do have predictions of the future though, for the near future, presumably, and then it just like tapers off?
\*\*0:22:36.7 Interviewee:\*\* Mm-hmm.
\*\*0:22:38.2 Vael:\*\* Okay. And anything\... And you say definitely not 10 years, but after. And then like 50 years. So like 50 years out, you\'re\... It starts going from approximately zero to approximately 100?
\*\*0:22:56.6 Interviewee:\*\* Um\... I think it is unlikely to happen in the next 50 years.
\*\*0:23:00.0 Vael:\*\* Okay.
\*\*0:23:05.7 Interviewee:\*\* I would assign a less than 25% probability to that. But I don\'t think I can deduce anything about my expectation at 100 years based on that information. Other than it\... yeah.
\*\*0:23:21.6 Vael:\*\* Great. Thanks. All right. I think that\'s good enough for me to move on to my next question.
\*\*0:23:28.6 Vael:\*\* So my next question is thinking about these highly intelligent systems in general, which we\'re positing maybe will happen sometime. And so say we have this sort of CEO AI through, I don\'t know, maybe hundreds of years in the future or whatever. I\'m like, \"Alright, CEO AI. I want you to maximize profits and try not to run out of money and try not to exploit people and try to avoid side effects.\" And currently, obviously this would be very technically challenging for many reasons. But one of the reasons is that we currently don\'t have a good way of taking human values and preferences and goals and stuff, and putting them in mathematical formulations that AI can optimize over. And I worry that this actually will continue to be a problem in the future as well. Maybe even after we solve the technical problem of trying to get an AI that is at all capable. So what do you think of the argument, \"highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\"
\*\*0:24:26.0 Interviewee:\*\* I mean I think that the statement that highly intelligent systems will fail to optimize what their designers intend them to is a slam dunk. Both human children and current AIs do not do that, so I don\'t see any particular reason to think we will\-- that something that\'s like, in some sense in between those, we\'ll have a whole lot more success with.
\*\*0:24:50.4 Vael:\*\* Interesting. Okay, cool.
\*\*0:24:58.5 Interviewee:\*\* Yeah. Did you turn out exactly the way your parents wanted you to? \[laughter\] I didn\'t. I think the overwhelming majority of people don\'t, and that\'s not a flaw on their part. But\... yeah.
\*\*0:25:15.2 Vael:\*\* All right. Yep. Yeah, certainly there\'s some alignment problems with parents and children. Within human-humans even. And then I expect\-- I kind of expect the human-AI one to be even worse? My intuition is that if you\'re having an AI that\'s optimizing over reality in some sense, that it\'s going to end up in alien parts of the space\-- alien to humans, because it\'s just optimizing over a really large space. Whereas humans trying to align humans have at least the same kind of evolutionary prior on each other. I don\'t know. Do you also share that?
\*\*0:25:47.7 Interviewee:\*\* I\'m not sure. I think that you\'re going to have to get a lot of implicit alignment to end up in a place where you\'re able to train these things to be so intelligent and competent in the first place.
\*\*0:26:07.6 Vael:\*\* That makes sense to me. Kind of like\--
\*\*0:26:09.8 Interviewee:\*\* Yeah. What percentage of the way that gets you there is a very important and totally unknown question. But I don\'t think that the value system of one of these systems is going to be particularly comparable to like, model-less RL, where they\'re trying to optimize over everything.
\*\*0:26:33.4 Vael:\*\* Could you break that one down for me?
\*\*0:26:38.2 Interviewee:\*\* In what way?
\*\*0:26:41.6 Vael:\*\* I didn\'t\... I don\'t quite understand the statement. So the value system will not be the same that it is in model-less RL. I don\'t have a super good idea of what model-less RL is and how that compares to human systems or human-machine\--
\*\*0:26:56.1 Interviewee:\*\* Okay. So model-less RL is a reinforcement learning paradigm in which you are basically trying to learn everything from the ground up, via pure interaction.
\*\*0:27:06.2 Vael:\*\* Okay.
\*\*0:27:07.1 Interviewee:\*\* So if you\'re thinking of a game-playing agent, this is typically an agent that you\'re not even programming with the rules. It learns what the rules are because it walks into a wall and finds that it can\'t walk further in that direction. That\'s the example in my head of something that\'s optimizing over all possible outcomes currently. \...Sorry, I lost the train of the question.
\*\*0:27:39.7 Vael:\*\* I was like: how does that relate to human value systems?
\*\*0:27:46.4 Interviewee:\*\* I think that the work that we will have to do to train something to move around and interact in the world and perform these highly subjective and highly complex tasks that require close grounding in the facts of the world will implicitly narrow down the search space. Significantly.
\*\*0:28:10.1 Vael:\*\* Okay. Yeah\--
\*\*0:28:11.6 Interviewee:\*\* I do think that there\'s a\... Yeah.
\[pause\]
\*\*0:28:25.8 Interviewee:\*\* Yeah.
\*\*0:28:26.3 Vael:\*\* Yeah. Yeah, I often think of this in terms of like, you know how the recommender systems are pretty close to what humans want, but they\'re also maybe addictive and kind of bad and optimizing for something a little bit different than human fulfillment or something. People weren\'t trying to maximize them for human fulfillment per se. But yeah, I like\-- like that sort-of off alignment is often something I think about. Alright.
So this next question is back to the CEO AI, so imagine that the CEO AI is good at multi-step planning and it has a model of itself in the world, so it\'s modeling other people modeling it, \'cause that seems pretty important in order for it to do anything. And it\'s making its plans for the future, and it notices that some of its plans fail because the humans shut it down. And it\'s built into this AI that it needs human approval for stuff \'cause it seems like a basic safety mechanism, and the humans are asking for a one-page memo to describe its decision.
\*\*0:29:21.4 Vael:\*\* So it writes this one-page memo, and it leaves out some information because that would reduce the likelihood of the human shutting it down, which would increase the likelihood of it being able to achieve the goal, which is like, profit plus the other constraints that I mentioned. So in this case, we\'re not building in self-preservation to the AI itself, it\'s just, self-preservation is arising as a function\... \[as a\] instrumental incentive of an agent trying to optimize any sort of goal. So what do you think of the argument, \"highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous?\"
\*\*0:30:00.4 Interviewee:\*\* It seems likely correct.
\*\*0:30:02.3 Vael:\*\* Interesting. Okay. \[chuckle\] \...I\'m not excited about that answer, \'cause other instrumental incentives are acquiring resources and power and influence, and then also not wanting\... Having a system that\'s optimizing against humans seems like a very bad idea in general, which makes me worried about the future of AI. If the thing that we\'re going to build is eventually by default, maybe not going to want to be corrected by humans if we get the optimization function wrong the first time.
\*\*0:30:32.5 Interviewee:\*\* Yeah.
\*\*0:30:36.2 Vael:\*\* \[laughter\] Okay. Have you thought about this one before?
\*\*0:30:38.6 Interviewee:\*\* Yes.
\*\*0:30:39.4 Vael:\*\* Yeah. Cool. Have you heard of AI alignment?
\*\*0:30:42.2 Interviewee:\*\* Yes.
\*\*0:30:43.2 Vael:\*\* Yeah. And AI safety and all the rest of it?
\*\*0:30:45.6 Interviewee:\*\* Mm-hmm.
\*\*0:30:46.0 Vael:\*\* Yeah. How do you orient towards it?
\*\*0:30:49.8 Interviewee:\*\* I think that most people who work in it are silly. And don\'t take the right thing seriously.
\*\*0:30:57.9 Vael:\*\* Mm. What should they take seriously? And what don\'t they?
\*\*0:31:02.9 Interviewee:\*\* I know a lot of people who are afraid that future research along the lines of GPT-3 is going to rapidly and unexpectedly produce human-like intelligence in artificial systems. I would even say that that\'s a common, if not widespread attitude. There are pretty basic kinds of experiments that we\'ll need to do to test the plausibility of this hypothesis, that nobody seems really interested in doing.
\*\*0:31:48.5 Vael:\*\* Hm. Seems like someone should do this?
\*\*0:31:51.2 Interviewee:\*\* Yeah. When I talk to most people who describe themselves as alignment researchers, and I try to put myself in their shoes in terms of beliefs about how agents work and what the future is likely to look like. The things I see myself experimenting with and working on, are things that nobody is working on. And that really confuses me. I don\'t understand\... So here\'s an interesting question: how much experience do you have actually using GPT-3 or a similar system?
\*\*0:32:31.9 Vael:\*\* Yeah, not hardly at all. None. So I\'ve seen examples, but haven\'t interacted with it myself.
\*\*0:32:38.4 Interviewee:\*\* Okay, um\... Would you like to?
\*\*0:32:47.2 Vael:\*\* Uh\... Sure? I mean, I guess I\'ve messed around with the Dungeon AI one, but\... Does seem interesting.
\*\*0:32:57.1 Interviewee:\*\* Hm. \...So my experience is that\... A widespread observation is that they don\'t seem to have a worldview or a perspective that they\-- are expressing words, so much as many of them. Some people like to use the term multiversal. It\'s\... kind of the way I think about it is that there are many people inside of GPT-3 and each time you talk to it, a different one potentially can talk to you.
\*\*0:33:42.1 Vael:\*\* Yep.
\*\*0:33:43.8 Interviewee:\*\* This seems to be an inherent property of the way that the model was trained and the way that all language models are currently being trained. So a pressingly important question is, to what extent does this interfere with\... Let\'s, to make language easier, call it one of its personalities. Let\'s say one of its personalities wants to do something in the world: kill all the humans or even something mundane. To what extent does the fact that it\'s not the only personality interfere with its ability to create and execute plans?
\*\*0:34:28.2 Vael:\*\* \...Ah\... Current systems seem to not\... Well, okay. It depends on how we\'re training it, because GPT-3 is confusing. But AlphaGo seems to kinda just be one thing rather than a bunch of things in it. And so it doesn\'t seem like it has conflicts there?
\*\*0:34:46.1 Interviewee:\*\* I would generally agree with that.
\*\*0:34:48.2 Vael:\*\* Okay. But we\'re talking about scaling up natural language systems and they don\'t\... And they don\'t\... They have lots of different types of responses and don\'t\... on one personality. Uh\... Well, it seems like you could train it on one personality if you wanted to, right? If you had enough data for that, which we don\'t. But if we did. And then I wouldn\'t really worry about it having different agents in it.
\*\*0:35:17.6 Interviewee:\*\* That\'s a very, very, very, very, very, very, very, very large amount of text.
\*\*0:35:23.8 Vael:\*\* Yeah. \[Interviewee laughter\]
\*\*0:35:25.0 Vael:\*\* Yeah, yeah that\'s right!
\*\*0:35:26.5 Interviewee:\*\* Do you any\-- do you have any scope of understanding for how much text that is?
\*\*0:35:32.8 Vael:\*\* Yeah, I\'m actually thinking something like pre-training on the whole internet, and then post-train on a single person, which already doesn\'t work that well. And so then it wouldn\'t actually help if that pre-training procedure is still on\... Still on the whole thing. Um, okay\--
\*\*0:35:48.4 Interviewee:\*\* So a page of written text is about 2 kilobytes in English. And these models are typically trained for between one and five terabytes, so no human has come anywhere close to putting out five billion pages of total text.
\*\*0:36:13.7 Vael:\*\* Yeah.
\*\*0:36:18.1 Interviewee:\*\* It\'s so astronomically far beyond what any human would actually ever write, that it doesn\'t seem very plausible unless something fundamentally changes about the way humans live their lives.
\*\*0:36:30.8 Vael:\*\* Or about different training procedures. But like\--
\*\*0:36:33.8 Interviewee:\*\* Yeah, yeah, yeah, yeah. But like the idea that one could do something similar to current pre-training procedures that is meaningfully restricted to even say a 100 people that have been pre-screened for being similar to each other. 100 people are also not going to put out five billion pages of text.
\*\*0:36:49.6 Vael:\*\* Yeah.
\*\*0:36:51.6 Interviewee:\*\* \[laughter\] It\'s just so much data\...
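For scale, the interviewee\'s figures can be checked directly; a rough sketch assuming roughly 2 KB per page of English text and a 1--5 terabyte training corpus, the numbers used in the conversation:

```python
# Back-of-the-envelope check of the corpus-size claim from the conversation.
# Assumed figures (from the dialogue): ~2 KB per page of English text,
# training corpora of roughly 1 to 5 terabytes.
bytes_per_page = 2 * 1024        # ~2 KB per page
terabyte = 1024**4

pages_low = (1 * terabyte) / bytes_per_page
pages_high = (5 * terabyte) / bytes_per_page
print(f"{pages_low:.2e} to {pages_high:.2e} pages")  # ~5.4e8 to ~2.7e9 pages
```

Even at the low end that is over half a billion pages; no individual, or small pre-screened group of people, comes anywhere near that output.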
\*\*0:36:54.1 Vael:\*\* Yeah. Yeah, I don\'t know how efficient systems will be in the future, so\... Yeah. Let\'s take it as\... Yeah, sure. But they\'re going to have multiple personalities in them, in that they are trained on the internet.
\*\*0:37:05.1 Interviewee:\*\* Mm-hmm.
\*\*0:37:06.1 Vael:\*\* And then you\'re like, \"Okay. Does that mean that\... \" And then there\'s a frame here that is being taken where we have different\... Something like arguing? Or like different agents inside the same agent or something? And so then you\'re like, \"Well, has anyone considered that? Have we tested something like that?\"
\*\*0:37:26.9 Interviewee:\*\* Yeah, that\'s kind of close to what I\'m saying.
\*\*0:37:29.6 Vael:\*\* Hmm.
\*\*0:37:31.9 Interviewee:\*\* So, to take your CEO example. In order for it to be successful, it needs to\... at no point\... There\'s certain information it needs to consistently hide from humans. Which means that every time it goes to generate text, it needs to choose to not share that information.
\*\*0:37:47.1 Vael:\*\* Yeah.
\*\*0:37:48.1 Interviewee:\*\* So if the system looks even vaguely like GPT-3, it seems to me like it will not be able to always act with that\... generate text with that plan. And so there\'s a significant risk in it compromising its own ability to keep the information hidden.
\*\*0:38:13.7 Vael:\*\* Okay.
\*\*0:38:13.7 Interviewee:\*\* Alternatively, even if it\'s\... That\'s a more direct way that they can interfere with each other. But even less directly, if I have somewhere I want to go and I go drive the car for a day, and then you have somewhere you want to go and you drive the same car for a day, and we trade off control, there are things I\'m going to want to do that I have trouble doing because I only control the body and the car at the end of the day.
\*\*0:38:40.8 Vael:\*\* Quick question. Are you expecting that AI systems or multi-agent properties are more\... have more internal conflict than humans do? Which can also be described in some sense as having multiple agents inside of them?
\*\*0:38:54.7 Interviewee:\*\* Yes.
\*\*0:38:55.7 Vael:\*\* Okay.
\*\*0:38:57.4 Interviewee:\*\* I think that anyone whose worldview is as fractured and inconsistent as GPT-3 probably has a clinical diagnosis associated with that fact.
\*\*0:39:08.8 Vael:\*\* Yeah. And you don\'t think that these will get more targeted in the future as we direct language models to do specific types of tasks, something like math?
\*\*0:39:24.2 Interviewee:\*\* I think that achieving, even\... achieving 95, 99%, let\'s say, coherency between generations, so if you imagine every time the model is used to generate text, there\'s some worldview it\'s using to generate that text, and you want each time those different worldviews used to be consistent with each other. Even achieving 99% consistency, I\'m not asking for 100% consistency but 95, 99 seems like something necessary for it to make multi-year long-term plans.
\*\*0:40:10.7 Vael:\*\* That seems right.
\*\*0:40:13.5 Interviewee:\*\* This is exceptionally difficult and there are very likely fundamental limitations to the extent to which a system can achieve that level of coherence in the current training paradigms. And\...
\*\*0:40:31.7 Vael:\*\* Seems plausible.
\*\*0:40:34.7 Interviewee:\*\* That would be very good news to people who are afraid that GPT-7 is going to take over the world.
\*\*0:40:43.4 Vael:\*\* Yeah, yeah. Okay, alright, \'cause I\'m like, I don\'t know, I feel I\'m kind of worried about any future paradigm shift. But current people definitely are worried about GPT-3 specific or GPT systems, and the current paradigms, specifically.
\*\*0:40:56.3 Interviewee:\*\* I\'ve spoken to these people at length and I\'ve talked to them about what they\'re afraid of and stuff. \...There seem to be a significant number of people in the alignment community who\... If you could put together a convincing argument that the current pre-training methodology, as in, the fact that it\'s trained on a widely crowdsourced text generation source, instills some kind of fundamental worldview inconsistency that is exceptionally difficult if even possible to resolve, would alleviate a lot of the anxiety. It would actively make these people happier and less afraid about the world.
\*\*0:41:38.7 Vael:\*\* That seems true. I think if you can\... If there\'s a fundamental limit on capabilities, just like, of AI, then that\'s good for safety because then you don\'t get super capable systems. And I\'m like, \"Yeah, that makes sense to me.\" And do you think that this capability issue is going to be something like\... coherence of generated text. And that might be a technically fundamental limitation. Cool\--
\*\*0:42:06.1 Interviewee:\*\* I know people who have the tools and resources and time to test, to run experiments on things like this, who I\'ve even directly proposed this to. And they\'ve gone, \"Oh, that\'s interesting.\" And then not done it.
\*\*0:42:22.3 Vael:\*\* Yeah, my intuition is that they don\'t\... I think you have to have a pretty strong prior on this particular thing being the thing that is going to have like a fundamental limit in terms of capabilities in order to want to do this compared to other things, but\... That makes sense, though. It sounds like you do have a\... You do think this particular problem is pretty important. And pretty hard to\... Very, very difficult.
\*\*0:42:45.4 Interviewee:\*\* I think that this coherency problem is a serious issue for any system that is GPT-3-like, in the sense that it\'s trained to produce tokens or reasoning or symbols or whatever you want to say, but that produce outputs that are being fit to mimic a generative distribution\-- sorry, it\'s being generatively trained to produce outputs that mimic a crowdsourced human distribution.
\*\*0:43:19.8 Vael:\*\* Yeah. Cool, awesome. Yup, makes sense to me as a worldview, is pretty interesting. I haven\'t actually heard about that problem\-- of people thinking that that problem specifically, the coherency problem, is one that\'s going to fundamentally limit capabilities. Seems plausible, seems like many other things might end up being the limit as well. And then you\'re like, \"Well, people should like\... If this is the important thing, then people should actually test it. And then they\'ll feel better. Because they\'ll believe that these systems won\'t be as capable and then less likely to destroy the world.\" Yeah, this makes sense to me.
\*\*0:44:02.4 Interviewee:\*\* Yeah. Another aspect of this is that research into the functional limitations, in a sense, is extremely difficult to convert into capabilities research, which is something that a lot of people say that they\'re highly concerned about. And that they don\'t want to do many types of research because\... There was that Nature article where they were creating a medical AI and they were like, \"Let\'s put a negative sign in front of the utility function.\" And it started designing neurotoxins. Do you know what I\'m referring to?
\*\*0:44:35.2 Vael:\*\* No, but that sounds bad.
\*\*0:44:36.8 Interviewee:\*\* Oh yeah, no, it\'s just\... That was the Nature article. It was like \"we were synthesizing proteins to cure diseases, and we stuck a negative sign in front of the utility function\"\-- (Vael: Oh, was that last week or something?) Yeah.
\*\*0:44:46.2 Vael:\*\* Yeah, okay, so I did see that, yeah. Huh.
\*\*0:44:48.1 Interviewee:\*\* Yeah. Gotta love humans. Gotta love humans.
\[chuckle\]
\*\*0:45:02.6 Vael:\*\* \...Awesome. Ah, I think I\'ll.. Maybe.. Hm. Okay. So. What would\... make you want to work on alignment research as you think it can be done?
\[pause\]
\*\*0:45:25.2 Interviewee:\*\* That\'s an interesting question. \[pause\] I guess the main thing would be being convinced of the urgency of the problem.
\*\*0:45:50.4 Vael:\*\* That makes sense. Very logical.
\*\*0:45:56.4 Interviewee:\*\* To be blunt, I don\'t tend to get along with the kind of people who work in that sphere, and so that\'s also disincentivizing and discouraging.
\*\*0:46:12.2 Vael:\*\* Yeah, that makes sense. I\'ve heard that from at least one other person. Yeah. Alright, so timelines and also nicer research environment. Makes sense.
\*\*0:46:27.4 Interviewee:\*\* You could even say nicer researchers.
\*\*0:46:31.0 Vael:\*\* Yep. Nicer researchers. Apologies? \...Yeah. Cool. And then my last question is, have you changed your mind on anything and during this interview, and how was this interview for you?
\*\*0:46:45.9 Interviewee:\*\* The interview was fine for me. I don\'t think I\'ve changed my mind about anything.
\*\*0:46:56.0 Vael:\*\* Great. Alright, well, thank you so much for being willing to do this. I definitely\... Yeah. No. You have a very coherent kind of worldview thing that\'s\... That I\... Yeah. I appreciate having the ability to understand or have access to or listen to, rather.
\*\*0:47:11.7 Interviewee:\*\* My pleasure.
\*\*0:47:13.5 Vael:\*\* Alright. I will send the money your way right after this, and thanks so much.
\*\*0:47:17.2 Interviewee:\*\* Have a good day.
\*\*0:47:17.6 Vael:\*\* You too.
\*\*0:47:17.8 Interviewee:\*\* Bye.
|
dbeba29b-0fcc-4c08-9e94-8627d4148354
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How can we lobby to get a vaccine distributed faster?
Fellow Rats,
Tyler Cowen has argued that we can release vaccines now without compromising phase III trials, through randomization. We could thus benefit from the expected value of inoculating more people earlier and of getting an answer sooner. He has proposed two mechanisms.
1. Randomly distribute treatments and placebos to at risk groups like bus drivers. This seems like a great idea, since bus drivers are in unusual danger and need.
2. Use a "tie-breaker" design, which is a hybrid of a regression discontinuity and a randomized controlled trial. Basically, you want to treat some subset of the population but also want an impact assessment, so you randomize only near the cutoff for service. We could thus vaccinate some of the most at-risk persons and randomize the liminal cases, achieving an optimal tradeoff between present benefits and information. The abstract of Owen and Varian's article is below.
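The tie-breaker assignment can be sketched in a few lines. This is purely our illustration: the risk score, cutoff, and band width are made up, and Owen and Varian's actual design differs in detail.

```python
import random

def assign(risk, cutoff=0.8, band=0.05, rng=random):
    """Treat clearly high-risk people; randomize only the liminal cases."""
    if risk >= cutoff + band:
        return "vaccine"                 # most at-risk: always vaccinate
    if risk <= cutoff - band:
        return "none"                    # clearly below the cutoff: no treatment
    return rng.choice(["vaccine", "placebo"])  # near the cutoff: randomized

print(assign(0.95), assign(0.5))
```

Everyone above the band is served, so most of the present benefit is kept, while the randomized band still yields an unbiased local estimate of the vaccine's effect.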
For some reason, the US is currently implementing neither idea. Our approach is to stockpile lots of vaccines and wait for a green light from a single conventional information-only trial. Cowen is mostly being ignored.
Lobbying
We can lobby the government and Pfizer to change this. The US gov has plenty of avenues for lobbying to force discussion on these ideas. Here are a few, off the top of my head.
* Tweet at public health experts in the style of 1 day sooner
* Call our senators, complain about FDA regulations
* Call our representatives, complain
* Go to our representative's office and demand to speak to the staff. Show them the paper. Demand a meeting.
* Call local television stations. Read the paper in detail and prepare a speech. Build publicity
* Tweet at Donald Trump directly
* Call the FDA
* Call think tanks affiliated with party leadership
I am uncertain which body needs to approve such a policy change. It could be mandatable from the White House, require legislation, be mandatable from the FDA, or be entirely under the pharma companies' control. The easies
|
ffb28a9e-6328-468c-ac27-4abc7625ce20
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Consequences of the Non-Existence of Perfect Theoretical Rationality
Caveats: Dependency (Assumes truth of the arguments against perfect theoretical rationality made in the previous post), Controversial Definition (perfect rationality as utility maximisation, see previous thread)
This article is a follow-up to The Number Choosing Game: Against the existence of perfect theoretical rationality. It discusses the consequences of The Number Choosing Game, which is, roughly, that you name the decimal representation of any number and gain that much utility. It takes place in a theoretical world with no real-world limitations on how large a number you can name, and no costs. We can also assume that this game takes place outside of regular time, so there is no opportunity cost. Needless to say, this was all rather controversial.
Update: Originally I was trying to separate the consequences from the arguments, but it seems that this blog post slipped away from it.
What does this actually mean for the real world?
This was one of the most asked questions in the previous thread. I will answer this, but first I want to explain why I was reluctant to answer. I agree that it is often good to tell people what the real world consequences are as this isn't always obvious. Someone may miss out on realising how important an idea is if this isn't explained to them. However, I wanted to fight against the idea that people should always be spoonfed the consequences of every argument. A rational agent should have some capacity to think for themselves - maybe I tell you that the consequences are X, but they are actually Y. I also see a great deal of value from discussing the truth of ideas separate from the practical consequences. Ideally, everyone would be blind to the practical consequences when they were first discussing the truth of an idea as it would lead to a reduction in motivated reasoning.
The consequences of this idea are in one sense quite modest. If perfect rationality doesn't exist in at least some circumstances(edited), then if
|
59c50488-33e8-4c18-83a7-5d3d23fe513a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What are we doing when we do mathematics?
I am a mathematician turned cognitive scientist/AI researcher. I wrote a short essay on the philosophy of mathematics that I think will be of interest to many people here. I combine platonist and formalist ideas and frame mathematics as an experimental activity, extremely similar to physics and other hard sciences with computation playing a parallel role to experimentation, and the creation of definitions and axiomatic frameworks being similar to the creation of physical theories like Newtonian gravity or Quantum Mechanics.
This perspective also stresses the non-formalistic aspects of mathematical research by gesturing at a strong division of intellectual activity into semantics and syntactics. Although this is by far the least developed part of the essay, it is the secret motivation behind me writing the essay in the first place, and my core interest. I am secretly using mathematics as a test case to probe the nature of general cognition.
|
a54ab71d-696b-46fc-b0af-8da58ca84ae5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
D&D.Sci (Easy Mode): On The Construction Of Impossible Structures [Evaluation and Ruleset]
This is a followup to the D&D.Sci post I made last Friday; if you haven’t already read it, you should do so now before spoiling yourself.
Below is an explanation of the rules used to generate the dataset (my full generation code is available here, in case you’re curious about details I omitted), and their strategic implications.
----------------------------------------
Ruleset
Impossibility
Impossibility is entirely decided by who a given architect apprenticed under. Fictional impossiblists Stamatin and Johnson invariably produce impossibility-producing architects; real-world impossiblists Penrose, Escher and Geisel always produce architects whose works just kind of look weird; the self-taught break Nature's laws 43% of the time.
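The rule above could be sketched as follows. This is a hypothetical reconstruction for illustration; the author's linked generation code is authoritative.

```python
import random

ALWAYS_IMPOSSIBLE = {"Stamatin", "Johnson"}         # fictional impossiblists
NEVER_IMPOSSIBLE = {"Penrose", "Escher", "Geisel"}  # real-world impossiblists

def is_impossible(mentor, rng=random):
    """Decide impossibility purely from who the architect apprenticed under."""
    if mentor in ALWAYS_IMPOSSIBLE:
        return True
    if mentor in NEVER_IMPOSSIBLE:
        return False
    return rng.random() < 0.43  # self-taught: break Nature's laws 43% of the time
```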
Cost
Cost is entirely decided by materials. In particular, every structure created using Nightmares is more expensive than every structure without them.
Strategy
The five architects who would guarantee an impossible structure are D, E, G, H, and K. Of these, G - and only G - intends to use Nightmares as construction material. The optimal choices given the Duke's stated preferences are therefore [D, E, H, K].
Reflections
This challenge was created with the intent of being egregiously easy and anomalously accessible. From the performances I saw, it looks like it fit the bill: congratulations to everyone who played publicly on reaching my intended solution. (Particular congratulations to aphyer for providing said solution within an hour of me posting the challenge, and to new player Lorxus for managing to correctly identify every named impossiblist on their way to a perfect answer.)
I hope this scenario managed to be fun despite - or because of? - the simplicity and lack of greater point. If you liked it, in lieu of correctly attributing thanks, please be slightly kinder to everyone you meet: after all, you have no way of being certain any given person wasn't my sponsor. (Conversely, if you disliked it, please wander the streets o
|
c8c8b4ea-b6f2-4f8f-824f-f5de3ac1142d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Conditional Importance in Toy Models of Superposition
Abstract
This post summarises my findings from investigating the effects of conditional importance on superposition, building on Anthropic's Toy Models of Superposition work. I have summarised my takeaways from the Toy Models of Superposition paper in this blog post and explained the key concepts necessary for following my work. The following assumes you are familiar with those ideas.
Why is this important for AI Safety?
I believe that the interpretability of AI systems is key to AI safety, since it could allow us to detect and mitigate misaligned behaviours. If our ability to understand advanced intelligences is limited to interpreting their output, we may not be able to find out when a model is being deceptive or has ulterior motives.
Understanding superposition appears likely to be one of the main stepping stones in the pursuit of interpretability, as it allows us to understand how features are represented, which is necessary for tackling circuits.
My Theory of Change for this post can be understood by these three goals:
* I hope to teach readers (ideally aspiring interpretability contributors) some intuitions for thinking about features, circuits, and superposition
* I hope to get newcomers that stumble across this post excited about working on superposition and interpretability more generally, and...
* (perhaps ambitiously) I'd like to contribute a small piece of research to the existing literature on superposition, though I accept that any ideas I explore in this post are unlikely to be novel to experts in the field
What Do I Mean by Conditional Importance?
To define Conditional Importance, we must first recap the toy model setup (but please read my blog post for a deeper dive). In the Toy Models of Superposition piece, the basic model that we consider projects higher-dimensional vectors into a lower-dimensional latent space, and then attempts to recover them:
$\mathbb{R}^n \to \mathbb{R}^m \to \mathbb{R}^n$

$X \mapsto WX \mapsto \mathrm{ReLU}(W^\top WX + b) =: X'$, where $W \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^n$, and $m < n$.
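The setup can be sketched in a few lines of numpy. The shapes follow the equations above; the random data and seed are our own illustrative choices, not the paper's.

```python
import numpy as np

n, m = 5, 2                          # n features, m < n latent dimensions
rng = np.random.default_rng(0)
W = rng.normal(size=(m, n))          # W in R^{m x n}
b = np.zeros(n)                      # b in R^n

def forward(X):
    """X -> WX -> ReLU(W^T W X + b): compress, then attempt to recover."""
    latent = W @ X                            # R^n -> R^m
    return np.maximum(W.T @ latent + b, 0.0)  # back to R^n

X = rng.random(n)
X_prime = forward(X)
print(X.shape, X_prime.shape)        # reconstruction lives back in R^n
```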
The loss is d
|
9f785c55-6563-42fa-a5f8-732bd67ef0d4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 87. The previous thread has passed 500 comments.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
> You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
>
> If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.
|
339f72f6-ed92-44bb-b731-8c10f831461e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Deconstructing overpopulation for life extensionists
|
22864544-b8d6-44ca-a6fb-3c827be4522d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Link] 2015 modafinil user survey
I am running, in collaboration with ModafinilCat, a survey of modafinil users asking about their experiences, side-effects, sourcing, efficacy, and demographics:
https://docs.google.com/forms/d/1ZNyGHl6vnHD62spZyHIqyvNM_Ts_82GvZQVdAr2LrGs/viewform
This is something of a followup to the LW surveys which find substantial modafinil use, and Yvain's 2014 nootropics survey. I hope the results will be useful; the legal questions should help reduce uncertainty there, and the genetics questions (assuming any responses) may be interesting too.
|
8a260861-0eb4-435f-ae55-c87c453f7efc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Networked Memory Hierarchy Overhang
tl;dr
It seems likely, based on the current capabilities of Transformers, that in the near future humans will engineer systems to maximize real-time communication bandwidth between AI and all physical systems (including humans) by saturating networked models of hierarchical size between data centers and edge devices with streaming inference computation. This could accelerate AI’s gathering of new training data through rich interactions with external systems, increase the effectiveness of existing alignment capabilities, and amplify human intelligence. Humans can build infrastructure to steer this capability toward net beneficial applications and alignment of existing systems.
In this post, I:
* Consider that AI communication bandwidth and predictive power may enter a recursive loop
* Reason about implications
* Present experimental results and analyze the Transformer's potential to scale AI communication bandwidth
* Discuss useful human actions to hedge against the potential risks of this overhang collapse
What's an example of high bandwidth communication?
Borrowing from Connor Leahy’s post, let’s start with three (unqualified) assumptions:
1. “All human behavior and thought is fully describable by some Turing-complete computation.
2. Computing hardware and algorithms will continue to improve until they hit some physical limit.
3. That limit is still very far away, and the human brain is nowhere near it.”
From these assumptions, we’ll construct an example scenario and work toward deriving the following statement: at some point in time, an AI can approximately simulate human behavior far enough into the future, and with enough accuracy, to mask network latency, resulting in imperceptible human-AI communication latency.
A human is in an empty room with a personal computer, a webcam, and a microphone. An AI is connected to a powerful data center across the internet and the not-so-powerful personal computer. The AI can listen to the webcam, microphone, and te
|
c2aa9a9a-11dd-4c34-b78a-fd77933fc61c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : First meetup in Stockholm
Discussion article for the meetup : First meetup in Stockholm
WHEN: 15 October 2015 03:00:00PM (+0000)
WHERE: Stockholms universitetsbibliotek, Universitetsvägen 14, 114 18 Stockholm, Sweden
To clarify, the time is 17:00 Swedish time.
I'm going to sit at a table for at least an hour and see how many people show up. There are meetups happening all around, so it ought to happen in Stockholm too.
When you enter the library, turn right. You'll find some tables -- mine has a big sheet of paper saying "LessWrong meet-up". Look for a guy with long hair.
The library is wondrous, and open until 21:30, so you could take the opportunity to explore after you meet us.
Discussion article for the meetup : First meetup in Stockholm
|
b0309fc9-0d95-4e56-a5b3-89e63774f37e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : London social meetup - possibly in a park
Discussion article for the meetup : London social meetup - possibly in a park
WHEN: 27 July 2014 02:00:00PM (+0100)
WHERE: Shakespeare's Head, Holborn, WC2B 6BG
The next LW London meetup will be on July 27th. Join us from 2pm to talk about the sorts of things that your other friends will look funny at you for talking about.
If the weather is nice, we'll head to Lincoln's Inn Fields, probably somewhere in the northwest quadrant. If not, we'll be in the usual Shakespeare's Head. If the weather is variable, we might move from one to the other - give me a call or text if you're not sure. My number is 07792009646.
Update: Yay park!
About London LessWrong:
We run this meetup almost every week; these days we tend to get in the region of 5-15 people in attendance. By default, meetups are just unstructured social discussion about whatever strikes our fancy: books we're reading, recent posts on LW/related blogs, logic puzzles, toilet usage statistics....
Sometimes we play The Resistance or other games. We usually finish around 7pm, give or take an hour, but people arrive and leave whenever suits them.
Related discussion happens on both our google group and our facebook group.
Discussion article for the meetup : London social meetup - possibly in a park
|
5bc80eb7-36e8-423b-90e6-032bed735fad
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Hiring Programmers in Academia
A professor I know through the EA community has been trying to hire a software engineer for their research, and they explained some privately about how this is tricky. The following are points I took away that might be useful to programmers considering job postings, people in academia looking to hire programmers, and people trying to understand why it's hard for EA projects to hire programmers despite there being a lot of them in EA.
* Market rates for good programmers are much higher than universities are used to paying for anything. For example, a Senior Software Engineer at Google earns ~$350k, while the most this professor's university would be willing to pay, full-time, would be ~$100k.
* That the money is coming from a grant doesn't resolve this: the university would still not let you pay a higher salary because you need to go through university HR and follow their approach to compensation.
* Universities will let you pay more when doing temporary arrangements, for example, as an employee on a six-month term or as an independent contractor, because you're no longer implicitly paying partly in job security. This might allow them to get up to $180k-$270k without benefits, though still below market rates.
* Depending on university rules, if you did want to hire someone full-time you might have to pick from a pool of internal candidates, even if those candidates weren't very good, or else make a strong case for why no internal candidate had the required skills. Since programmers earning university wages could be earning maybe three times as much if they were good enough to get hired in industry and wanted to switch, many of them are not great.
* They showed me a public job description that had a very long list of required skills, and seemed to be targeting a more experienced person than the work called for. Not knowing the situation, I initially thought that this was due to a misunderstanding about what kind of candidate they actually needed, but it's
|
a5452d2f-b5db-46cb-8d59-aad99f123f69
|
trentmkelly/LessWrong-43k
|
LessWrong
|
When will GPT-5 come out? Prediction markets vs. Extrapolation
So far, each generation of GPT has brought significant improvements. Thinking about the timeline for the next iteration, I noticed that there is a striking difference between extrapolating the past trend and what prediction markets seem to believe.
Forecasting short-term AI capabilities is important, as current trends might continue and lead us to bigger changes. Also, it is nice to have testable predictions that we can use to calibrate our predictions and figure out who is worth listening to.
Extrapolating from old GPTs
Rule 1 of forecasting: stop thinking too much and look at some historical data.
We don't know how long it took to develop each GPT, but we can look at how much time passed between each iteration.
So far, more and more time has passed between GPTs. It took 8 months from GPT-1 to GPT-2 and roughly twice as long to GPT-3. And then it took twice as long again to get to GPT-4! This means it took almost 3 years to get from GPT-3 to GPT-4.
| GPT Model | Release Date [1] | Months Passed Since Former Model |
|---|---|---|
| GPT-1 | 11.06.2018 | — |
| GPT-2 | 14.02.2019 | 8.16 |
| GPT-3 | 28.05.2020 | 15.43 |
| GPT-4 | 14.03.2023 | 33.55 |
Naively extrapolating from this, it should take until the beginning of 2029 to develop GPT-5.
shoutout to datapoint 4 for making this graph
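The naive extrapolation is easy to reproduce. The dates come from the table; the assumption that the next gap roughly doubles again is the post's trend, while the day-level arithmetic is ours (extrapolating the exact ratio instead of a clean factor of 2 pushes the estimate a few months further, into early 2029).

```python
from datetime import date, timedelta

# Release dates from the table above.
dates = [
    date(2018, 6, 11),   # GPT-1
    date(2019, 2, 14),   # GPT-2
    date(2020, 5, 28),   # GPT-3
    date(2023, 3, 14),   # GPT-4
]
gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
print([round(g / 30.44, 1) for g in gaps])   # gaps in months: roughly 8.1, 15.4, 33.5

# Naive trend: each gap has roughly doubled, so assume the next one doubles too.
gpt5_estimate = dates[-1] + timedelta(days=2 * gaps[-1])
print(gpt5_estimate)                          # lands in late 2028
```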
Now this does seem rather far in the future. So far the trend seems surprisingly consistent, but there aren’t that many datapoints here, and it seems like things can change somewhat fast in AI.
Prediction markets
Let’s look at what the prediction markets think:
Manifold Markets seems to think that there is a 62% chance that GPT-5 will come out before 2025.
Similarly, Metaculus puts the announcement date of GPT-5 at Sept 2024: [2]
So it seems like the forecasting sites expect a strong deviation from the historical trend.
Note that, if GPT-5 takes as long as GPT-4 did, we would expect it by the end of 2025. Not only do they expect that GPT-5 will not take longer than GPT-4, they seem to expect GPT-5 to be developed a whole ye
|
fab62af7-ff49-40f0-b341-20cceac7a620
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Washington DC: Singing
Discussion article for the meetup : Washington DC: Singing
WHEN: 20 April 2014 03:00:00PM (-0400)
WHERE: National Portrait Gallery, Washington, DC 20001, USA
We'll be meeting up to go singing!
Because this is probably not a good idea in the portrait gallery, we'll meet there, and then head out somewhere (Archives probably) after we've rendezvoused.
Discussion article for the meetup : Washington DC: Singing
|
40540853-d633-4957-9f39-055cd436d295
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Acoustic vs Electric Mandolin
When I bought my mandolin in 2013, I recorded some examples comparing it to my previous one. Now that I have an electric mandolin, I wanted to do something similar.
The acoustic mandolin is a Collings MT, and the electric is a Gold Tone GME-4. Here's how they sounded:
CHORDS
( acoustic mp3)
( electric mp3)
HIGH RIFF
( acoustic mp3)
( electric mp3)
LOW RIFF
( acoustic mp3)
( electric mp3)
HIGH MELODY
( acoustic mp3)
( electric mp3)
MEDIUM MELODY
( acoustic mp3)
( electric mp3)
LOW MELODY
( acoustic mp3)
( electric mp3)
The acoustic has relatively worn J74 medium phosphor bronze strings, while the electric has the steel strings that shipped with the instrument. I mic'd the acoustic the way I would play it live: about 2" from the 15th fret, where the neck meets the body, with a Sennheiser e835s. With the electric I had the tone knob at 10 (no low pass) and connected it to the board via a MXR M222 on bypass (details).
Overall, for most of the kind of playing I do, I strongly prefer the sound of the acoustic. It's much more complex, especially in the high end. On the other hand, the electric offers some options for sounds that I can't get on the acoustic, especially when paired with the talkbox, and it sounds good enough clean that I would be ok playing that way some. For Free Raisins gigs, where I'm playing mandolin almost all the time, I'm definitely going to continue bringing my acoustic, and might additionally bring my electric when that wouldn't be too much hassle. For Kingfisher gigs, however, where I mostly play keyboard and when I do play mandolin expect to mostly use pedals, I think I'll probably bring only my electric.
(My view might change after hearing how the electric sounds at a dance. Sometimes what sounds best in isolation isn't a good fit with other instruments, or in the chaos of a dance hall.)
|
43be7115-b401-493f-acc4-7e71d3326779
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Ritual
The room in which Jeffreyssai received his non-beisutsukai visitors was quietly formal, impeccably appointed in only the most conservative tastes. Sunlight and outside air streamed through a grillwork of polished silver, a few sharp edges making it clear that this wall was not to be opened. The floor and walls were glass, thick enough to distort, to a depth sufficient that it didn’t matter what might be underneath. Upon the surfaces of the glass were subtly scratched patterns of no particular meaning, scribed as if by the hand of an artistically inclined child (and this was in fact the case).
Elsewhere in Jeffreyssai’s home there were rooms of other style; but this, he had found, was what most outsiders expected of a Bayesian Master, and he chose not to enlighten them otherwise. That quiet amusement was one of life’s little joys, after all.
The guest sat across from him, knees on the pillow and heels behind. She was here solely upon the business of her Conspiracy, and her attire showed it: a form-fitting jumpsuit of pink leather with even her hands gloved—all the way to the hood covering her head and hair, though her face lay plain and unconcealed beneath.
And so Jeffreyssai had chosen to receive her in this room.
Jeffreyssai let out a long breath, exhaling. “Are you sure?”
“Oh,” she said, “and do I have to be absolutely certain before my advice can shift your opinions? Does it not suffice that I am a domain expert, and you are not?”
Jeffreyssai’s mouth twisted up at the corner in a half-smile. “How do you know so much about the rules, anyway? You’ve never had so much as a Planck length of formal training.”
“Do you even need to ask?” she said dryly. “If there’s one thing that you beisutsukai do love to go on about, it’s the reasons why you do things.”
Jeffreyssai inwardly winced at the thought of trying to pick up rationality by watching other people talk about it—
“And don’t inwardly wince at me like that,” she said. “I’m not trying to be a rationalist mys
|
7683204e-73e2-45e3-bb81-e5727000da04
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Confidence-aware motion prediction for real-time collision avoidance
Introduction

Motion planning serves a key role in robotics, enabling robots to automatically compute trajectories that achieve the specified objectives while avoiding unwanted collisions. In many situations of practical interest, such as autonomous driving and unmanned aerial vehicle (UAV) navigation, it is important that motion planning account not just for the current state of the environment, but also for its predicted future state. Often, certain objects in the environment may move in active, complex patterns that cannot be readily predicted using straightforward physics models; we shall refer to such complex moving objects as agents. Examples of agents considered in this paper include pedestrians and human-driven vehicles.

Predicting the future state of these agents is generally a difficult problem. Some of the key challenges include unclear and varying intents of other agents, mismatches between dynamics models and reality, incomplete sensor information, and interaction effects. One popular approach to addressing the challenge of a priori unknown agent intent is to use rule-based or data-driven algorithms to predict individual trajectories for each agent, as in Schmerling et al. (2017). Alternatively, Ziebart et al. (2009), Bandyopadhyay et al. (2013), and Kochenderfer et al. (2010) explicitly predict an agent's full state distribution over time; this representation may be better suited to capturing uncertainty in an agent's dynamics and the environment itself. Wang et al. (2019) and Fisac et al. (2018b) pose the prediction problem game-theoretically to model coupled human-robot interaction effects explicitly.

Unfortunately, a significant problem still remains: if an agent suddenly moves in a way that is not predicted, or not assigned sufficient probability, the robot may not react appropriately. For example, in Figure 1 a pedestrian is walking around an obstacle that the robot, a quadcopter, cannot detect.

[Figure 1]
To the robot, such behavior may be assigned very low probability, which could lead the robot to plan a dangerous trajectory. In this particular example, this inaccuracy caused the quadcopter to collide with the pedestrian (Figure 1, left). To prepare for this eventuality, we introduce the idea of confidence-aware prediction. We argue that, in addition to predicting the future state of an agent, it is also crucial for a robot to assess the quality of the mechanism by which it is generating those predictions. That is, a robot should reason about how confident it is in its predictions of other agents before attempting to plan future motion.

For computational efficiency, the quadcopter uses a simplified model of pedestrian dynamics and decision-making. Thus equipped, it generates a time-varying probability distribution over the future state of the pedestrian, and plans trajectories to a pre-specified goal that maintain a low probability of collision. Figure 1 (right) illustrates how this approach works in practice. The quadcopter maintains a Bayesian belief over its prediction confidence. As soon as the pedestrian moves in a way that was assigned low probability by the predictive model, the quadcopter adjusts its belief about the accuracy of that model. Consequently, it is less certain about what the pedestrian will do in the future. This leads the quadcopter's onboard motion planner, which attempts to find efficient trajectories with low probability of collision, to generate more cautious, and perhaps less efficient, motion plans.

In order to improve the robustness of generated motion plans, we employ the recent FaSTrack framework from Herbert et al. (2017) for fast and safe motion planning and tracking. FaSTrack quantifies the maximum possible tracking error between a high-order dynamical model of the physical robot and the (potentially lower-order) dynamical model used by its motion planner.
Solving an offline Hamilton-Jacobi reachability problem yields a guaranteed tracking error bound and the corresponding safety controller. These may be used by an out-of-the-box real-time motion planning algorithm to facilitate motion plans with strong runtime collision-avoidance guarantees.

The remainder of this paper is organized as follows. Section 2 places this work in the context of existing literature in human motion modeling and prediction, as well as robust motion planning. Section 3 frames the prediction and planning problems more formally, and introduces a running example used throughout the paper. Section 4 presents our main contribution: confidence-aware predictions. Section 5 showcases confidence-aware predictions in operation in several examples. Section 6 describes the application of the robust motion planning framework from FaSTrack to this setting, in which predictions are probabilistic. Section 7 explores a connection between our approach and reachability theory. Section 8 presents experimental results from a hardware demonstration. Finally, Section 9 concludes with a discussion of some of the limitations of our work and how they might be addressed in specific applications, as well as suggestions for future research.

Figure 1. When planning around humans, accurate predictions of human motion (visualized here in pink and blue, representing high and low probability, respectively) are an essential prerequisite for safety. Unfortunately, these approaches may fail to explain all observed motion at runtime (e.g., human avoids unmodeled spill on the ground), leading to inaccurate predictions, and potentially, collisions (left). Our method addresses this by updating its predictive model confidence in real time (right), leading to more conservative motion planning in circumstances when predictions are known to be suspect.
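The Bayesian confidence update sketched in the introduction can be illustrated with a Boltzmann ("noisily rational") observation model. All parameter values, the action set, and the names below are our own illustrative choices, not the paper's implementation.

```python
import math

betas = [0.1, 1.0, 10.0]   # candidate model-confidence levels (low to high)
belief = [1 / 3] * 3       # uniform prior over beta
Q = [1.0, 0.5, -1.0]       # modeled value of each candidate human action

def action_likelihood(a_idx, beta):
    """Boltzmann model: P(a | beta) proportional to exp(beta * Q[a])."""
    weights = [math.exp(beta * q) for q in Q]
    return weights[a_idx] / sum(weights)

def update_belief(belief, a_idx):
    """Bayesian update of the confidence belief after observing action a_idx."""
    post = [b * action_likelihood(a_idx, beta) for b, beta in zip(belief, betas)]
    total = sum(post)
    return [p / total for p in post]

# The human takes the action the model rates worst (e.g. dodging an unmodeled
# obstacle): belief mass shifts toward low beta, so downstream predictions
# become higher-entropy and the planner behaves more cautiously.
belief = update_belief(belief, a_idx=2)
print(belief)
```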
Prior work
Human modeling and prediction

One common approach for predicting human actions is to collect data from real-world scenarios and train a machine learning model via supervised learning. Such techniques use the human's current state, and potentially her prior state and action history, to predict future actions directly. Amor et al. (2014), Ding et al. (2011), Koppula and Saxena (2013), Lasota and Shah (2015), and Hawkins et al. (2013) demonstrated the effectiveness of this approach for inference and planning around human arm motion. In addition, Hawkins et al. (2013) focused on multi-step tasks such as assembly, and Schmerling et al. (2017) and Driggs-Campbell et al. (2018) addressed the prediction problem for human drivers. Rather than predicting actions directly, an alternative is for the robot to model the human as a rational agent seeking to maximize an unknown objective function. The human's actions up to a particular time may be viewed as Bayesian evidence from which the robot may infer the parameters of that objective. Assuming that the human seeks to maximize this objective in the future, the robot can predict her future movements (e.g., Bai et al., 2015; Baker et al., 2007; Ng and Russell, 2000; Ziebart et al., 2009). In this paper, we build on this work by introducing a principled online technique for estimating confidence in such a learned model of human motion.
Safe robot motion planning

Once armed with a predictive model of the human motion, the robot may leverage motion planning methods that plan around uncertain moving obstacles and generate real-time dynamically feasible and safe trajectories. To avoid moving obstacles in real time, robots typically employ reactive and/or path-based methods. Reactive methods directly map sensor readings into control, with no memory involved (e.g., Belkhouche, 2009). Path-based methods such as rapidly-exploring random trees from Karaman and Frazzoli (2011) and A* from Hart et al. (1968) find simple kinematic paths through space and, if necessary, time. These path-based methods of planning are advantageous in terms of efficiency; yet, while they have in some cases been combined with probabilistically moving obstacles, as in Aoude et al. (2013) and Ziebart et al. (2009), they do not consider the endogenous dynamics of the robot or exogenous disturbances such as wind. As a result, the robot may deviate from the planned path and potentially collide with obstacles. It is common for these plans to try to avoid obstacles by a heuristic margin of error. Herbert et al. (2017) and Fridovich-Keil et al. (2018) proposed FaSTrack, a recent algorithm that provides a guaranteed tracking error margin and corresponding error-feedback controller for dynamic systems tracking a generic planner in the presence of bounded external disturbance. Our work builds upon FaSTrack to create an algorithm that can safely and dynamically navigate around uncertain moving obstacles in real time.
3. Problem setup

We consider a single mobile robot operating in a shared space with a single human agent (e.g., a pedestrian or human-driven car). For simplicity, we presume that the robot has full knowledge of its own state and that of the human, although both would require online estimation in practice. As we present each formal component of this problem, we will provide a concrete illustration using a running example in which a quadcopter is navigating around a pedestrian.
Dynamical system models and safety

We will model the motion of both the human and the robot as the evolution of two dynamical systems. Let the state of the human be $x_H \in \mathbb{R}^{n_H}$, where $n_H$ is the dimension of the human state space. We similarly define the robot's state, for planning purposes, as $x_R \in \mathbb{R}^{n_R}$. In general, these states could represent the positions and velocities of a mobile robot and a human in a shared environment, the kinematic configurations of a human and a robotic manipulator in a common workspace, or the positions, orientations, and velocities of human-driven and autonomous vehicles in an intersection. We express the evolution of these states over time as a family of ordinary differential equations:

$$\dot{x}_H = f_H(x_H, u_H), \qquad \dot{x}_R = f_R(x_R, u_R) \tag{1}$$

where $u_H \in \mathbb{R}^{m_H}$ and $u_R \in \mathbb{R}^{m_R}$ are the control actions of the human and robot, respectively.

Running example: We introduce a running example for illustration purposes throughout the paper. In this example we consider a small quadcopter that needs to fly to a goal location $g_R \in \mathbb{R}^3$ in a room where a pedestrian is walking. For the purposes of planning, the quadcopter's 3D state is given by its position in space $x_R = [p_x, p_y, p_z]$, with velocity controls assumed decoupled in each spatial direction, up to $v_R = 0.25$ m/s. The human can only move by walking and therefore her state is given by planar coordinates $x_H = [h_x, h_y]$ evolving as $\dot{x}_H = [v_H \cos u_H,\; v_H \sin u_H]$. Intuitively, we model the human as moving with a fixed speed and controlling their heading angle. At any given time, the human is assumed to either move at a leisurely walking speed ($v_H \approx 1$ m/s) or remain still ($v_H \approx 0$).

Ultimately, the robot needs to plan and execute an efficient trajectory to a pre-specified goal state $g_R$, without colliding with the human. We define the keep-out set $K \subset \mathbb{R}^{n_H} \times \mathbb{R}^{n_R}$ as the set of joint robot-human states to be avoided (for example, because they imply physical collisions).
To avoid reaching this set, the robot must reason about the human's future motion when constructing its own motion plan.

Running example: In our quadcopter-avoiding-pedestrian example, $K$ consists of joint robot-human states in which the quadcopter is flying within a square of side length $l = 0.3$ m centered around the human's location, at any altitude, as well as any joint states in which the robot is outside the environment bounds, defined as a box with a square base of side $L = 3.66$ m and height $H = 2$ m, regardless of the human's state.
Robust robot control

Provided an objective and a dynamics model, the robot must generate a motion plan that avoids the keep-out set $K$. Unfortunately, this safety requirement is difficult to meet during operation for two main reasons.

1. Model mismatch. The dynamical system model $f_R$ will never be a perfect representation of the real robot. This mismatch could lead to unintended collision.
2. Disturbances. Even with a perfect dynamics model, there may be unobserved, external "disturbance" inputs such as wind or friction. Without accounting for these disturbances, the system is not guaranteed to avoid $K$, even if the planned trajectory is pointwise collision-free.

To account for modeling error and external disturbances, we could in principle design a higher-fidelity dynamical model directly in a robust motion planning framework. Unfortunately, however, real-time trajectory optimization in high dimensions can be computationally burdensome, particularly when we also require some notion of robustness to external disturbance. Ideally, we would like to enjoy the computational benefits of planning with a lower-fidelity model while enforcing the safety constraints induced by the higher-fidelity model. To characterize this model mismatch, we consider a higher-fidelity and typically higher-order dynamical representation of the robot, with state representation $s_R \in \mathbb{R}^{n_S}$. This dynamical model also explicitly accounts for external disturbances as unknown bounded inputs, distinct from control inputs. In order to map between this higher-fidelity "tracking" state $s_R$ and the lower-fidelity "planning" state $x_R$, we assume a known projection operator $\pi : \mathbb{R}^{n_S} \to \mathbb{R}^{n_R}$. Fortunately, we can plan in the lower-dimensional state space at runtime, and guarantee robust collision avoidance via an offline reachability analysis that quantifies the effects of model mismatch and external disturbance. This framework, called FaSTrack and first proposed by Herbert et al.
(2017), is described in further detail in Section 6.

Running example: We model our quadcopter with the following flight dynamics (in the near-hover regime, at zero yaw with respect to a global coordinate frame):

$$\begin{bmatrix}\dot{p}_x\\ \dot{p}_y\\ \dot{p}_z\end{bmatrix} = \begin{bmatrix}v_x\\ v_y\\ v_z\end{bmatrix}, \qquad \begin{bmatrix}\dot{v}_x\\ \dot{v}_y\\ \dot{v}_z\end{bmatrix} = \begin{bmatrix}a_g \tan u_\theta\\ -a_g \tan u_\phi\\ u_T - a_g\end{bmatrix} \tag{2}$$

where $[p_x, p_y, p_z]$ is the quadcopter's position in space and $[v_x, v_y, v_z]$ is its velocity expressed in the fixed global frame. We model its control inputs as thrust acceleration $u_T$ and attitude angles (roll $u_\phi$ and pitch $u_\theta$), and denote the acceleration due to gravity as $a_g$. The quadcopter's motion planner generates nominal kinematic trajectories in the lower-dimensional $[p_x, p_y, p_z]$ position state space. Therefore, we have a linear projection map $\pi(s_R) = [I_3, 0_3]\, s_R$; that is, $x_R$ retains the position variables in $s_R$ and discards the velocities.
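To make the running example concrete, the near-hover model (2) and the projection map $\pi$ can be sketched as follows. This is an illustrative implementation only: the forward-Euler integration, step size, and initial velocity are assumptions for the sketch, not values from the paper.

```python
import numpy as np

A_G = 9.81  # gravitational acceleration [m/s^2]

def quad_dynamics(s, u):
    """Near-hover dynamics (2): s = [px, py, pz, vx, vy, vz], u = [u_phi, u_theta, u_T]."""
    vx, vy, vz = s[3:]
    u_phi, u_theta, u_T = u
    return np.array([vx, vy, vz,
                     A_G * np.tan(u_theta),
                     -A_G * np.tan(u_phi),
                     u_T - A_G])

def project(s):
    """Projection map pi(s_R) = [I_3, 0_3] s_R: keep positions, discard velocities."""
    return s[:3]

# Forward-Euler rollout under hover thrust (u_T = a_g) and level attitude:
# accelerations vanish, so velocities stay constant and position integrates linearly.
s = np.zeros(6)
s[3:] = [0.1, 0.0, 0.0]  # assumed initial x-velocity of 0.1 m/s
for _ in range(10):
    s = s + 0.1 * quad_dynamics(s, np.array([0.0, 0.0, A_G]))
```

With zero attitude and hover thrust, the rollout drifts only along $x$, and `project` recovers the planning state $x_R$ from the tracking state $s_R$.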
Predictive human model

In order to predict the human's future motion, the robot uses its internal model of human dynamics, $f_H$. Under this modeling assumption, the human's future trajectory depends upon the choice of control input over time, $u_H(\cdot)$. Extensive work in econometrics and cognitive science, such as that of Von Neumann and Morgenstern (1945), Luce (1959), and Baker et al. (2007), has shown that human behavior (that is, $u_H$) can be well modeled by utility-driven optimization. Thus, the robot models the human as optimizing a reward function, $r_H(x_H, u_H; \theta)$, that depends on the human's state and action, as well as a set of parameters $\theta$. This reward function could be a linear combination of features as in many inverse optimal control implementations (where the goal or feature weighting $\theta$ must be learned, either online or offline), or more generally learned through function approximators such as deep neural networks, where $\theta$ are the trained weights as in Finn et al. (2016). We assume that the robot has a suitable human reward function $r_H$, either learned offline from prior human demonstrations or otherwise encoded by the system designers. Thus, endowed with $r_H$, the robot can model the human's choice of control action as a probability distribution over actions conditioned on state. Under maximum-entropy assumptions (Ziebart et al., 2008) inspired by noisy-rationality decision-making models (Baker et al., 2007), the robot models the human as more likely to choose (discrete) actions $u_H$ with high expected utility, in this case the state-action value (or Q-value):

$$P(u_H \mid x_H; \beta, \theta) = \frac{e^{\beta Q_H(x_H, u_H; \theta)}}{\sum_{\tilde{u}} e^{\beta Q_H(x_H, \tilde{u}; \theta)}} \tag{3}$$

We use a temporally and spatially discretized version of the human dynamics, $\tilde{f}_H$. These discrete-time dynamics may be found by integrating $f_H$ over a fixed time step $\Delta t$ with fixed control $u_H$ over the interval. Section 5 provides further details on this discretization.
Running example: The quadcopter's model of the human assumes the human intends to reach some target location $g_H \in \mathbb{R}^2$ in a straight line. The human's reward function is given by the distance traveled over time step $\Delta t$, i.e., $r_H(x_H, u_H; g_H) = -v_H \Delta t$, and human trajectories are constrained to terminate at $g_H$. The state-action value, parameterized by $\theta = g_H$, captures the optimal cost of reaching $g_H$ from $x_H$ when initially applying $u_H$ for a duration $\Delta t$:

$$Q_H(x_H, u_H; g_H) = -v_H \Delta t - \big\| x_H + v_H \Delta t\, [\cos u_H, \sin u_H]^\top - g_H \big\|_2$$

Often, the coefficient $\beta$ is termed the rationality coefficient, because it quantifies the degree to which the robot expects the human's choice of control to align with its model of utility. For example, taking $\beta \downarrow 0$ yields a model of a human who appears "irrational," choosing actions uniformly at random and completely ignoring the modeled utility. At the other extreme, taking $\beta \uparrow \infty$ corresponds to a "perfectly rational" human, whose actions exactly optimize the modeled reward function. As we will see in Section 4, $\beta$ can also be viewed as a measure of the robot's confidence in the predictive accuracy of $Q_H$. Note that $Q_H(x_H, u_H; \theta)$ depends only on the human state and action, and not on the robot's. Thus far, we have intentionally neglected discussion of human-robot interaction effects. These effects are notoriously difficult to model, and the community has devoted significant effort to building and validating a variety of models (e.g., Sadigh et al., 2016; Trautman and Krause, 2010). In that spirit, we could have chosen to model human actions $u_H$ as dependent upon the robot state $x_R$ in (3), and likewise defined $Q_H$ to depend upon $x_R$. This extended formulation is sufficiently general as to encompass all possible (Markov) interaction models.
However, in this work we explicitly do not model these interactions; indeed, one of the most important virtues of our approach is its robustness to precisely these sorts of modeling errors.
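As an illustration of the noisy-rational action model (3) in the running example, the following sketch evaluates the pedestrian's Q-values and the resulting softmax policy over a discretized set of headings. The heading discretization, speed, and time step are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

V_H, DT = 1.0, 0.25  # assumed walking speed [m/s] and time step [s]
HEADINGS = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)  # discretized heading controls

def q_value(x, u, goal):
    """Q_H(x_H, u_H; g_H): one-step cost plus remaining straight-line cost to the goal."""
    nxt = x + V_H * DT * np.array([np.cos(u), np.sin(u)])
    return -V_H * DT - np.linalg.norm(nxt - goal)

def action_distribution(x, goal, beta):
    """Noisy-rational policy (3): P(u | x; beta) proportional to exp(beta * Q)."""
    q = np.array([q_value(x, u, goal) for u in HEADINGS])
    logits = beta * (q - q.max())  # subtract the max for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

Taking $\beta \to 0$ recovers the uniform distribution over headings, while large $\beta$ concentrates probability on the goal-directed heading, matching the two limits discussed above.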
Probabilistically safe motion planning

Ideally, the robot's motion planner should generate trajectories that reach a desired goal state efficiently, while maintaining safety. More specifically, in this context "safety" indicates that the physical system will never enter the keep-out set $K$ during operation, despite human motion and external disturbances. That is, we would like to guarantee that $(\pi(s_R), x_H) \notin K$ for all time. Making this type of strong, deterministic, a priori safety guarantee requires the robot to avoid the set of all human states $x_H$ which could possibly be occupied at a particular time, i.e., the human's forward reachable set. If the robot can find trajectories that are safe for any possible human trajectory, then there is no need to predict the human's next action. Unfortunately, the forward reachable set of the human often encompasses such a large volume of the workspace that it is impossible for the robot to find a guaranteed safe trajectory to the goal state. This motivates refining our notion of prediction: rather than reasoning about all the places where the human could be, the robot can instead reason about how likely the human is to be at each location. This probabilistic reasoning provides a guide for planning robot trajectories with a quantitative degree of safety assurance. Our probabilistic model of human control input (3), coupled with the dynamics model $f_H$, allows us to compute a probability distribution over human states for every future time. By relaxing our conception of safety to consider only collisions that might occur with sufficient probability $P_{\mathrm{th}}$, we dramatically reduce the effective volume of this set of future states to avoid. In practice, $P_{\mathrm{th}}$ should be chosen carefully by a system designer in order to trade off overall collision probability with conservativeness in motion planning.
The proposed approach in this paper follows two central steps to provide a quantifiable, high-confidence collision avoidance guarantee for the robot's motion around the human. In Section 4 we present our proposed Bayesian framework for reasoning about the uncertainty inherent in a model's prediction of human behavior. Based on this inference, we demonstrate how to generate a real-time probabilistic prediction of the human's motion over time. Next, in Section 6, we extend a state-of-the-art, provably safe, real-time robotic motion planner to incorporate our time-varying probabilistic human prediction.
4. Confidence-aware human motion prediction

Any approach to human motion prediction short of computing a full forward reachable set must, explicitly or implicitly, reflect a model of human decision-making. In this work, we make that model explicit by assuming that the human chooses control actions in a Markovian fashion according to the probability distribution (3). Other work in the literature, such as that of Schmerling et al. (2017), aims to learn a generative probabilistic model for human trajectories; implicitly, this training procedure distills a model of human decision-making. Whether explicit or implicit, these models are by nature imperfect and liable to make inaccurate predictions eventually. One benefit of using an explicit model of human decision-making, such as (3), is that we may reason directly and succinctly about its performance online. In particular, the entropy of the human control distribution in (3) is a decreasing function of the parameter $\beta$. High values of $\beta$ place more probability mass on high-utility control actions $u_H$, whereas low values of $\beta$ spread the probability mass more evenly between different control inputs, regardless of their modeled utility $Q_H$. Therefore, $\beta$ naturally quantifies how well the human's motion is expected to agree with the notion of optimality encoded in $Q_H$. The commonly used term "rationality coefficient," however, seems to imply that discrepancies between the two indicate a failure on the human's part to make the "correct" decisions, as encoded by the modeled utility. Instead, we argue that these inevitable disagreements are primarily a result of the model's inability to fully capture the human's behavior. Thus, instead of conceiving of $\beta$ as a rationality measure, we believe that $\beta$ can be given a more pragmatic interpretation related to the accuracy with which the robot's model of the human is able to explain the human's motion. Consistently, in this paper, we refer to $\beta$ as model confidence.
An important related observation following from this interpretation of $\beta$ is that the predictive accuracy of a human model is likely to change over time. For example, the human may change their mind unexpectedly, or react suddenly to some aspect of the environment that the robot is unaware of. Therefore, we shall model $\beta$ as an unobserved, time-varying parameter. Estimating it in real time provides us with a direct, quantitative summary of the degree to which the utility model $Q_H$ explains the human's current motion. To do this, we maintain a Bayesian belief about the possible values of $\beta$. Initially, we begin with a uniform prior over $\beta$, and over time this distribution evolves given measurements of the human's state and actions.
Real-time inference of model confidence

We reason about the model confidence $\beta$ as a hidden state in a hidden Markov model (HMM) framework. The robot starts with a prior belief $b^0_-$ over the initial value of $\beta$. In this work, we use a uniform prior, although that is not strictly necessary. At each discrete time step $k \in \{0, 1, 2, \ldots\}$, it will have some belief about model confidence, $b^k_-(\beta)$. After observing a human action $u^k_H$, the robot will update its belief to $b^k_+$ by applying Bayes' rule. The hidden state may evolve between subsequent time steps, accounting for the important fact that the predictive accuracy of the human model may change over time as unmodeled factors in the human's behavior become more or less relevant. As, by definition, we do not have access to a model of these factors, we use a naive "$\epsilon$-static" transition model: at each time $k$, $\beta$ may, with some probability $\epsilon$, be re-sampled from the initial distribution $b^0_-$, and otherwise retains its previous value. We define the belief over the next value of $\beta$ (denoted by $\beta'$) as an expectation of the conditional probability $P(\beta' \mid \beta)$, i.e., $b^k_-(\beta') := \mathbb{E}_{\beta \sim b^{k-1}_+}[P(\beta' \mid \beta)]$. Concretely, this expectation may be computed as

$$b^k_-(\beta') = (1 - \epsilon)\, b^{k-1}_+(\beta') + \epsilon\, b^0_-(\beta') \tag{4}$$

By measuring the evolution of the human's state $x_H$ over time, we assume that, at every time step $k$, the robot is able to observe the human's control input $u^k_H$. This observed control may be used as evidence to update the robot's belief $b^k_-$ about $\beta$ over time via a Bayesian update:

$$b^k_+(\beta) = \frac{P(u^k_H \mid x^k_H; \beta, \theta)\, b^k_-(\beta)}{\sum_{\tilde{\beta}} P(u^k_H \mid x^k_H; \tilde{\beta}, \theta)\, b^k_-(\tilde{\beta})} \tag{5}$$

with $b^k_+(\beta) := P(\beta \mid x^{0:k}_H, u^{0:k}_H)$ for $k \in \{0, 1, \ldots\}$, and $P(u^k_H \mid x^k_H; \beta, \theta)$ given by (3). It is critical to be able to perform this update rapidly to facilitate real-time operation; this would be difficult in the original continuous hypothesis space $\beta \in [0, \infty)$, or even in a large discrete set.
Fortunately, our software examples in Section 5 and hardware demonstration in Section 8 suggest that maintaining a Bayesian belief over a relatively small set of $N_\beta = 5$ discrete values of $\beta$ distributed on a log scale achieves significant improvement relative to using a fixed value. The "$\epsilon$-static" transition model leads to the desirable consequence that old observations of the human's actions have a smaller influence on the current model confidence distribution than recent observations. In fact, if no new observations are made, successively applying time updates asymptotically contracts the belief towards the initial distribution, that is, $b^k_-(\cdot) \to b^0_-(\cdot)$. The choice of parameter $\epsilon$ effectively controls the rate of this contraction, with higher $\epsilon$ leading to more rapid contraction.
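The time and measurement updates (4) and (5) over a small log-spaced grid of $\beta$ values can be sketched as follows; the grid bounds and the value of $\epsilon$ are illustrative choices, not values reported in the paper.

```python
import numpy as np

BETAS = np.logspace(-1, 1, 5)                   # N_beta = 5 log-spaced hypotheses (assumed range)
EPS = 0.05                                      # assumed epsilon-static re-sampling probability
PRIOR = np.full(len(BETAS), 1.0 / len(BETAS))   # uniform initial belief b^0_-

def time_update(belief):
    """Eq. (4): contract the belief toward the initial (uniform) prior."""
    return (1.0 - EPS) * belief + EPS * PRIOR

def measurement_update(belief, action_likelihoods):
    """Eq. (5): Bayes update; action_likelihoods[i] = P(u_H^k | x_H^k; beta_i)."""
    post = belief * action_likelihoods
    return post / post.sum()
```

With no measurements, repeated time updates contract any belief back to the prior at a rate governed by $\epsilon$, as noted above; each observed action then reweights the grid by the likelihood it assigns under (3).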
Human motion prediction

Equipped with a belief over $\beta$ at time step $k$, we are now able to propagate the human's state distribution forward to any future time via the well-known Kolmogorov forward equations, applied recursively. In particular, suppose that we know the probability that the human is in each state $x^\kappa_H$ at some future time step $\kappa$. We know that (according to our utility model) the probability of the human choosing control $u^\kappa_H$ in state $x^\kappa_H$ is given by (3). Accounting for the otherwise deterministic dynamics model $\tilde{f}_H$, we obtain the following expression for the human's state distribution at the following time step $\kappa + 1$:

$$P(x^{\kappa+1}_H; \beta, \theta) = \sum_{x^\kappa_H,\, u^\kappa_H} P(x^{\kappa+1}_H \mid x^\kappa_H, u^\kappa_H; \beta, \theta)\, P(u^\kappa_H \mid x^\kappa_H; \beta, \theta)\, P(x^\kappa_H; \beta, \theta) \tag{6}$$

for a particular choice of $\beta$. Marginalizing over $\beta$ according to our belief at the current time step $k$, we obtain the overall occupancy probability distribution at each future time step $\kappa$:

$$P(x^\kappa_H; \theta) = \mathbb{E}_{\beta \sim b^k}\big[P(x^\kappa_H; \beta, \theta)\big] \tag{7}$$

Note that (6) is expressed more generally than is strictly required. Indeed, because the only randomness in the dynamics model $\tilde{f}_H$ originates from the human's choice of control input $u_H$, we have $P(x^{\kappa+1}_H \mid x^\kappa_H, u^\kappa_H; \beta, \theta) = \mathbb{1}\{x^{\kappa+1}_H = \tilde{f}_H(x^\kappa_H, u^\kappa_H)\}$.
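A minimal sketch of the recursion (6) and the marginalization (7), for a finite state grid with deterministic dynamics encoded as a lookup table; the function names and the toy three-cell dynamics are assumptions for illustration only.

```python
import numpy as np

def propagate(occupancy, policy, transition):
    """One step of the recursion (6) on a finite state grid.

    occupancy: (S,) current state distribution P(x);
    policy: (S, U) action probabilities P(u | x) from (3) for a fixed beta;
    transition: (S, U) integer table, transition[x, u] = index of f_H(x, u).
    """
    nxt = np.zeros_like(occupancy)
    S, U = policy.shape
    for x in range(S):
        for u in range(U):
            nxt[transition[x, u]] += policy[x, u] * occupancy[x]
    return nxt

def marginal_prediction(occupancies_per_beta, belief):
    """Eq. (7): average the per-beta occupancies under the current beta belief."""
    return np.einsum("b,bs->s", belief, occupancies_per_beta)

# Toy demo: three cells on a line, one "move right" action (absorbing at the end).
transition = np.array([[1], [2], [2]])
policy = np.ones((3, 1))  # the single action is chosen with probability 1
occ = propagate(np.array([1.0, 0.0, 0.0]), policy, transition)
```

Because the dynamics are deterministic, the indicator in (6) reduces to the table lookup above; probability mass simply flows along $\tilde{f}_H$ weighted by the policy.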
Model confidence with auxiliary parameter identification

Thus far, we have tacitly assumed that the only unknown parameter in the human utility model (3) is the model confidence, $\beta$. However, often one or more of the auxiliary parameters $\theta$ are also unknown. These auxiliary parameters could encode one or more human goal states or intents, or other characteristics of the human's utility, such as her preference for avoiding parts of the environment. Further, much like model confidence, they may change over time. In principle, it is possible to maintain a Bayesian belief over $\beta$ and $\theta$ jointly. The Bayesian update for the hidden state $(\beta, \theta)$ is then given by

$$b^k_+(\beta, \theta) = \frac{P(u^k_H \mid x^k_H; \beta, \theta)\, b^k_-(\beta, \theta)}{\sum_{\tilde{\beta}, \tilde{\theta}} P(u^k_H \mid x^k_H; \tilde{\beta}, \tilde{\theta})\, b^k_-(\tilde{\beta}, \tilde{\theta})} \tag{8}$$

with $b^k_-$ the prior at time step $k$. This approach can be practical for parameters taking finitely many values from a small, discrete set, e.g., possible distinct modes for a human driver (distracted, cautious, aggressive). However, for certain scenarios or approaches it may not be practical to maintain a full Bayesian belief on the parameters $\theta$. In such cases, it is reasonable to replace the belief over $\theta$ with a point estimate $\hat{\theta}$, such as the maximum likelihood estimator or the mean, and substitute that estimate into (6). Depending on the complexity of the resulting maximum likelihood estimation problem, it may or may not be computationally feasible to update the parameter estimate $\hat{\theta}$ at each time step. Fortunately, even when it is computationally expensive to estimate $\theta$, we can leverage our model confidence as an indicator of when re-estimating these parameters may be most useful. That is, degrading model confidence may indicate poor estimates of $\theta$.
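For a small discrete hypothesis space, e.g., a handful of $\beta$ values and candidate goals, the joint update (8) over $(\beta, \theta)$ reduces to elementwise multiplication and normalization. The following sketch, with hypothetical array shapes, illustrates it alongside a $\theta$ point estimate of the kind discussed above.

```python
import numpy as np

def joint_update(belief, likelihood):
    """Eq. (8): joint Bayes update over (beta, theta).

    belief, likelihood: (N_beta, N_theta) arrays, where likelihood[i, j] is
    P(u_H^k | x_H^k; beta_i, theta_j) for the observed human action.
    """
    post = belief * likelihood
    return post / post.sum()

def beta_marginal(belief):
    """Marginal model-confidence belief, summing out the auxiliary parameters."""
    return belief.sum(axis=1)

def theta_mode(belief):
    """Point estimate of theta (e.g., the most likely goal), marginalizing beta."""
    return int(np.argmax(belief.sum(axis=0)))
```

When the full joint belief is too expensive, `theta_mode` illustrates the point-estimate substitution into (6) described above.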
5. Prediction examples

We illustrate these inference steps with two sets of examples: our running pedestrian example and a simple model of a car.
Pedestrian model (running example)

So far, we have presented a running example of a quadcopter avoiding a human. We use a deliberately simple, purely kinematic model of continuous-time human motion:

$$\dot{x}_H = \begin{bmatrix}\dot{h}_x\\ \dot{h}_y\end{bmatrix} = \begin{bmatrix}v_H \cos u_H\\ v_H \sin u_H\end{bmatrix} \tag{9}$$

However, as discussed in Section 3.3, the proposed prediction method operates in discrete time (and space). The discrete dynamics corresponding to (9) are given by

$$x^{k+1}_H - x^k_H \equiv x_H(t + \Delta t) - x_H(t) = \begin{bmatrix}v_H \Delta t \cos u_H(t)\\ v_H \Delta t \sin u_H(t)\end{bmatrix} \tag{10}$$

for a time discretization of $\Delta t$.
Dubins car model

To emphasize the generality of our method, we present similar results for a different application domain: autonomous driving. We will model a human-driven vehicle as a dynamical system whose state $x_H$ evolves as

$$\dot{x}_H = \begin{bmatrix}\dot{h}_x\\ \dot{h}_y\\ \dot{h}_\phi\end{bmatrix} = \begin{bmatrix}v_H \cos h_\phi\\ v_H \sin h_\phi\\ u_H\end{bmatrix} \tag{11}$$

Observe that, while (11) appears very similar to (9), in this Dubins car example the angle of motion is a state, not a control input. We discretize these dynamics by integrating (11) from $t$ to $t + \Delta t$, assuming a constant control input $u_H$:

$$x^{k+1}_H - x^k_H \equiv x_H(t + \Delta t) - x_H(t) = \begin{bmatrix} \frac{v_H}{u_H(t)}\big(\sin(h_\phi(t) + u_H(t)\Delta t) - \sin(h_\phi(t))\big)\\[2pt] -\frac{v_H}{u_H(t)}\big(\cos(h_\phi(t) + u_H(t)\Delta t) - \cos(h_\phi(t))\big)\\[2pt] u_H \Delta t \end{bmatrix}$$

For a specific goal position $g = [g_x, g_y]$, the Q-value corresponding to state-action pair $(x_H, u_H)$ and reward function $r_H(x_H, u_H) = -v_H \Delta t$ (until the goal is reached) may be found by solving a shortest path problem offline.
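The closed-form discretization above can be implemented directly. The following sketch, with an illustrative fixed speed, applies one exact integration step of (11) under a constant turn rate, falling back to the straight-line limit as $u_H \to 0$.

```python
import numpy as np

V_H = 1.0  # fixed speed [m/s], an illustrative value

def dubins_step(x, u, dt):
    """One exact integration step of (11): x = [hx, hy, h_phi], constant turn rate u."""
    hx, hy, phi = x
    if abs(u) < 1e-9:  # straight-line limit as u -> 0
        return np.array([hx + V_H * dt * np.cos(phi),
                         hy + V_H * dt * np.sin(phi),
                         phi])
    return np.array([
        hx + (V_H / u) * (np.sin(phi + u * dt) - np.sin(phi)),
        hy - (V_H / u) * (np.cos(phi + u * dt) - np.cos(phi)),
        phi + u * dt,
    ])
```

A quick sanity check of the closed form: with unit speed and unit turn rate, integrating for $2\pi$ seconds traces a full circle back to the starting position.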
Accurate model

First, we consider a scenario in which the robot has full knowledge of the human's goal, and the human moves along the shortest path from a start location to this known goal state. Thus, human motion is well-explained by $Q_H$. The first row of Figure 2 illustrates the probability distributions our method predicts for the pedestrian's future state at different times. Initially, the predictions generated by our Bayesian confidence-inference approach (right) appear similar to those generated by the low model confidence predictor (left). However, our method rapidly discovers that $Q_H$ is an accurate description of the pedestrian's motion and generates predictions that match the high model confidence predictor (center). The data used in this example was collected by tracking the motion of a real person walking in a motion capture arena. See Section 8 for further details. Likewise, the first row of Figure 3 shows similar results for a human-driven Dubins car model (in simulation) at an intersection. Here, traffic laws provide a strong prior on the human's potential goal states. As shown, our method of Bayesian model confidence inference quickly infers the correct goal and learns that the human driver is acting in accordance with its model $Q_H$. The resulting predictions are substantially similar to those of the high-$\beta$ predictor. The data used in this example was simulated by controlling a Dubins car model along a pre-specified trajectory.
Unmodeled obstacle

Often, robots do not have fully specified models of the environment. Here, we showcase the resilience of our approach to unmodeled obstacles that the human must avoid. In this scenario, the human has the same start and goal as in the accurate model case, except that there is an obstacle along the way. The robot is unaware of this obstacle, however, which means that in its vicinity the human's motion is not well-explained by $Q_H$, and the belief $b(\beta)$ ought to place more probability mass on lower values of $\beta$. The second rows of Figure 2 and Figure 3 illustrate this type of situation for the pedestrian and Dubins car, respectively. In Figure 2, the pedestrian walks to an a priori known goal location and avoids an unmodeled spill on the ground. Analogously, in Figure 3 the car swerves to avoid a large pothole. By inferring model confidence online, our approach generates higher-variance predictions of future state, but only in the vicinity of these unmodeled obstacles. At other times throughout the episode, when $Q_H$ is more accurate, our approach produces predictions more in line with the high model confidence predictor.
Unmodeled goal

In most realistic human-robot encounters, even if the robot does have an accurate environment map and observes all obstacles, it is unlikely for it to be aware of all human goals. We test our approach's resilience to unknown human goals by constructing a scenario in which the human moves between both known and unknown goals. The third row of Figure 2 illustrates this situation for the pedestrian example. Here, the pedestrian first moves to one known goal position, then to another, and finally back to the start, which was not a modeled goal location. The first two legs of this trajectory are consistent with the robot's model of goal-oriented motion, though accurate prediction does require the predictor to infer which goal the pedestrian is walking toward. However, when the pedestrian returns to the start, her motion appears inconsistent with $Q_H$, skewing the robot's belief over $\beta$ toward zero. Similarly, in the third row of Figure 3 we consider a situation in which a car makes an unexpected turn onto an unmapped access road. As soon as the driver initiates the turn, our predictor rapidly learns to distrust its internal model $Q_H$ and shifts its belief over $\beta$ downward.
6. Safe probabilistic planning and tracking

Given probabilistic predictions of the human's future motion, the robot must plan efficient trajectories that avoid collision with high probability. In order to reason robustly about this probability of future collision, we must account for potential tracking errors incurred by the real system as it follows planned trajectories. To this end, we build on the recent FaSTrack framework of Herbert et al. (2017), which provides control-theoretic robust safety certificates in the presence of deterministic obstacles, and extend it to achieve approximate probabilistic collision-avoidance.
Background: fast planning, safe tracking

Recall that $x_R$ is the robot's state for the purposes of motion planning, and that $s_R$ encodes a higher-fidelity, potentially higher-dimensional notion of state (with associated dynamics). The recently proposed FaSTrack framework from Herbert et al. (2017) uses Hamilton-Jacobi reachability analysis to quantify the worst-case tracking performance of the $s_R$-system as it follows trajectories generated by the $x_R$-system. For further reading on reachability analysis, refer to Evans and Souganidis (1984), Mitchell et al. (2005), and Bansal et al. (2017). A byproduct of this FaSTrack analysis is an error feedback controller that the $s_R$-system can use to achieve this worst-case tracking error. The tracking error bound may be given to one of many off-the-shelf real-time motion planning algorithms operating in $x_R$-space in order to guarantee real-time collision avoidance by the $s_R$-system. Formally, FaSTrack precomputes an optimal tracking controller, as well as a corresponding compact set $E$ in the robot's planning state space, such that $(\pi(s_R(t)) - x_{R,\mathrm{ref}}(t)) \in E$ for any reference trajectory proposed by the lower-fidelity planner. This bound $E$ is a trajectory tracking certificate that can be passed to an online planning algorithm for real-time safety verification: the dynamical robot is guaranteed to always be somewhere within the bound relative to the current planned reference point $x_{R,\mathrm{ref}}(t)$. This tracking error bound may sometimes be expressed analytically; otherwise, it may be computed numerically offline using level set methods (e.g., Mitchell, 2009). Equipped with $E$, the planner can generate safe plans online by ensuring that the entire tracking error bound around the nominal state remains collision-free throughout the trajectory. Efficiently checking these $E$-augmented trajectories for collisions with known obstacles is critical for real-time performance.
Note that the planner only needs to know $E$ (which is computed offline) and otherwise requires no explicit understanding of the high-fidelity model.

Running example: As dynamics (2) are decoupled in the three spatial directions, the bound $E$ computed by FaSTrack is an axis-aligned box of dimensions $E_x \times E_y \times E_z$. For further details, refer to Fridovich-Keil et al. (2018).
Robust tracking, probabilistic safety. Unfortunately, planning algorithms for collision checking against deterministic obstacles cannot be readily applied to our problem. Instead, a trajectory's collision check should return the probability that it might lead to a collision. Based on this probability, the planning algorithm can discriminate between trajectories that are sufficiently safe and those that are not. As discussed in Section 3.4, a safe online motion planner invoked at time t should continually check the probability that, at any future time τ, (p(s_R(τ)), x_H(τ)) ∈ K. The tracking error bound guarantee from FaSTrack allows us to conduct worst-case analysis on collisions given a human state x_H. Concretely, if no point in the Minkowski sum {x_R + E} is in the collision set with x_H, we can guarantee that the robot is not in collision with the human. The probability of a collision event for any point x_R(t) along a candidate trajectory is then

P_coll(x_R(t)) := P((x_R, x_H) ∈ K)    (12)

Assuming worst-case tracking error bound E, this quantity can be upper-bounded by the total probability that x_H(t) will be in collision with any of the possible robot states x̃_R ∈ {x_R(t) + E}. For each robot planning state x_R ∈ R^{n_R}, we define the set of human states in potential collision with the robot:

H_E(x_R) := {x_H ∈ R^{n_H} : ∃ x̃_R ∈ {x_R + E}, (x̃_R, x_H) ∈ K}    (13)

Running example: Given K and E, H_E(x_R) is the set of human positions within the rectangle of dimensions (l + E_x) × (l + E_y) centered on [p_x, p_y]. A human anywhere in this rectangle could be in collision with the quadcopter. The following result follows directly from the definition of the tracking error bound and a union bound. Proposition 1. The probability of a robot with worst-case tracking error E colliding with the human at any trajectory point x_R(t) is bounded above by the probability mass of x_H(t) contained within H_E(x_R(t)). We consider discrete-time motion plans.
The probability of collision along any such trajectory from current time step k to final step k+K is upper-bounded by

P_coll^{k:k+K} ≤ P̄_coll^{k:k+K} := 1 − ∏_{κ=k}^{k+K} P(x_H^κ ∉ H_E(x_R^κ) | x_H^s ∉ H_E(x_R^s), k ≤ s < κ)    (14)

Evaluating the right-hand side of (14) exactly requires reasoning about the joint distribution of human states over all time steps and its conditional relationship with whether collision has yet occurred. This is equivalent to maintaining a probability distribution over the exponentially large space of trajectories x_H^{k:k+K} that the human might follow. As motion planning occurs in real time, we shall resort to a heuristic approximation of (14). One approach to approximating (14) is to assume that the event x_H^{k_1} ∉ H_E(x_R^{k_1}) is independent of x_H^{k_2} ∉ H_E(x_R^{k_2}), for all k_1 ≠ k_2. This independence assumption is equivalent to removing the conditioning in (14). Unfortunately, this approximation is excessively pessimistic; if there is no collision at time step κ, then collision is also unlikely at time step κ+1, because both human and robot trajectories are continuous. In fact, for sufficiently small time discretization Δt and nonzero collision probabilities at each time step, the total collision probability resulting from an independence assumption would approach 1 exponentially fast in the number of time steps K. We shall refine this approximation by finding a tight lower bound on the right-hand side of (14). Because collision events are correlated in time, we first consider replacing each conditional probability P(x_H^κ ∉ H_E(x_R^κ) | x_H^s ∉ H_E(x_R^s), k ≤ s < κ) by 1 for all κ ∈ {k+1, ..., k+K}.
This effectively lower bounds P̄_coll^{k:k+K} by the worst-case probability of collision at the current time step k:

P̄_coll^{k:k+K} ≥ 1 − P(x_H^k ∉ H_E(x_R^k)) = P(x_H^k ∈ H_E(x_R^k))    (15)

This bound is extremely loose in general, because it completely ignores the possibility of future collision. However, note that the probabilities in the product in (14) may be conditioned in any particular order (not necessarily chronological). This commutativity allows us to generate a family of lower bounds of the form P̄_coll^{k:k+K} ≥ P(x_H^κ ∈ H_E(x_R^κ)), one for each κ ∈ {k, ..., k+K}. Taking the tightest of all of these bounds, we obtain an informative, yet quickly computable, approximator for the sought probability:

P̄_coll^{k:k+K} ≥ max_{κ ∈ {k, ..., k+K}} P(x_H^κ ∈ H_E(x_R^κ)) =: P̂_coll^{k:k+K}    (16)

To summarize, the inequality in (16) lower-bounds P̄_coll^{k:k+K} with the greatest marginal collision probability at any point in the trajectory. We then take this greatest marginal collision probability P̂_coll^{k:k+K} as an approximator of the actual probability of collision over the entire trajectory. In effect, we approximate P_coll^{k:k+K} with a tight lower bound of an upper bound. While this type of approximation may err on the side of optimism, we note that both the robot's ability to replan over time and the fact that P̄_coll^{k:k+K} is an upper bound on the total trajectory collision probability mitigate this potentially underestimated risk.
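The approximation in (16) reduces to taking the largest single-step marginal collision probability along the candidate trajectory. A minimal sketch, with toy dictionaries mapping grid cells to probability mass standing in for the predicted human occupancy, and cell sets standing in for H_E(x_R) (none of this is the paper's actual data structure):

```python
# Sketch of approximation (16): estimate a trajectory's collision
# probability by its largest single-step marginal collision probability.

def step_collision_prob(occupancy, collision_cells):
    """P(x_H in H_E(x_R)): predicted probability mass on colliding cells."""
    return sum(occupancy.get(c, 0.0) for c in collision_cells)

def trajectory_collision_estimate(occupancies, collision_sets):
    """Maximum over time steps of the marginal collision probability."""
    return max(step_collision_prob(occ, cells)
               for occ, cells in zip(occupancies, collision_sets))
```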
Safe online planning under uncertain human predictions. This approximation of collision probability allows the robot to discriminate between valid and invalid candidate trajectories during motion planning. Using the prediction methodology proposed in Section 4, we may quickly generate, at every time t, the marginal probabilities in (16) at each future time κ ∈ {k, ..., k+K}, based on past observations at times 0, ..., k. The planner then computes the instantaneous probability of collision P(x_H^κ ∈ H_E(x_R^κ)) by integrating P(x_H^κ | x_H^{0:k}) over H_E(x_R^κ), and rejects the candidate point x_R^κ if this probability exceeds P_th. Note that for graph-based planners that consider candidate trajectories by generating a graph of time-stamped states, rejecting a candidate edge from this graph is equivalent to rejecting all further trajectories that would contain that edge. This early rejection rule is consistent with the proposed approximation (16) of P_coll^{k:k+K} while preventing unnecessary exploration of candidate trajectories that would ultimately be deemed unsafe. Throughout operation, the robot follows each planned trajectory using the error feedback controller provided by FaSTrack, which ensures that the robot's high-fidelity state representation s_R and the lower-fidelity state used for planning, x_R, differ by no more than the tracking error bound E. This planning and tracking procedure continues until the robot reaches its desired goal state. Running example: Our quadcopter is now required to navigate to a target position shown in Figure 4 without colliding with the human. Our proposed algorithm successfully avoids collisions at all times, replanning to leave greater separation from the human whenever her motion departs from the model. In contrast, robot planning with fixed model confidence is either overly conservative at the expense of time and performance or overly aggressive at the expense of safety.
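The early rejection rule can be sketched as follows; a candidate plan is discarded as soon as any waypoint's marginal collision probability exceeds P_th, so no trajectory containing that waypoint is explored further. The grid-cell occupancy representation is an illustrative assumption, not the paper's implementation:

```python
# Sketch of the early rejection rule used during graph-based planning.

def trajectory_is_valid(occupancies, collision_sets, p_th):
    """Accept a discrete-time plan iff every step's marginal collision
    probability is at most P_th; stop checking at the first violation."""
    for occ, cells in zip(occupancies, collision_sets):
        if sum(occ.get(c, 0.0) for c in cells) > p_th:
            return False   # early rejection: prune this edge and beyond
    return True
```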
Connections to reachability analysis. In this section, we present an alternative, complementary analysis of the overall safety properties of the proposed approach to prediction and motion planning. This discussion is grounded in the language of reachability theory and worst-case analysis of human motion.
Forward reachable set. Throughout this section, we frequently refer to the human's time-indexed forward reachable set, which we define formally in the following. Definition 1. (Forward reachable set) For a dynamical system ẋ = f(x, u) with state trajectories given by the function ξ(x(0), t, u(·)) ≡ x(t), the forward reachable set FRS(x, t) of a state x after time t has elapsed is

FRS(x, t) := {x′ : ∃ u(·), x′ = ξ(x, t, u(·))}

That is, a state x′ is in the forward reachable set of x after time t if it is reachable via some applied control signal u(·). Remark 1. (Recovery of FRS) For P_th = 0 and any finite b, the set of states assigned probability greater than P_th is identical to the forward reachable set, up to discretization errors. This is visualized for low, high, and Bayesian model confidence in Figure 5.
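For intuition, the forward reachable set of the running example's velocity-bounded planar human can be enumerated on a grid: every cell center within distance v_max · t of the current position. The grid resolution and extent below are arbitrary example values:

```python
import math

# Toy enumeration of FRS(x, t) for a planar agent that can move in any
# direction with speed at most v_max (the running example's human model).

def forward_reachable_cells(x, t, v_max, cell=0.25, extent=3.0):
    """Grid-cell approximation of the disc of radius v_max * t about x."""
    reachable = set()
    n = int(extent / cell)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            dx, dy = i * cell, j * cell
            if math.hypot(dx, dy) <= v_max * t:
                reachable.add((x[0] + dx, x[1] + dy))
    return reachable
```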
A sufficient condition for the safety of individual trajectories. In Section 6.2, we construct an approximation to the probability of collision along a trajectory, which we use during motion planning to avoid potentially dangerous states. To make this guarantee of collision avoidance for a motion plan even stronger, it would suffice to ensure that the robot never comes too close to the human's forward reachable set. More precisely, a planned trajectory is safe if {x_R(t) + E} ∩ FRS(x_H, t) = ∅ for every state x_R(t) along a motion plan generated when the human was at state x_H. The proof of this statement follows directly from the properties of the tracking error bound E described in Section 6. While this condition may seem appealing, it is in fact highly restrictive. The requirement of avoiding the full forward reachable set is not always possible in confined spaces; indeed, this was our original motivation for wanting to predict human motion (see Section 3.4). However, despite this shortcoming, the logic behind this sufficient condition for safety provides insight into the effectiveness of our framework.
Recovering the forward reachable set. Though it will not constitute a formal safety guarantee, we analyze the empirical safety properties of our approach by examining how our predicted state distributions over time relate to forward reachable sets. During operation, our belief over model confidence b evolves to match the degree to which the utility model Q_H explains recent human motion. The "time constant" governing the speed of this evolution may be tuned by the system designer to be arbitrarily fast by choosing the parameter ε to be small, as discussed in Section 4.1. Thus, we may safely assume that the belief b(b) places high probability mass on small values of b as soon as the robot observes human motion that is not well explained by Q_H. Figure 6 shows the sets of states with "high enough" (> P_th) predicted probability mass overlaid on the human's forward reachable set at time t, which is a circle of radius v_H · t centered on x_H for the dynamics in our running example. When b is high (b = 10), we observe that virtually all of the probability mass is concentrated in a small number of states in the direction of motion predicted by our utility model. When b is low (b = 0.05), the set of states assigned probability above our collision threshold P_th occupies a much larger fraction of the reachable set. A typical belief b(b) recorded at a moment when the human was roughly moving according to Q_H yields an intermediate set of states. Figure 7 illustrates the evolution of these sets of states over time, for the unmodeled obstacle example of Section 5.4 in which a pedestrian avoids a spill. Each row corresponds to the predicted state distribution at a particular point in time.
Within a row, each column shows the reachable set and the set of states assigned occupancy probability above P_th at a particular future time. Interestingly, as the Bayesian model confidence decreases, which occurs when the pedestrian turns to avoid the spill at t ≈ 6 s, the predicted state distribution assigns high probability to a relatively large set of states; but unlike the low-b predictor, that set of states is oriented toward the known goal. Of course, had b(b) placed even more probability mass on lower values of b, then the Bayesian confidence predictor would converge to the low-confidence one. In addition, we observe that, within each row, as the prediction horizon increases, the area contained within the forward reachable set increases and the fraction of that area contained within the predicted sets decreases. This phenomenon is a direct consequence of our choice of threshold P_th. Had we chosen a smaller threshold value, a larger fraction of the forward reachable set would have been occupied by the lower-b predictors. This observation may be viewed prescriptively. Recalling the sufficient condition for safety of planned trajectories from Section 7.2, if the robot replans every T_replan seconds, we may interpret the fraction of FRS(·, t + T_replan) assigned occupancy probability greater than P_th by the low-confidence predictor as a rough indicator of the safety of an individual motion plan, robust to worst-case human movement. As this fraction tends toward unity, the robot is more and more likely to be safe. However, for any P_th > 0, this fraction approaches zero as T_replan → ∞. This immediately suggests that, if we wish to replan every T_replan seconds, we can achieve a particular level of safety as measured by this fraction by choosing an appropriate threshold P_th. In summary, confidence-aware predictions rapidly place high probability mass on low values of b whenever human motion is not well explained by the utility model Q_H.
Whenever this happens, the resulting predictions encompass a larger fraction of the forward reachable set, and in the limit P_th ↓ 0 we recover the forward reachable set exactly. The larger this fraction, the more closely our approach satisfies the sufficient condition for safety presented in Section 7.2.
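The Bayesian model-confidence update underlying these predictions (Section 4) can be sketched as a discrete Bayes filter over candidate values of b, assuming the Boltzmann observation model P(u | x; b) ∝ exp(b · Q_H(x, u)). The Q-values, the grid of candidate b values, and the prior-mixing rate ε below are toy stand-ins, not values from the paper:

```python
import math

# Discrete Bayes filter over model confidence b (illustrative sketch).

def boltzmann_likelihood(q_vals, u, b):
    """P(u | x; b) under a Boltzmann policy with rationality b."""
    z = sum(math.exp(b * q) for q in q_vals.values())
    return math.exp(b * q_vals[u]) / z

def update_confidence_belief(belief, q_vals, u, eps=0.05):
    # Bayes step: posterior over b proportional to likelihood * prior.
    post = {b: boltzmann_likelihood(q_vals, u, b) * p
            for b, p in belief.items()}
    z = sum(post.values())
    post = {b: p / z for b, p in post.items()}
    # Re-mix toward a uniform prior each step; eps controls how quickly
    # the belief can adapt (the "time constant" of Section 4.1).
    n = len(post)
    return {b: (1.0 - eps) * p + eps / n for b, p in post.items()}
```

Observing actions well explained by Q_H shifts mass toward high b (confident, peaked predictions); poorly explained actions shift mass toward low b (diffuse predictions approaching the reachable set).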
Hardware demonstration. We implemented confidence-aware human motion prediction (Section 4) and integrated it into a real-time, safe probabilistic motion planner (Section 6), all within the Robot Operating System (ROS) software framework of Quigley et al. (2009). To demonstrate the efficacy of our methods, we tested our work on the quadcopter-avoiding-pedestrian example used for illustration throughout this paper. Human trajectories were recorded as (x, y) positions on the ground plane at roughly 235 Hz by an OptiTrack infrared motion capture system, and we used a Crazyflie 2.0 micro-quadcopter, also tracked by the OptiTrack system. Figure 4 illustrates the unmodeled obstacle case from Section 5.4, in which the pedestrian turns to avoid a spill on the ground. Using a low model confidence results in motion plans that suddenly and excessively deviate from the ideal straight-line path when the pedestrian turns to avoid the spill. By contrast, the high-confidence predictor consistently predicts that the pedestrian will walk in a straight line to the goal even when they turn; this almost leads to collision, as shown in detail in Figure 8. Our proposed approach for Bayesian model confidence initially assigns high confidence and predicts that the pedestrian will walk straight to the goal, but when they turn to avoid the spill, the predictions become less confident. This causes the quadcopter to make a minor course correction, shown in further detail in Figure 9.
Conclusion. When robots operate in complex environments in concert with other agents, safety often depends upon the robot's ability to predict the agents' future actions. While this prediction problem may be tractable in some cases, it can be extremely difficult for agents such as people who act with intent. In this paper, we introduce the idea of confidence-aware prediction as a natural coping mechanism for predicting the future actions of intent-driven agents. Our approach uses each measurement of the human's state to reason about the accuracy of its internal model of human decision-making. This reasoning about model confidence is expressed compactly as a Bayesian filter over the possible values of a single parameter, b, which controls the entropy of the robot's model of the human's choice of action. In effect, whenever the human's motion is not well explained by this model, the robot predicts that the human could occupy a larger volume of the state space. We couple this notion of confidence-aware prediction with a reachability-based robust motion planning algorithm, FaSTrack, which quantifies the robot's ability to track a planned reference trajectory. Using this maximum tracking error allows us to bound an approximation of the probability of collision along planned trajectories. In addition, we present a deeper connection between confidence-aware prediction and forward reachable sets, which provides an alternative explanation of the safety of our approach. We demonstrate the proposed methodology on a ROS-based quadcopter testbed in a motion capture arena.
Limitations. There are several important limitations of this work, which we summarize and discuss in the following. 9.1.1. State discretization. As presented, our approach to prediction requires a discrete representation of the human's state space. This can be tractable for the relatively simple dynamical models of human motion we consider in this work. Fortunately, one of the strongest attributes of confidence-aware prediction is that it affords a certain degree of robustness to modeling errors by design. Still, our approach is effectively limited to low-order dynamical models. 9.1.2. FaSTrack complexity. FaSTrack provides a strong safety guarantee vis-à-vis the maximum tracking error that could ever exist between a higher-fidelity dynamical model of the robot and a lower-order model used for motion planning. Unfortunately, the computational complexity of finding this maximum tracking error and the corresponding safety controller scales exponentially with the dimension of the high-fidelity model. In some cases, these dynamics are decomposable and analytic solutions exist (e.g., Chen et al., 2018; Fridovich-Keil et al., 2018), and in other cases conservative approximations may be effective (e.g., Chen et al., 2016; Royo et al., 2018). 9.1.3. Boltzmann distributional assumption. We model the human's choice of control input at each time step as an independent, random draw from a Boltzmann distribution (3). This distributional assumption is motivated by the literature in cognitive science and econometrics and is increasingly common in robotics, yet it may not be accurate in all cases. Maintaining an up-to-date model confidence belief b(b) can certainly mitigate this inaccuracy, but only at the cost of making excessively conservative predictions. 9.1.4. Safety certification.
Our analysis in Section 7 makes connections to forward reachability in an effort to understand the safety properties of our system. As shown, whenever our confidence-aware prediction method detects poor model performance, it quickly yields predictions that approximate the human's forward reachable set. Although this approximation is not perfect, and hence we cannot provide a strong safety certificate, the connection to reachability is in some sense prescriptive. That is, it can be used to guide the choice of collision probability threshold P_th and replanning frequency. However, even if we could provide a strong guarantee of collision avoidance for a particular motion plan, that would not, in general, guarantee that future motion plans would be recursively safe. This recursive property is much more general and, unsurprisingly, more difficult to satisfy.
Future directions. Future work will aim to address each of these shortcomings. We are also interested in extending our methodology to the multi-robot, multi-human setting; our preliminary results are reported by Bajcsy et al. (2018). In addition, we believe that our model confidence inference approach could be integrated with other commonly used probabilistic prediction methods besides the Boltzmann utility model. Finally, we are excited to test our work in hardware in other application spaces, such as manipulation and driving.

Fig. 1. When planning around humans, accurate predictions of human motion (visualized here in pink and blue, representing high and low probability, respectively) are an essential prerequisite for safety. Unfortunately, these approaches may fail to explain all observed motion at runtime (e.g., human avoids unmodeled spill on the ground), leading to inaccurate predictions, and potentially, collisions (left). Our method addresses this by updating its predictive model confidence in real time (right), leading to more conservative motion planning in circumstances when predictions are known to be suspect.
with b_+^k(b, u) := P(b, u | x_H^{0:k}, u_H^{0:k}) the running posterior and b_−^k(b, u) := P(b, u | x_H^{0:k−1}, u_H^{0:k−1})
Fig. 2. Snapshots of pedestrian trajectory and probabilistic model predictions. Top row: Pedestrian moves from the bottom right to a goal marked as a red circle. Middle row: Pedestrian changes course to avoid a spill on the floor. Bottom row: Pedestrian moves to one known goal, then to another, then to a third which the robot has not modeled. The first two columns show predictions for low and high model confidence; the third column shows the predictions using our Bayesian model confidence. For all pedestrian videos, see https://youtu.be/lh_E9rW-MJo.

Fig. 3. Snapshots of Dubins car and probabilistic predictions. Top row: Car moves straight ahead toward one of two known goals (red arrows), staying in its lane. Middle row: Car suddenly swerves to the left to avoid a pothole. Bottom row: Car turns to the right, away from the only known goal. The left and center columns show results for low and high confidence predictors, respectively, and the right column shows our approach using Bayesian inferred model confidence. For all Dubins car videos, see https://youtu.be/sAJKNnP42fQ.

Fig. 4. Scenario from the middle row of Figure 2, visualized with the robot's trajectory. When b is low and the robot is not confident, it makes large deviations from its path to accommodate the human. When b is high, the robot refuses to change course and comes dangerously close to the human. With inferred model confidence, the robot balances safety and efficiency with a slight deviation around the human.

Fig. 5. The human (black dot) is moving west towards a goal. Visualized are the predicted state distributions for 1 second into the future when using low, high, and Bayesian model confidence. Higher saturation indicates higher likelihood of occupancy. The dashed circle represents the pedestrian's 1 second forward reachable set.

Fig. 6. Visualization of the states with probability greater than or equal to the collision threshold, P_th = 0.01. The human's forward reachable set includes the set of states assigned probability greater than P_th. We show these "high probability" predicted states for predictors with fixed low and high b, as well as our Bayesian-inferred b.

Fig. 7. The human (black dot) is walking towards the known goal (red dot) but has to avoid an unmodeled coffee spill on the ground. Here we show snapshots of the predictions at various future times (columns) as the human walks around in real time (rows). The visualized states have probability greater than or equal to P_th = 0.01. Each panel displays the human prediction under low confidence (in yellow), high confidence (in dark purple), and Bayesian confidence (colored as per the most likely b value), as well as the forward reachable set. The human's actual trajectory is shown in red.

Fig. 8. Predicting with fixed b (in this case, b = 20) can yield highly inaccurate predictions (and worse, confidently inaccurate ones). The subsequent motion plans may not be safe; here, poor prediction quality leads to a collision.

Fig. 9. Inferring b leads to predicted state distributions whose entropy increases whenever the utility model Q_H fails to explain observed human motion. The resulting predictions are more robust to modeling errors, resulting in safer motion plans. Here, the quadcopter successfully avoids the pedestrian even when they turn unexpectedly.
The International Journal of Robotics Research 39(2-3)
RUDDER: Return Decomposition for Delayed Rewards
Jose A. Arjona-Medina, Michael Gillhofer, Michael Widrich, Thomas Unterthiner, Johannes Brandstetter, Sepp Hochreiter†
LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria
†Also at the Institute of Advanced Research in Artificial Intelligence (IARAI)
Abstract

We propose RUDDER, a novel reinforcement learning approach for delayed rewards in finite Markov decision processes (MDPs). In MDPs the Q-values are equal to the expected immediate reward plus the expected future rewards. The latter are related to bias problems in temporal difference (TD) learning and to high variance problems in Monte Carlo (MC) learning. Both problems are even more severe when rewards are delayed. RUDDER aims at making the expected future rewards zero, which simplifies Q-value estimation to computing the mean of the immediate reward. We propose the following two new concepts to push the expected future rewards toward zero. (i) Reward redistribution that leads to return-equivalent decision processes with the same optimal policies and, when optimal, zero expected future rewards. (ii) Return decomposition via contribution analysis, which transforms the reinforcement learning task into a regression task at which deep learning excels. On artificial tasks with delayed rewards, RUDDER is significantly faster than MC and exponentially faster than Monte Carlo Tree Search (MCTS), TD(λ), and reward shaping approaches. At Atari games, RUDDER on top of a Proximal Policy Optimization (PPO) baseline improves the scores, which is most prominent at games with delayed rewards. Source code is available at https://github.com/ml-jku/rudder and demonstration videos at https://goo.gl/EQerZV.
1 Introduction

Assigning credit for a received reward to past actions is central to reinforcement learning [128]. A great challenge is to learn long-term credit assignment for delayed rewards [65, 59, 46, 106]. Delayed rewards are often episodic or sparse and common in real-world problems [97, 76]. For Markov decision processes (MDPs), the Q-value is equal to the expected immediate reward plus the expected future reward. For Q-value estimation, the expected future reward leads to biases in temporal difference (TD) and high variance in Monte Carlo (MC) learning. For delayed rewards, TD requires exponentially many updates to correct the bias, where the number of updates is exponential in the number of delay steps. For MC learning, the number of states affected by a delayed reward can grow exponentially with the number of delay steps. (Both statements are proved after Theorems A8 and A10 in the appendix.) An MC estimate of the expected future reward has to average over all possible future trajectories, if rewards, state transitions, or policies are probabilistic. Delayed rewards make an MC estimate much harder.
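To see why delay slows TD learning, consider the simplest deterministic case, a chain with a single reward on the final transition: under one forward sweep of tabular TD(0) per iteration, the reward information travels back only one state per sweep, so the start state needs a number of sweeps equal to the delay before its value estimate becomes nonzero at all. (The exponential blow-up claimed above concerns the stochastic case and is proved after Theorem A8.) A toy illustration, not from the paper:

```python
# Deterministic T-state chain with reward 1.0 on the final transition.
# One forward sweep of tabular TD(0) propagates the reward back by
# exactly one state.

def td_sweeps_until_start_updates(T, alpha=1.0):
    """Sweeps of TD(0) until V[start] first becomes nonzero."""
    V = [0.0] * (T + 1)          # V[T] is the terminal state's value
    sweeps = 0
    while V[0] == 0.0:
        sweeps += 1
        for s in range(T):       # sweep states in order s = 0, ..., T-1
            r = 1.0 if s == T - 1 else 0.0
            V[s] += alpha * (r + V[s + 1] - V[s])
    return sweeps
```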
Authors contributed equally. arXiv:1806.07857v3 [cs.LG], 10 Sep 2019.
The main goal of our approach is to construct an MDP that has expected future rewards equal to zero. If this goal is achieved, Q-value estimation simplifies to computing the mean of the immediate rewards. To push the expected future rewards to zero, we require two new concepts. The first new concept is reward redistribution to create return-equivalent MDPs, which are characterized by having the same optimal policies. An optimal reward redistribution should transform a delayed reward MDP into a return-equivalent MDP with zero expected future rewards. However, expected future rewards equal to zero are in general not possible for MDPs. Therefore, we introduce sequence-Markov decision processes (SDPs), for which reward distributions need not be Markov. We construct a reward redistribution that leads to a return-equivalent SDP with a second-order Markov reward distribution and expected future rewards that are equal to zero. For these return-equivalent SDPs, Q-value estimation simplifies to computing the mean. Nevertheless, the Q-values or advantage functions can be used for learning optimal policies. The second new concept is return decomposition and its realization via contribution analysis. This concept serves to efficiently construct a proper reward redistribution, as described in the next section. Return decomposition transforms a reinforcement learning task into a regression task, where the sequence-wide return must be predicted from the whole state-action sequence. The regression task identifies which state-action pairs contribute to the return prediction and, therefore, receive a redistributed reward. Learning the regression model uses only completed episodes as its training set, and therefore avoids problems with unknown future state-action trajectories. Even for sub-optimal reward redistributions, we obtain an enormous speed-up of Q-value learning if relevant reward-causing state-action pairs are identified. We propose RUDDER (RetUrn Decomposition for DElayed Rewards) for learning with reward redistributions that are obtained via return decompositions.
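A minimal sketch of return decomposition via contribution analysis: evaluate a return predictor g on growing prefixes of the state-action sequence and redistribute reward as prediction differences, so the redistributed rewards sum to the predicted sequence-wide return. The hand-written predictor below is a toy stand-in for a trained deep model such as RUDDER's LSTM:

```python
# Redistributed reward at step t = g(tau_{1:t}) - g(tau_{1:t-1}),
# where g predicts the sequence-wide return from a prefix.

def redistribute(episode, g):
    """Redistributed rewards from prediction differences over prefixes."""
    rewards, prev = [], 0.0
    for t in range(1, len(episode) + 1):
        cur = g(episode[:t])
        rewards.append(cur - prev)
        prev = cur
    return rewards

# Toy predictor: the return is fully explained by whether a 'key' action
# appears in the prefix (e.g. picking up a key that opens a door later).
g = lambda prefix: float(sum(1 for (s, a) in prefix if a == "key"))

episode = [("s0", "noop"), ("s1", "key"), ("s2", "noop")]
print(redistribute(episode, g))   # -> [0.0, 1.0, 0.0]
```

The delayed reward is moved to the step that caused it, so the expected future reward after that step is zero and Q-value estimation reduces to averaging immediate rewards.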
To get an intuition for our approach, assume you repair pocket watches and then sell them. For a particular brand of watch you have to decide whether repairing pays off. The sales price is known, but you have unknown costs, i.e. negative rewards, caused by repair and delivery. The advantage function is the sales price minus the expected immediate repair costs minus the expected future delivery costs. Therefore, you want to know whether the advantage function is positive. Why is zeroing the expected future costs beneficial? If the average delivery costs are known, then they can be added to the repair costs, resulting in zero future costs. Using your repairing experiences, you just have to average over the repair costs to know whether repairing pays off. Why is return decomposition so efficient? Because of pattern recognition. For zero future costs, you have to estimate the expected brand-related delivery costs, which are e.g. packing costs. These brand-related costs are superimposed by brand-independent general delivery costs for shipment (e.g. time spent for delivery). Assume that general delivery costs are indicated by patterns, e.g. weather conditions, which delay delivery. Using a training set of completed deliveries, supervised learning can identify these patterns and attribute costs to them. This is return decomposition. In this way, only brand-related delivery costs remain and, therefore, can be estimated more efficiently than by MC.
Related Work. Our new learning algorithm gradually changes the reward redistribution during learning, which is known as shaping [120, 128]. In contrast to RUDDER, potential-based shaping like reward shaping [87], look-ahead advice, and look-back advice [144] use a fixed reward redistribution. Moreover, since these methods keep the original reward, the resulting reward redistribution is not optimal, as described in the next section, and learning can still be exponentially slow. A monotonic positive reward transformation [91] also changes the reward distribution but is neither assured to keep optimal policies nor to have expected future rewards of zero. Disentangled rewards keep optimal policies but are neither environment nor policy specific, and can therefore in general not achieve expected future rewards equal to zero [28]. Successor features decouple environment and policy from rewards, but changing the reward changes the optimal policies [7, 6]. Temporal Value Transport (TVT) uses an attentional memory mechanism to learn a value function that serves as fictitious reward [59]. However, expected future rewards are not close to zero and optimal policies are not guaranteed to be kept. Reinforcement learning tasks have been changed into supervised tasks [108, 8, 112]. For example, a model that predicts the return can supply update signals for a policy by sensitivity analysis. This is known as "backpropagation through a model" [86, 101, 102, 142, 111, 4, 5]. In contrast to these approaches, (i) we use contribution analysis instead of sensitivity analysis, and (ii) we use the whole state-action sequence to predict its associated return.
2 Reward Redistribution and Novel Learning Algorithms
Reward redistribution is the main new concept to achieve expected future rewards equal to zero.
We start by introducing MDPs, return-equivalent sequence-Markov decision processes (SDPs), and
reward redistributions. Furthermore, optimal reward redistribution is defined and novel learning
algorithms based on reward redistributions are introduced.
MDP Definitions and Return-Equivalent Sequence-Markov Decision Processes (SDPs). A finite
Markov decision process (MDP) $\mathcal{P}$ is a 5-tuple $\mathcal{P} = (\mathcal{S}, \mathcal{A}, \mathcal{R}, p, \gamma)$ of finite sets $\mathcal{S}$ of states
$s$ (random variable $S_t$ at time $t$), $\mathcal{A}$ of actions $a$ (random variable $A_t$), and $\mathcal{R}$ of rewards $r$ (random
variable $R_{t+1}$). Furthermore, $\mathcal{P}$ has transition-reward distributions $p(S_{t+1}=s', R_{t+1}=r \mid S_t=s, A_t=a)$
conditioned on state-actions, and a discount factor $\gamma \in [0,1]$. The marginals
are $p(r \mid s,a) = \sum_{s'} p(s', r \mid s,a)$ and $p(s' \mid s,a) = \sum_{r} p(s', r \mid s,a)$. The expected reward
is $r(s,a) = \sum_{r} r \, p(r \mid s,a)$. The return $G_t$ is $G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$, while for finite horizon
MDPs with sequence length $T$ and $\gamma = 1$ it is $G_t = \sum_{k=0}^{T-t} R_{t+k+1}$. A Markov policy is given
as action distribution $\pi(A_t=a \mid S_t=s)$ conditioned on states. We often equip an MDP $\mathcal{P}$
with a policy $\pi$ without explicitly mentioning it. The action-value function $q^{\pi}(s,a)$ for policy $\pi$
is $q^{\pi}(s,a) = \mathrm{E}_{\pi}[G_t \mid S_t=s, A_t=a]$. The goal of learning is to maximize the expected return at
time $t=0$, that is $v_0^{\pi} = \mathrm{E}_{\pi}[G_0]$. The optimal policy is $\pi^{*} = \operatorname{argmax}_{\pi} v_0^{\pi}$. A sequence-Markov
decision process (SDP) is defined as a decision process which is equipped with a Markov policy and
has Markov transition probabilities but a reward that is not required to be Markov. Two SDPs $\tilde{\mathcal{P}}$ and
$\mathcal{P}$ are return-equivalent if (i) they differ only in their reward distribution and (ii) they have the same
expected return at $t=0$ for each policy $\pi$: $\tilde{v}_0^{\pi} = v_0^{\pi}$. They are strictly return-equivalent if they have
the same expected return for every episode and for each policy $\pi$. Strictly return-equivalent SDPs
are return-equivalent. Return-equivalent SDPs have the same optimal policies. For more details see
Section A2.2 in the appendix.
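As a minimal numerical illustration of strict return-equivalence (a sketch with hypothetical reward values, not taken from the paper): at γ = 1, two reward sequences for the same episode are strictly return-equivalent exactly when their returns coincide.

```python
# Sketch: strict return-equivalence of two reward sequences for one episode.
# Reward values are hypothetical, chosen only to share the same return.

def episode_return(rewards, gamma=1.0):
    """Return G_0 = sum_k gamma^k * R_{k+1} for one episode."""
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r
    return g

# Delayed reward: everything arrives at sequence end.
delayed = [0.0, 0.0, 0.0, 10.0]
# A redistribution of the same return over the episode.
redistributed = [2.0, 5.0, 3.0, 0.0]

# Same return per episode => strictly return-equivalent (gamma = 1).
assert episode_return(delayed) == episode_return(redistributed)
```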
Reward Redistribution. Strictly return-equivalent SDPs $\tilde{\mathcal{P}}$ and $\mathcal{P}$ can be constructed by re-
ward redistributions. A reward redistribution given an SDP $\tilde{\mathcal{P}}$ is a procedure that redistributes
for each sequence $s_0, a_0, \ldots, s_T, a_T$ the realization of the sequence-associated return variable
$\tilde{G}_0 = \sum_{t=0}^{T} \tilde{R}_{t+1}$ or its expectation along the sequence. Later we will introduce a reward re-
distribution that depends on the SDP $\tilde{\mathcal{P}}$. The reward redistribution creates a new SDP $\mathcal{P}$ with the
redistributed reward $R_{t+1}$ at time $(t+1)$ and the return variable $G_0 = \sum_{t=0}^{T} R_{t+1}$. A reward redistri-
bution is second order Markov if the redistributed reward $R_{t+1}$ depends only on $(s_{t-1}, a_{t-1}, s_t, a_t)$.
If the SDP $\mathcal{P}$ is obtained from the SDP $\tilde{\mathcal{P}}$ by reward redistribution, then $\tilde{\mathcal{P}}$ and $\mathcal{P}$ are strictly return-
equivalent. The next theorem states that the optimal policies are still the same for $\tilde{\mathcal{P}}$ and $\mathcal{P}$ (proof
after Theorem S2 in the appendix).
Theorem 1. Both the SDP $\tilde{\mathcal{P}}$ with delayed reward $\tilde{R}_{t+1}$ and the SDP $\mathcal{P}$ with redistributed reward
$R_{t+1}$ have the same optimal policies.
Optimal Reward Redistribution with Expected Future Rewards Equal to Zero. We move on
to the main goal of this paper: to derive an SDP via reward redistribution that has expected future
rewards equal to zero and, therefore, no delayed rewards. At time $(t-1)$ the immediate reward is $R_t$
with expectation $r(s_{t-1}, a_{t-1})$. We define the expected future rewards $\kappa(m, t-1)$ at time $(t-1)$ as
the expected sum of future rewards from $R_{t+1}$ to $R_{t+1+m}$.
Definition 1. For $1 \le t \le T$ and $0 \le m \le T-t$, the expected sum of delayed rewards at time
$(t-1)$ in the interval $[t+1, t+m+1]$ is defined as $\kappa(m, t-1) = \mathrm{E}_{\pi}\big[\sum_{\tau=0}^{m} R_{t+1+\tau} \mid s_{t-1}, a_{t-1}\big]$.
For every time point $t$, the expected future rewards $\kappa(T-t-1, t)$ given $(s_t, a_t)$ is the expected sum
of future rewards until sequence end, that is, in the interval $[t+2, T+1]$. For MDPs, the Bellman
equation for $Q$-values becomes $q^{\pi}(s_t, a_t) = r(s_t, a_t) + \kappa(T-t-1, t)$. We aim to derive an MDP
with $\kappa(T-t-1, t) = 0$, which gives $q^{\pi}(s_t, a_t) = r(s_t, a_t)$. In this case, learning the $Q$-values
simplifies to estimating the expected immediate reward $r(s_t, a_t) = \mathrm{E}[R_{t+1} \mid s_t, a_t]$. Hence, the
reinforcement learning task reduces to computing the mean, e.g. the arithmetic mean, for each
state-action pair $(s_t, a_t)$. A reward redistribution is defined to be optimal, if $\kappa(T-t-1, t) = 0$
for $0 \le t \le T-1$. In general, an optimal reward redistribution violates the Markov assumptions
and the Bellman equation does not hold (proof after Theorem A3 in the appendix). Therefore, we
will consider SDPs in the following. The next theorem states that a delayed reward MDP $\tilde{\mathcal{P}}$ with
a particular policy $\pi$ can be transformed into a return-equivalent SDP $\mathcal{P}$ with an optimal reward
redistribution.
Theorem 2. We assume a delayed reward MDP $\tilde{\mathcal{P}}$, where the accumulated reward is given at
sequence end. A new SDP $\mathcal{P}$ is obtained by a second order Markov reward redistribution, which
ensures that $\mathcal{P}$ is return-equivalent to $\tilde{\mathcal{P}}$. For a specific $\pi$, the following two statements are equivalent:
(I) $\kappa(T-t-1, t) = 0$, i.e. the reward redistribution is optimal,
(II) $\mathrm{E}[R_{t+1} \mid s_{t-1}, a_{t-1}, s_t, a_t] = \tilde{q}^{\pi}(s_t, a_t) - \tilde{q}^{\pi}(s_{t-1}, a_{t-1})$. (1)
An optimal reward redistribution fulfills for $1 \le t \le T$ and $0 \le m \le T-t$: $\kappa(m, t-1) = 0$.
The proof can be found after Theorem A4 in the appendix. Equation $\kappa(T-t-1, t) = 0$ implies that
the new SDP $\mathcal{P}$ has no delayed rewards, that is, $\mathrm{E}[R_{t+1+\tau} \mid s_{t-1}, a_{t-1}] = 0$ for $0 \le \tau \le T-t-1$
(Corollary A1 in the appendix). The SDP $\mathcal{P}$ has no delayed rewards since no state-action pair can
increase or decrease the expectation of a future reward. Equation (1) shows that for an optimal reward
redistribution the expected reward has to be the difference of consecutive $Q$-values of the original
delayed reward. The optimal reward redistribution is second order Markov since the expectation of
$R_{t+1}$ at time $(t+1)$ depends on $(s_{t-1}, a_{t-1}, s_t, a_t)$.
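Equation (1) can be illustrated numerically. Assuming access to the Q-values of the delayed-reward MDP along one trajectory (the values below are hypothetical), the optimal expected redistributed reward is the difference of consecutive Q-values, and the telescoping sum recovers the expected return:

```python
# Sketch of Eq. (1): expected redistributed reward as Q-value differences.
# q_tilde[t] stands for q~(s_t, a_t) along one trajectory; the numbers are
# hypothetical, chosen only so that q_tilde[-1] equals the expected return.

q_tilde = [4.0, 6.0, 5.5, 9.0]   # q~(s_0,a_0), ..., q~(s_T,a_T)

# R_{t+1} = q~(s_t,a_t) - q~(s_{t-1},a_{t-1}), with q~(s_{-1},a_{-1}) := 0
redistributed = [q_tilde[0]] + [
    q_tilde[t] - q_tilde[t - 1] for t in range(1, len(q_tilde))
]

# Telescoping sum: the redistributed rewards add up to q~(s_T,a_T), the
# expected return of the delayed-reward MDP at gamma = 1.
assert abs(sum(redistributed) - q_tilde[-1]) < 1e-12
```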
The next theorem states the major advantage of an optimal reward redistribution: $\tilde{q}^{\pi}(s_t, a_t)$ can be
estimated with an offset that depends only on $s_t$ by estimating the expected immediate redistributed
reward. Thus, $Q$-value estimation becomes trivial and the advantage function of the MDP $\tilde{\mathcal{P}}$ can
be readily computed.
Theorem 3. If the reward redistribution is optimal, then the $Q$-values of the SDP $\mathcal{P}$ are given by
$q^{\pi}(s_t, a_t) = r(s_t, a_t) = \tilde{q}^{\pi}(s_t, a_t) - \mathrm{E}_{s_{t-1}, a_{t-1}}[\tilde{q}^{\pi}(s_{t-1}, a_{t-1}) \mid s_t] = \tilde{q}^{\pi}(s_t, a_t) - \psi^{\pi}(s_t)$. (2)
The SDP $\mathcal{P}$ and the original MDP $\tilde{\mathcal{P}}$ have the same advantage function. Using a behavior policy
$\breve{\pi}$ the expected immediate reward is
$\mathrm{E}_{\breve{\pi}}[R_{t+1} \mid s_t, a_t] = \tilde{q}^{\pi}(s_t, a_t) - \psi^{\pi, \breve{\pi}}(s_t)$. (3)
The proof can be found after Theorem A5 in the appendix. If the reward redistribution is not optimal,
then $\kappa(T-t-1, t)$ measures the deviation of the $Q$-value from $r(s_t, a_t)$. This theorem justifies
several learning methods based on reward redistribution presented in the next paragraph.
Novel Learning Algorithms Based on Reward Redistributions. We assume $\gamma = 1$ and a finite
horizon or an absorbing state original MDP $\tilde{\mathcal{P}}$ with delayed rewards. For this setting we introduce
new reinforcement learning algorithms. They are gradually changing the reward redistribution during
learning and are based on the estimations in Theorem 3. These algorithms are also valid for non-
optimal reward redistributions, since the optimal policies are kept (Theorem 1). Convergence of
RUDDER learning can under standard assumptions be proven by the stochastic approximation for two
time-scale update rules [17, 64]. Learning consists of an LSTM and a $Q$-value update. Convergence
proofs to an optimal policy are difficult, since locally stable attractors may not correspond to optimal
policies.
According to Theorem 1, reward redistributions keep the optimal policies. Therefore, even non-
optimal reward redistributions ensure correct learning. However, an optimal reward redistribution
speeds up learning considerably. Reward redistributions can be combined with methods that use
$Q$-value ranks or advantage functions. We consider (A) $Q$-value estimation, (B) policy gradients,
and (C) $Q$-learning. Type (A) methods estimate $Q$-values and are divided into variants (i), (ii), and
(iii). Variant (i) assumes an optimal reward redistribution and estimates $\tilde{q}^{\pi}(s_t, a_t)$ with an offset
depending only on $s_t$. The estimates are based on Theorem 3, either by on-policy direct $Q$-value
estimation according to Eq. (2) or by off-policy immediate reward estimation according to Eq. (3).
Variant (ii) methods assume a non-optimal reward redistribution and correct Eq. (2) by estimating $\kappa$.
Variant (iii) methods use eligibility traces for the redistributed reward. RUDDER learning can be
based on policies like "greedy in the limit with infinite exploration" (GLIE) or "restricted rank-based
randomized" (RRR) [118]. GLIE policies change toward greediness with respect to the $Q$-values
during learning. For more details on these learning approaches see Section A2.7.1 in the appendix.
Type (B) methods replace in the expected updates $\mathrm{E}[\nabla_{\theta} \log \pi(a \mid s; \theta) \, q^{\pi}(s,a)]$ of policy gradients
the value $q^{\pi}(s,a)$ by an estimate of $r(s,a)$ or by a sample of the redistributed reward. The offset
$\psi^{\pi}(s)$ in Eq. (2) or $\psi^{\pi, \breve{\pi}}(s)$ in Eq. (3) reduces the variance as baseline normalization does. These
methods can be extended to Trust Region Policy Optimization (TRPO) [113] as used in Proximal
Policy Optimization (PPO) [115]. The type (C) method is $Q$-learning with the redistributed reward.
Here, $Q$-learning is justified if immediate and future reward are drawn together, as typically done.
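Variant (i) of the type (A) methods reduces Q-value estimation to averaging the immediate redistributed reward per state-action pair. A minimal tabular sketch, with a hypothetical learning rate and toy data:

```python
from collections import defaultdict

# Sketch of type (A), variant (i): with an optimal reward redistribution,
# q(s, a) = r(s, a), so estimating Q reduces to averaging the immediate
# redistributed reward per (s, a). The learning rate is a hypothetical choice.
alpha = 0.1
q = defaultdict(float)

def update(state, action, redistributed_reward):
    key = (state, action)
    # Exponential moving average of the immediate redistributed reward.
    q[key] += alpha * (redistributed_reward - q[key])

# Toy usage on a single repeated transition:
for _ in range(1000):
    update("s0", "a0", 1.0)
assert abs(q[("s0", "a0")] - 1.0) < 1e-3
```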
3 Constructing Reward Redistributions by Return Decomposition
We now propose methods to construct reward redistributions. Learning with non-optimal reward
redistributions does work since the optimal policies do not change according to Theorem 1. However,
reward redistributions that are optimal considerably speed up learning, since future expected rewards
introduce biases in TD methods and high variances in MC methods. The expected optimal redis-
tributed reward is the difference of $Q$-values according to Eq. (1). The more a reward redistribution
deviates from these differences, the larger are the absolute $\kappa$-values and, in turn, the less optimal
the reward redistribution gets. Consequently, to construct a reward redistribution which is close to
optimal we aim at identifying the largest $Q$-value differences.
Reinforcement Learning as Pattern Recognition. We want to transform the reinforcement learn-
ing problem into a pattern recognition task to employ deep learning approaches. The sum of the
$Q$-value differences gives the difference between the expected return at sequence begin and the expected
return at sequence end (telescope sum). Thus, $Q$-value differences allow predicting the expected
return of the whole state-action sequence. Identifying the largest $Q$-value differences reduces the
prediction error most. $Q$-value differences are assumed to be associated with patterns in state-action
transitions. The largest $Q$-value differences are expected to be found more frequently in sequences
with very large or very low return. The resulting task is to predict the expected return from the whole
sequence and to identify which state-action transitions have contributed the most to the prediction. This
pattern recognition task serves to construct a reward redistribution, where the redistributed reward
corresponds to the different contributions. The next paragraph shows how the return is decomposed
and redistributed along the state-action sequence.
Return Decomposition. The return decomposition idea is that a function $g$ predicts the expectation
of the return for a given state-action sequence (the return for the whole sequence). The function $g$ is
neither a value nor an action-value function since it predicts the expected return when the whole
sequence is given. With the help of $g$, either the predicted value or the realization of the return is
redistributed over the sequence. A state-action pair receives as redistributed reward its contribution
to the prediction, which is determined by contribution analysis. We use contribution analysis
since sensitivity analysis has serious drawbacks: local minima, instabilities, exploding or vanishing
gradients, and proper exploration [48, 110]. The major drawback is that the relevance of actions is
missed, since sensitivity analysis does not consider the contribution of actions to the output, but only
their effect on the output when slightly perturbing them. Contribution analysis determines how much
a state-action pair contributes to the final prediction. We can use any contribution analysis method,
but we specifically consider three methods: (A) differences of return predictions, (B) integrated
gradients (IG) [125], and (C) layer-wise relevance propagation (LRP) [3]. For (A), $g$ must try
to predict the sequence-wide return at every time step. The redistributed reward is given by the
difference of consecutive predictions. The function $g$ can be decomposed into past, immediate, and
future contributions to the return. Consecutive predictions share the same past and the same future
contributions, except for two immediate state-action pairs. Thus, in the difference of consecutive
predictions, contributions cancel except for the two immediate state-action pairs. Even for imprecise
predictions of future contributions to the return, contribution analysis is more precise, since prediction
errors cancel out. Methods (B) and (C) rely on information later in the sequence for determining
the contribution and thereby may introduce a non-Markov reward. The reward can be viewed as
probabilistic but is prone to have high variance. Therefore, we prefer method (A).
Explaining Away Problem. We still have to tackle the problem that reward-causing actions do
not receive redistributed rewards since they are explained away by later states. To describe the
problem, assume an MDP $\tilde{\mathcal{P}}$ with the only reward at sequence end. To ensure the Markov property,
states in $\tilde{\mathcal{P}}$ have to store the reward contributions of previous state-actions; e.g. $s_T$ has to store all
previous contributions such that the expectation $\tilde{r}(s_T, a_T)$ is Markov. The explaining away problem
is that later states are used for return prediction, while reward-causing earlier actions are missed.
To avoid explaining away, we define a difference function $\Delta(s_{t-1}, a_{t-1}, s_t, a_t)$ between a state-
action pair $(s_t, a_t)$ and its predecessor $(s_{t-1}, a_{t-1})$. That $\Delta$ is a function of $(s_{t-1}, a_{t-1}, s_t, a_t)$ is
justified by Eq. (1), which ensures that such $\Delta$s allow an optimal reward redistribution. The sequence
of differences is $\Delta_{0:T} := \big(\Delta(s_{-1}, a_{-1}, s_0, a_0), \ldots, \Delta(s_{T-1}, a_{T-1}, s_T, a_T)\big)$. The components
$\Delta$ are assumed to be statistically independent from each other, therefore $\Delta$ cannot store reward
contributions of previous $\Delta$. The function $g$ should predict the return by $g(\Delta_{0:T}) = \tilde{r}(s_T, a_T)$ and
can be decomposed into $g(\Delta_{0:T}) = \sum_{t=0}^{T} h_t$. The contributions are $h_t = h(\Delta(s_{t-1}, a_{t-1}, s_t, a_t))$
for $0 \le t \le T$. For the redistributed rewards $R_{t+1}$, we ensure $\mathrm{E}[R_{t+1} \mid s_{t-1}, a_{t-1}, s_t, a_t] = h_t$.
The reward $\tilde{R}_{T+1}$ of $\tilde{\mathcal{P}}$ is probabilistic and the function $g$ might not be perfect, therefore neither
$g(\Delta_{0:T}) = \tilde{r}_{T+1}$ for the return realization $\tilde{r}_{T+1}$ nor $g(\Delta_{0:T}) = \tilde{r}(s_T, a_T)$ for the expected return
holds. Therefore, we need to introduce the compensation $\tilde{r}_{T+1} - \sum_{\tau=0}^{T} h(\Delta(s_{\tau-1}, a_{\tau-1}, s_{\tau}, a_{\tau}))$
as an extra reward $R_{T+2}$ at time $T+2$ to ensure strictly return-equivalent SDPs. If $g$ was perfect,
then it would predict the expected return, which could be redistributed. The new redistributed rewards
$R_{t+1}$ are based on the return decomposition, since they must have the contributions $h_t$ as mean:
$\mathrm{E}[R_1 \mid s_0, a_0] = h_0$, $\mathrm{E}[R_{t+1} \mid s_{t-1}, a_{t-1}, s_t, a_t] = h_t$ for $0 < t \le T$, and $R_{T+2} = \tilde{R}_{T+1} - \sum_{t=0}^{T} h_t$,
where the realization $\tilde{r}_{T+1}$ is replaced by its random variable $\tilde{R}_{T+1}$. If the prediction of $g$ is perfect,
then we can redistribute the expected return via the prediction. Theorem 2 holds also for the correction
$R_{T+2}$ (see Theorem A6 in the appendix). A $g$ with zero prediction errors results in an optimal reward
redistribution. Small prediction errors lead to reward redistributions close to an optimal one.
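The construction above can be sketched as follows; the contributions h_t and the realized return are hypothetical numbers, and the prediction function g is assumed to have already produced the h_t:

```python
# Sketch of return decomposition with compensation: contributions h_t come
# from a (here hypothetical, hand-picked) prediction function g; the residual
# is appended as extra reward R_{T+2} so that the redistributed episode has
# exactly the realized return r~_{T+1}.

h = [1.0, 0.5, 2.0, 0.5]       # h_0, ..., h_T from contribution analysis
realized_return = 5.0          # realization r~_{T+1} of the delayed reward

correction = realized_return - sum(h)   # extra reward R_{T+2}
redistributed = h + [correction]

# Strict return-equivalence: redistributed rewards sum to the realized return.
assert abs(sum(redistributed) - realized_return) < 1e-12
```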
RUDDER: Return Decomposition using LSTM. RUDDER uses a Long Short-Term Memory
(LSTM) network for return decomposition and the resulting reward redistribution. RUDDER consists
of three phases. (I) Safe exploration. Exploration sequences should generate LSTM training samples
with delayed rewards by avoiding low $Q$-values during a particular time interval. Low $Q$-values hint
at states where the agent gets stuck. Parameters comprise starting time, length, and $Q$-value threshold.
(II) Lessons replay buffer for training the LSTM. If RUDDER's safe exploration discovers an
episode with unseen delayed rewards, it is secured in a lessons replay buffer [74]. Unexpected
rewards are indicated by a large prediction error of the LSTM. For LSTM training, episodes with
larger errors are sampled more often from the buffer, similar to prioritized experience replay [109].
(III) LSTM and return decomposition. An LSTM learns to predict the sequence-wide return at every
time step and, thereafter, return decomposition uses differences of return predictions (contribution
analysis method (A)) to construct a reward redistribution. For more details see Section A8.4 in the
appendix.
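Phase (II) can be sketched as follows; the buffer layout and the error-proportional sampling scheme are illustrative assumptions, not the paper's exact implementation:

```python
import random

# Sketch of phase (II): a lessons replay buffer that samples episodes with
# probability proportional to their current return-prediction error, so that
# surprising delayed-reward episodes are replayed more often.

class LessonsBuffer:
    def __init__(self):
        self.episodes = []   # list of [episode, prediction_error]

    def add(self, episode, prediction_error):
        self.episodes.append([episode, prediction_error])

    def sample(self):
        # Error-proportional sampling (a small floor avoids zero weights).
        errors = [max(err, 1e-8) for _, err in self.episodes]
        total = sum(errors)
        r, acc = random.random() * total, 0.0
        for (episode, _), err in zip(self.episodes, errors):
            acc += err
            if r <= acc:
                return episode
        return self.episodes[-1][0]

buf = LessonsBuffer()
buf.add("rare_delayed_reward_episode", prediction_error=10.0)
buf.add("ordinary_episode", prediction_error=0.1)
# Episodes with larger error are sampled far more often.
samples = [buf.sample() for _ in range(1000)]
assert samples.count("rare_delayed_reward_episode") > samples.count("ordinary_episode")
```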
Feedforward Neural Networks (FFNs) vs. LSTMs. In contrast to LSTMs, FFNs are not suited
for processing sequences. Nevertheless, FFNs can learn an action-value function, which enables
contribution analysis by differences of predictions. However, this leads to serious problems through
spurious contributions that hinder learning. For example, any contribution would be incorrect if
the true expectation of the return did not change. Therefore, prediction errors might falsely cause
contributions, leading to spurious rewards. FFNs are prone to such prediction errors since they have
to predict the expected return again and again from each different state-action pair and cannot use
stored information. In contrast, the LSTM is less prone to producing spurious rewards: (i) The LSTM
will only learn to store information if a state-action pair provides strong evidence for a change in the
expected return. If information is stored, then the internal states and, therefore, also the predictions
change; otherwise the predictions stay unchanged. Hence, storing events receives a contribution and a
corresponding reward, while by default nothing is stored and no contribution is given. (ii) The LSTM
tends to have smaller prediction errors since it can reuse past information for predicting the expected
return. For example, key events can be stored. (iii) Prediction errors of LSTMs are much more likely
to cancel via prediction differences than those of FFNs. Since consecutive predictions of LSTMs
rely on the same internal states, they usually have highly correlated errors.
Human Expert Episodes. These are an alternative to exploration and can serve to fill the lessons
replay buffer. Learning can be sped up considerably when the LSTM identifies human key actions. Return
decomposition will reward human key actions even for episodes with low return, since other actions
that thwart high returns receive negative reward. Using human demonstrations in reinforcement
learning led to a huge improvement on some Atari games like Montezuma's Revenge [93, 2].
Limitations. In all of the experiments reported in this manuscript, we show that RUDDER signifi-
cantly outperforms other methods for delayed reward problems. However, RUDDER might not be
effective when the reward is not delayed since LSTM learning takes extra time and has problems with
very long sequences. Furthermore, reward redistribution may introduce disturbing spurious reward
signals.
4 Experiments
RUDDER is evaluated on three artificial tasks with delayed rewards. These tasks are designed to show
problems of TD, MC, and potential-based reward shaping. RUDDER overcomes these problems.
Next, we demonstrate that RUDDER also works for more complex tasks with delayed rewards.
Therefore, we compare RUDDER with a Proximal Policy Optimization (PPO) baseline on 52 Atari
games. All experiments use finite time horizon or absorbing state MDPs with $\gamma = 1$ and reward at
episode end. For more information see Section A4.1 in the appendix.
Artificial Tasks (I)–(III). Task (I) shows that TD methods have problems with vanishing information
for delayed rewards. The goal is to learn that a delayed reward is larger than a distracting immediate
reward. Therefore, the correct expected future reward must be assigned to many state-action pairs.
Task (II) is a variation of the introductory pocket watch example with delayed rewards. It shows
that MC methods have problems with the high variance of future unrelated rewards. The expected
future reward that is caused by the first action has to be estimated. Large future rewards that are not
associated with the first action impede MC estimations. Task (III) shows that potential-based reward
shaping methods have problems with delayed rewards. For this task, only the first two actions are
relevant, and the delayed reward has to be propagated back to them.
The tasks have different delays, are tabular ($Q$-table), and use an $\varepsilon$-greedy policy with $\varepsilon = 0.2$.
We compare RUDDER, MC, and TD($\lambda$) on all tasks, and Monte Carlo Tree Search (MCTS) on
task (I). Additionally, on task (III), SARSA($\lambda$) and reward shaping are compared. We use $\lambda = 0.9$
as suggested [128]. Reward shaping methods are the original method, look-ahead advice, and
look-back advice with three different potential functions. RUDDER uses an LSTM without output
and forget gates, no lessons buffer, and no safe exploration. For all tasks, contribution analysis is
performed with differences of return predictions. A $Q$-table is learned by an exponential moving
average of the redistributed reward (RUDDER's $Q$-value estimation) or by $Q$-learning. Performance
is measured by the learning time to achieve 90% of the maximal expected return. A Wilcoxon
signed-rank test determines the significance of performance differences between RUDDER and the other
methods.
(I) Grid World shows problems of TD methods with delayed rewards. The task illustrates a time
bomb that explodes at episode end. The agent has to defuse the bomb and then run away as far as
possible, since defusing fails with a certain probability. Alternatively, the agent can immediately run
away, which, however, leads to less reward on average. The Grid World is a $31 \times 31$ grid with the bomb
at coordinate $[30, 15]$ and start at $[30-d, 15]$, where $d$ is the delay of the task. The agent can move
up, down, left, and right as long as it stays on the grid. At the end of the episode, after $\lfloor 1.5 d \rfloor$ steps,
the agent receives a reward of 1000 with probability 0.5, if it has visited the bomb. At each time
step, the agent receives an immediate reward of $c \cdot t \cdot h$, where $c$ depends on the chosen action, $t$
is the current time step, and $h$ is the Hamming distance to the bomb. Each move toward the bomb is
immediately penalized with $c = -0.09$. Each move away from the bomb is immediately rewarded
with $c = 0.1$. The agent must learn the $Q$-values precisely to recognize that directly running away
is not optimal. Figure 1(I) shows the learning times to solve the task vs. the delay of the reward,
averaged over 100 trials. For all delays, RUDDER is significantly faster than all other methods
with $p$-values $< 10^{-12}$. The speed-ups vs. MC and MCTS appear to be exponential in the delay time.
RUDDER is exponentially faster with increasing delay than Q($\lambda$), supporting Theorem A8 in the
appendix. RUDDER significantly outperforms all other methods.
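A sketch of the immediate reward $c \cdot t \cdot h$ described above; interpreting the Hamming distance as the number of coordinates in which the agent's position differs from the bomb's, and judging "toward/away" by a decrease of that distance, are assumptions:

```python
# Sketch of the Grid World immediate reward c * t * h. Interpretations here
# are assumptions: h is the Hamming distance of the agent's coordinates to
# the bomb at [30, 15] (number of differing coordinates), and c is -0.09 for
# a move toward the bomb, 0.1 otherwise.

BOMB = (30, 15)

def hamming(pos):
    return sum(p != b for p, b in zip(pos, BOMB))

def immediate_reward(old_pos, new_pos, t):
    toward = hamming(new_pos) < hamming(old_pos)
    c = -0.09 if toward else 0.1
    return c * t * hamming(new_pos)

# A move at t = 5 that reaches a cell differing from the bomb in one coordinate:
assert immediate_reward((29, 14), (30, 14), t=5) == -0.09 * 5 * 1
```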
(II) The Choice shows problems of MC methods with delayed rewards. This task has probabilistic
state transitions, which can be represented as a tree with states as nodes. The agent traverses the tree
from the root (initial state) to the leaves (final states). At the root, the agent has to choose between the
left and the right subtree, where one subtree has a higher expected reward. Thereafter, it traverses the
tree randomly according to the transition probabilities. Each visited node adds its fixed share to the
final reward. The delayed reward is given as the accumulated shares at a leaf. The task is solved when
Figure 1: Comparison of RUDDER and other methods on artificial tasks with respect to the learning
time in episodes (median of 100 trials) vs. the delay of the reward. The shadow bands indicate the
40% and 60% quantiles. In (II), the y-axis of the inset is scaled by $10^5$. In (III), reward shaping
(RS), look-ahead advice (look-ahead), and look-back advice (look-back) use three different potential
functions. In (III), the dashed blue line represents RUDDER with Q($\lambda$), in contrast to RUDDER with
$Q$-estimation. In all tasks, RUDDER significantly outperforms all other methods.
the agent always chooses the subtree with the higher expected reward. Figure 1(II) shows the learning
times to solve the task vs. the delay of the reward, averaged over 100 trials. For all delays, RUDDER
is significantly faster than all other methods with $p$-values $< 10^{-8}$. The speed-up vs. MC appears
to be exponential in the delay time. RUDDER is exponentially faster with increasing delay than Q($\lambda$),
supporting Theorem A8 in the appendix. RUDDER significantly outperforms all other methods.
(III) Trace-Back shows problems of potential-based reward shaping methods with delayed rewards.
We investigate how fast information about delayed rewards is propagated back by RUDDER, Q($\lambda$),
SARSA($\lambda$), and potential-based reward shaping. MC is skipped since it does not transfer back
information. The agent can move in a $15 \times 15$ grid to the 4 adjacent positions as long as it remains on
the grid. Starting at $(7, 7)$, the number of moves per episode is $T = 20$. The optimal policy moves the
agent up at $t = 1$ and right at $t = 2$, which gives an immediate reward of $-50$ at $t = 2$ and a delayed
reward of 150 at the end $t = 20 = T$. Therefore, the optimal return is 100. For any other policy, the
agent receives only an immediate reward of 50 at $t = 2$. For $t \le 2$, state transitions are deterministic,
while for $t > 2$ they are uniformly distributed and independent of the actions. Thus, the return
does not depend on actions at $t > 2$. We compare RUDDER, original reward shaping, look-ahead
advice, and look-back advice. As suggested by the authors, we use SARSA instead of $Q$-learning
for look-back advice. We use three different potential functions for reward shaping, which are all
based on the reward redistribution (see appendix). At $t = 2$, there is a distraction since the immediate
reward is $-50$ for the optimal action and 50 for other actions. RUDDER is significantly faster than all other
methods with $p$-values $< 10^{-17}$. Figure 1(III) shows the learning times averaged over 100 trials.
RUDDER is exponentially faster than all other methods and significantly outperforms them.
Atari Games. RUDDER is evaluated with respect to its learning time and achieved scores on
Atari games of the Arcade Learning Environment (ALE) [11] and OpenAI Gym [18]. RUDDER
is used on top of the TRPO-based [113] policy gradient method PPO that uses GAE [114]. Our
PPO baseline differs from the original PPO baseline [115] in two aspects. (i) Instead of using the
sign function of the rewards, rewards are scaled by their current maximum. In this way, the ratio
between different rewards remains unchanged and the advantage of large delayed rewards can be
recognized. (ii) The safe exploration strategy of RUDDER is used. The entropy coefficient is replaced
by Proportional Control [16, 12]. A coarse hyperparameter optimization is performed for the PPO
baseline. For all 52 Atari games, RUDDER uses the same architectures, losses, and hyperparameters,
which were optimized for the baseline. The only difference to the PPO baseline is that the policy
network predicts the value function of the redistributed reward to integrate reward redistribution into
the PPO framework. Contribution analysis uses an LSTM with differences of return predictions. Here
$\Delta$ is the pixel-wise difference of two consecutive frames augmented with the current frame. LSTM
training and reward redistribution are restricted to sequence chunks of 500 frames. Source code is
provided upon publication.
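Baseline modification (i) can be sketched as a running-max scaler; the class and its interface are illustrative assumptions:

```python
# Sketch of baseline modification (i): scale rewards by the maximum absolute
# reward seen so far, instead of taking the sign. Ratios between rewards are
# preserved, so a large delayed reward keeps its relative weight.

class RewardScaler:
    def __init__(self):
        self.max_abs = 0.0

    def scale(self, reward):
        self.max_abs = max(self.max_abs, abs(reward))
        return reward / self.max_abs if self.max_abs > 0 else 0.0

scaler = RewardScaler()
assert scaler.scale(10.0) == 1.0      # first reward defines the scale
assert scaler.scale(5.0) == 0.5       # ratios between rewards preserved
assert scaler.scale(-100.0) == -1.0   # scale updates with a new maximum
```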
Game       RUDDER   baseline   delay   delay event
Bowling       192         56     200   strike pins
Solaris     1,827        616     122   navigate map
Venture     1,350        820     150   find treasure
Seaquest    4,770      1,616     272   collect divers
Table 1: Average scores over 3 random seeds with 10 trials each for delayed reward Atari games.
"delay": frames between reward and first related action. RUDDER considerably improves the PPO
baseline on delayed reward games.
Policies are trained with the no-op starting condition for 200M game frames, using every 4th frame.
Training episodes end with losing a life or at maximally 108K frames. All scores are averaged over 3
different random seeds for network and ALE initialization. We assess the performance by the learning
time and the achieved scores. First, we compare RUDDER to the baseline by average scores per game
throughout training, to assess learning speed [115]. For 32 (20) games RUDDER (the baseline) learns
on average faster. Next, we compare the average scores of the last 10 training games. For 29 (23)
games RUDDER (the baseline) has higher average scores. In the majority of games, RUDDER improves
the scores of the PPO baseline. To compare RUDDER and the baseline on Atari games that are
characterized by delayed rewards, we selected the games Bowling, Solaris, Venture, and Seaquest. In
these games, high scores are achieved by learning the delayed reward, while learning the immediate
reward and extensive exploration (as for Montezuma's Revenge) are less important. The results are
presented in Table 1. For more details and further results see Section A4.2 in the appendix. Figure 2
displays how RUDDER redistributes rewards to key events in Bowling. On delayed reward Atari
games, RUDDER considerably increases the scores compared to the PPO baseline.
Figure 2: RUDDER redistributes rewards to key events in the Atari game Bowling. Originally,
rewards are delayed and only given at episode end. The first 120 out of 200 frames of the episode are
shown. RUDDER identifies key actions that steer the ball to hit all pins.
Conclusion. We have introduced RUDDER, a novel reinforcement learning algorithm based on
the new concepts of reward redistribution and return decomposition. On artificial tasks, RUDDER
significantly outperforms TD($\lambda$), MC, MCTS, and reward shaping methods, while on Atari games it
improves a PPO baseline on average, most prominently on games with long delayed rewards.
Acknowledgments
This work was supported by NVIDIA Corporation, Merck KGaA, Audi.JKU Deep Learning Center,
Audi Electronic Venture GmbH, Janssen Pharmaceutica (madeSMART), TGW Logistics Group, ZF
Friedrichshafen AG, UCB S.A., FFG grant 871302, LIT grant DeepToxGen and AI-SNN, and FWF
grant P 28660-N31.
References
References are provided in Section A11 in the appendix.
Appendix
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Reward Redistribution and Novel Learning Algorithms . . . . . . . . . . . . . . . . . 3
3 Constructing Reward Redistributions by Return Decomposition . . . . . . . . . . . . . 5
4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
A1 Definition of Finite Markov Decision Processes . . . . . . . . . . . . . . . . . . . . . 12
A2 Reward Redistribution, Return-Equivalent SDPs, Novel Learning Algorithms, and Return
Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
A2.1 State Enriched MDPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
A2.2 Return-Equivalent Sequence-Markov Decision Processes (SDPs) . . . . . . . . . 16
A2.2.1 Sequence-Markov Decision Processes (SDPs) . . . . . . . . . . . . . . . 16
A2.2.2 Return-Equivalent SDPs . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
A2.3 Reward Redistribution for Strictly Return-Equivalent SDPs . . . . . . . . . . . . 17
A2.3.1 Reward Redistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
A2.4 Reward Redistribution Constructs Strictly Return-Equivalent SDPs . . . . . . . 18
A2.4.1 Special Cases of Strictly Return-Equivalent Decision Processes: Reward
Shaping, Look-Ahead Advice, and Look-Back Advice . . . . . . . . . . . 18
A2.5 Transforming an Immediate Reward MDP to a Delayed Reward MDP . . . . . . 19
A2.6 Transforming a Delayed Reward MDP to an Immediate Reward SDP . . . . . . . . 21
A2.6.1 Optimal Reward Redistribution . . . . . . . . . . . . . . . . . . . . . . . 22
A2.7 Novel Learning Algorithms based on Reward Redistributions . . . . . . . . . . . 27
A2.7.1 Q-Value Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
A2.7.2 Policy Gradients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
A2.7.3 Q-Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
A2.8 Return Decomposition to construct a Reward Redistribution . . . . . . . . . . . . 31
A2.8.1 Return Decomposition Idea . . . . . . . . . . . . . . . . . . . . . . . . . . 31
A2.8.2 Reward Redistribution based on Return Decomposition . . . . . . . . . . 32
A2.9 Remarks on Return Decomposition . . . . . . . . . . . . . . . . . . . . . . . . 34
A2.9.1 Return Decomposition for Binary Reward . . . . . . . . . . . . . . . . . . 34
A2.9.2 Optimal Reward Redistribution reduces the MDP to a Stochastic Contextual
Bandit Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A2.9.3 Relation to "Backpropagation through a Model" . . . . . . . . . . . . . . 35
A3 Bias-Variance Analysis of MDP Q-Value Estimators . . . . . . . . . . . . . . . . . . . 35
A3.1 Bias-Variance for MC and TD Estimates of the Expected Return . . . . . . . . . 36
A3.2 Mean and Variance of an MDP Sample of the Return . . . . . . . . . . . . . . . 38
A3.3 TD corrects Bias exponentially slowly with Respect to Reward Delay . . . . . . 40
A3.4 MC affects the Variance of Exponentially Many Estimates with Delayed Reward 42
A4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A4.1 Artificial Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A4.1.1 Task (I): Grid World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A4.1.2 Task (II): The Choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
A4.1.3 Task(III): Trace-Back . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
A4.1.4 Task (IV): Charge-Discharge . . . . . . . . . . . . . . . . . . . . . . . . 57
A4.1.5 Task (V): Solving Trace-Back using policy gradient methods . . . . . . . 57
A4.2 Atari Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
A4.2.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
A4.2.2 Lessons Replay Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
A4.2.3 Game Processing, Update Design, and Target Design . . . . . . . . . . . . 60
A4.2.4 Exploration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
A4.2.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
A5 Discussion and Frequent Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
A6 Additional Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
A7 Markov Decision Processes with Undiscounted Rewards . . . . . . . . . . . . . . . . 70
A7.1 Properties of the Bellman Operator in MDPs with Undiscounted Rewards . . . . 70
A7.1.1 Monotonically Increasing and Continuous . . . . . . . . . . . . . . . . . 70
A7.1.2 Contraction for Undiscounted Finite Horizon . . . . . . . . . . . . . . . . . 71
A7.1.3 Contraction for Undiscounted Infinite Horizon With Absorbing States . . . 72
A7.1.4 Fixed Point of Contraction is Continuous wrt Parameters . . . . . . . . . . 72
A7.1.5 t-fold Composition of the Operator . . . . . . . . . . . . . . . . . . . . . 73
A7.2 Q-value Transformations: Shaping Reward, Baseline, and Normalization . . . . 74
A7.3 Alternative Definition of State Enrichment . . . . . . . . . . . . . . . . . . . . . 75
A7.4 Variance of the Weighted Sum of a Multinomial Distribution . . . . . . . . . . . 76
A8 Long Short-Term Memory (LSTM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
A8.1 LSTM Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
A8.2 LSTM in a Nutshell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
A8.3 Long-Term Dependencies vs. Uniform Credit Assignment . . . . . . . . . . . . 79
A8.4 Special LSTM Architectures for contribution Analysis . . . . . . . . . . . . . . 79
A8.4.1 LSTM for Integrated Gradients . . . . . . . . . . . . . . . . . . . . . . . 79
A8.4.2 LSTM for LRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
A8.4.3 LSTM for Nondecreasing Memory Cells . . . . . . . . . . . . . . . . . . 82
A8.4.4 LSTM without Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
A9 Contribution Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
A9.1 Difference of Consecutive Predictions for Sequences . . . . . . . . . . . . . . . 86
A9.2 Input Zeroing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A9.3 Integrated Gradients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A9.4 Layer-Wise Relevance Propagation . . . . . . . . . . . . . . . . . . . . . . . . 90
A9.4.1 New Variants of LRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
A9.4.2 LRP for Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
A9.5 Variance Considerations for contribution Analysis . . . . . . . . . . . . . . . . . 92
A10 Reproducibility Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
A1 Definition of Finite Markov Decision Processes

We consider a finite Markov decision process (MDP) $\mathcal{P}$, which is a 5-tuple $\mathcal{P} = (\mathcal{S}, \mathcal{A}, \mathcal{R}, p, \gamma)$:
• $\mathcal{S}$ is a finite set of states; $S_t$ is the random variable for states at time $t$ with values $s \in \mathcal{S}$. $S_t$ has a discrete probability distribution.
• $\mathcal{A}$ is a finite set of actions (sometimes state-dependent $\mathcal{A}(s)$); $A_t$ is the random variable for actions at time $t$ with value $a \in \mathcal{A}$. $A_t$ has a discrete probability distribution.
• $\mathcal{R}$ is a finite set of rewards; $R_{t+1}$ is the random variable for rewards at time $(t+1)$ with value $r \in \mathcal{R}$. $R_t$ has a discrete probability distribution.
• $p(S_{t+1}=s', R_{t+1}=r \mid S_t=s, A_t=a)$ are the transition and reward distributions over states and rewards, respectively, conditioned on state-actions,
• $\gamma \in [0, 1]$ is a discount factor for the reward.
The Markov policy $\pi$ is a distribution over actions given the state: $\pi(A_t = a \mid S_t = s)$. We often
equip an MDP $\mathcal{P}$ with a policy $\pi$ without explicitly mentioning it. At time $t$, the random variables
give the states, actions, and rewards of the MDP, while lower-case letters give possible values. At each
time $t$, the environment is in some state $s_t \in \mathcal{S}$. The policy $\pi$ takes an action $a_t \in \mathcal{A}$, which causes
a transition of the environment to state $s_{t+1}$ and a reward $r_{t+1}$ for the policy. Therefore, the MDP
creates a sequence
$(S_0, A_0, R_1, S_1, A_1, R_2, S_2, A_2, R_3, \ldots)$ .   (A1)
The marginal probabilities for
$p(s', r \mid s, a) = \Pr[S_{t+1}=s', R_{t+1}=r \mid S_t=s, A_t=a]$   (A2)
are:
$p(r \mid s, a) = \Pr[R_{t+1}=r \mid S_t=s, A_t=a] = \sum_{s'} p(s', r \mid s, a)$ ,   (A3)
$p(s' \mid s, a) = \Pr[S_{t+1}=s' \mid S_t=s, A_t=a] = \sum_{r} p(s', r \mid s, a)$ .   (A4)
We use a sum convention: $\sum_{a,b}$ goes over all possible values of $a$ and $b$, that is, all combinations which
fulfill the constraints on $a$ and $b$. If $b$ is a function of $a$ (fully determined by $a$), then $\sum_{a,b} = \sum_a$.
We denote expectations:
• $\mathrm{E}_\pi$ is the expectation where the random variable is an MDP sequence of states, actions, and rewards generated with policy $\pi$.
• $\mathrm{E}_s$ is the expectation where the random variable is $S_t$ with values $s \in \mathcal{S}$.
• $\mathrm{E}_a$ is the expectation where the random variable is $A_t$ with values $a \in \mathcal{A}$.
• $\mathrm{E}_r$ is the expectation where the random variable is $R_{t+1}$ with values $r \in \mathcal{R}$.
• $\mathrm{E}_{s,a,r,s',a'}$ is the expectation where the random variables are $S_{t+1}$ with values $s' \in \mathcal{S}$, $S_t$ with values $s \in \mathcal{S}$, $A_t$ with values $a \in \mathcal{A}$, $A_{t+1}$ with values $a' \in \mathcal{A}$, and $R_{t+1}$ with values $r \in \mathcal{R}$. If more or fewer random variables are used, the notation is consistently adapted.
The return $G_t$ is the accumulated reward starting from $t+1$:
$G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$ .   (A5)
The discount factor $\gamma$ determines how much immediate rewards are favored over more delayed
rewards. For $\gamma = 0$ the return (the objective) is determined as the largest expected immediate reward,
while for $\gamma = 1$ the return is determined by the expected sum of future rewards if the sum exists.
State-Value and Action-Value Function. The state-value function $v^\pi(s)$ for policy $\pi$ and state $s$
is defined as
$v^\pi(s) = \mathrm{E}_\pi[G_t \mid S_t = s] = \mathrm{E}_\pi\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \mid S_t = s\right]$ .   (A6)
Starting at $t = 0$:
$v^\pi_0 = \mathrm{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t R_{t+1}\right] = \mathrm{E}_\pi[G_0]$ ;   (A7)
the optimal state-value function $v^*$ and policy $\pi^*$ are
$v^*(s) = \max_\pi v^\pi(s)$ ,   (A8)
$\pi^* = \arg\max_\pi v^\pi(s)$ for all $s$ .   (A9)
The action-value function $q^\pi(s, a)$ for policy $\pi$ is the expected return when starting from $S_t = s$,
taking action $A_t = a$, and following policy $\pi$:
$q^\pi(s, a) = \mathrm{E}_\pi[G_t \mid S_t = s, A_t = a] = \mathrm{E}_\pi\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \mid S_t = s, A_t = a\right]$ .   (A10)
The optimal action-value function $q^*$ and policy $\pi^*$ are
$q^*(s, a) = \max_\pi q^\pi(s, a)$ ,   (A11)
$\pi^* = \arg\max_\pi q^\pi(s, a)$ for all $(s, a)$ .   (A12)
The optimal action-value function $q^*$ can be expressed via the optimal value function $v^*$:
$q^*(s, a) = \mathrm{E}[R_{t+1} + \gamma v^*(S_{t+1}) \mid S_t = s, A_t = a]$ .   (A13)
The optimal state-value function $v^*$ can be expressed via the optimal action-value function $q^*$ using
the optimal policy $\pi^*$:
$v^*(s) = \max_a q^*(s, a) = \max_a \mathrm{E}_{\pi^*}[G_t \mid S_t = s, A_t = a]$
$\quad = \max_a \mathrm{E}_{\pi^*}[R_{t+1} + \gamma G_{t+1} \mid S_t = s, A_t = a]$
$\quad = \max_a \mathrm{E}[R_{t+1} + \gamma v^*(S_{t+1}) \mid S_t = s, A_t = a]$ .   (A14)
Finite time horizon and no discount. We consider a finite time horizon, that is, we consider only
episodes of length $T$, but may receive reward $R_{T+1}$ at episode end at time $T+1$. The finite time
horizon MDP creates a sequence
$(S_0, A_0, R_1, S_1, A_1, R_2, S_2, A_2, R_3, \ldots, S_{T-1}, A_{T-1}, R_T, S_T, A_T, R_{T+1})$ .   (A15)
Furthermore, we do not discount future rewards, that is, we set $\gamma = 1$. The return $G_t$ from time $t$ to
$T$ is the sum of rewards:
$G_t = \sum_{k=0}^{T-t} R_{t+k+1}$ .   (A16)
The state-value function $v^\pi$ for policy $\pi$ is
$v^\pi(s) = \mathrm{E}_\pi[G_t \mid S_t = s] = \mathrm{E}_\pi\left[\sum_{k=0}^{T-t} R_{t+k+1} \mid S_t = s\right]$   (A17)
and the action-value function $q^\pi$ for policy $\pi$ is
$q^\pi(s, a) = \mathrm{E}_\pi[G_t \mid S_t = s, A_t = a] = \mathrm{E}_\pi\left[\sum_{k=0}^{T-t} R_{t+k+1} \mid S_t = s, A_t = a\right]$   (A18)
$\quad = \mathrm{E}_\pi[R_{t+1} + G_{t+1} \mid S_t = s, A_t = a]$
$\quad = \sum_{s', r} p(s', r \mid s, a) \left[ r + \sum_{a'} \pi(a' \mid s')\, q^\pi(s', a') \right]$ .
From the Bellman equation Eq. (A18), we obtain:
$\sum_{s'} p(s' \mid s, a) \sum_{a'} \pi(a' \mid s')\, q^\pi(s', a') = q^\pi(s, a) - \sum_r r\, p(r \mid s, a)$ ,   (A19)
$\mathrm{E}_{s', a'}[q^\pi(s', a') \mid s, a] = q^\pi(s, a) - r(s, a)$ .   (A20)
The expected return at time $t = 0$ for policy $\pi$ is
$v^\pi_0 = \mathrm{E}_\pi[G_0] = \mathrm{E}_\pi\left[\sum_{t=0}^{T} R_{t+1}\right]$ .   (A21)
The agent may start in a particular starting state $S_0$ which is a random variable. Often $S_0$ has only
one value $s_0$.
Learning. The goal of learning is to find the policy $\pi^*$ that maximizes the expected future dis-
counted reward (the return) if starting at $t = 0$. Thus, the optimal policy $\pi^*$ is
$\pi^* = \arg\max_\pi v^\pi_0$ .   (A22)
We consider two learning approaches for Q-values: Monte Carlo and temporal difference.
Monte Carlo (MC). To estimate $q^\pi(s, a)$, MC computes the arithmetic mean of all observed
returns $(G_t \mid S_t = s, A_t = a)$ in the data. When using Monte Carlo for learning a policy we use an
exponentially weighted arithmetic mean since the policy steadily changes.
For the $i$th update Monte Carlo tries to minimize $\frac{1}{2} M(s_t, a_t)^2$ with the residual $M(s_t, a_t)$
$M(s_t, a_t) = (\hat q)_i(s_t, a_t) - \sum_{\tau=0}^{T-t-1} r_{t+1+\tau}$ ,   (A23)
such that the update of the action-value $\hat q$ at state-action $(s_t, a_t)$ is
$(\hat q)_{i+1}(s_t, a_t) = (\hat q)_i(s_t, a_t) - \alpha\, M(s_t, a_t)$ .   (A24)
This update is called constant-$\alpha$ MC [128].
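The constant-$\alpha$ MC update of Eqs. (A23)–(A24) can be sketched in a few lines of Python (a sketch, not the paper's implementation; the tabular Q-table and the episode format as `(s, a, r)` triples are illustrative assumptions):

```python
from collections import defaultdict

def mc_update(q, episode, alpha=0.1):
    """One constant-alpha MC sweep over an episode.

    episode: list of (s, a, r) triples, where r is the immediate reward
    received after taking action a in state s."""
    for t, (s, a, _) in enumerate(episode):
        g_t = sum(r for _, _, r in episode[t:])   # observed return from time t
        residual = q[(s, a)] - g_t                # M(s_t, a_t), Eq. (A23)
        q[(s, a)] -= alpha * residual             # Eq. (A24)
    return q

q = defaultdict(float)
mc_update(q, [("s0", "a0", 0.0), ("s1", "a1", 1.0)])
```

Note that each state-action pair moves a fraction $\alpha$ toward the full observed return, which is why MC is unbiased but, as Section A3 analyzes, can suffer from high variance.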
Temporal difference (TD) methods. TD updates are based on the Bellman equation. If $r(s, a)$ and
$\mathrm{E}_{s', a'}[\hat q(s', a') \mid s, a]$ have been estimated, the Q-values can be updated according to the Bellman
equation:
$(\hat q)^{\mathrm{new}}(s, a) = r(s, a) + \gamma\, \mathrm{E}_{s', a'}[\hat q(s', a') \mid s, a]$ .   (A25)
The update is applying the Bellman operator with estimates $\mathrm{E}_{s', a'}[\hat q(s', a') \mid s, a]$ and $r(s, a)$ to $\hat q$
to obtain $(\hat q)^{\mathrm{new}}$. The new estimate $(\hat q)^{\mathrm{new}}$ is closer to the fixed point $q^\pi$ of the Bellman operator,
since the Bellman operator is a contraction (see Section A7.1.3 and Section A7.1.2).
Since the estimates $\mathrm{E}_{s', a'}[\hat q(s', a') \mid s, a]$ and $r(s, a)$ are not known, TD methods try to minimize
$\frac{1}{2} B(s, a)^2$ with the Bellman residual $B(s, a)$:
$B(s, a) = \hat q(s, a) - r(s, a) - \gamma\, \mathrm{E}_{s', a'}[\hat q(s', a')]$ .   (A26)
TD methods use an estimate $\hat B(s, a)$ of $B(s, a)$ and a learning rate $\alpha$ to make an update
$\hat q(s, a)^{\mathrm{new}} \leftarrow \hat q(s, a) - \alpha\, \hat B(s, a)$ .   (A27)
For all TD methods $r(s, a)$ is estimated by $R_{t+1}$ and $s'$ by $S_{t+1}$, while $\hat q(s', a')$ does not change
with the current sample, that is, it is fixed for the estimate. However, the sample determines which
$(s', a')$ is chosen. The TD methods differ in how they select $a'$. SARSA [105] selects $a'$ by sampling
from the policy:
$\mathrm{E}_{s', a'}[\hat q(s', a')] \approx \hat q(S_{t+1}, A_{t+1})$
and expected SARSA [63] averages over selections:
$\mathrm{E}_{s', a'}[\hat q(s', a')] \approx \sum_a \pi(a \mid S_{t+1})\, \hat q(S_{t+1}, a)$ .
It is possible to estimate $r(s, a)$ separately via an unbiased minimal variance estimator like the
arithmetic mean and then perform TD updates with the Bellman error using the estimated $r(s, a)$
[103]. Q-learning [140] is an off-policy TD algorithm which is proved to converge [141, 20]. The
proofs were later generalized [61, 133]. Q-learning uses
$\mathrm{E}_{s', a'}[\hat q(s', a')] \approx \max_a \hat q(S_{t+1}, a)$ .   (A28)
The action-value function $q$, which is learned by Q-learning, approximates $q^*$ independently of the
policy that is followed. More precisely, with Q-learning $q$ converges with probability 1 to the optimal
$q^*$. However, the policy still determines which state-action pairs are encountered during learning.
The convergence only requires that all state-action pairs are visited and updated infinitely often.
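The three TD variants above differ only in the bootstrap term that stands in for $\mathrm{E}_{s', a'}[\hat q(s', a')]$ in Eqs. (A26)–(A28). A tabular sketch (illustrative assumptions: a dict-based Q-table, a `policy(a, s)` probability function, and the single-update interface):

```python
from collections import defaultdict

def td_update(q, s, a, r, s_next, actions, policy, alpha=0.1, gamma=1.0,
              method="q_learning", a_next=None):
    """One TD update; the bootstrap term realizes E_{s',a'}[q(s',a')]."""
    if method == "sarsa":                 # sample a' from the policy
        bootstrap = q[(s_next, a_next)]
    elif method == "expected_sarsa":      # average over all a'
        bootstrap = sum(policy(b, s_next) * q[(s_next, b)] for b in actions)
    else:                                 # Q-learning: max over a', Eq. (A28)
        bootstrap = max(q[(s_next, b)] for b in actions)
    residual = q[(s, a)] - r - gamma * bootstrap   # Bellman residual, Eq. (A26)
    q[(s, a)] -= alpha * residual                  # Eq. (A27)
    return q

q = defaultdict(float)
q[("s1", "L")] = 1.0
uniform = lambda a, s: 0.5
td_update(q, "s0", "a", 0.0, "s1", ["L", "R"], uniform, method="q_learning")
```

Because information flows backwards only one bootstrap step per update, a delayed reward must be propagated through all intermediate states, which is the exponential slowness analyzed in Section A3.3.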
A2 Reward Redistribution, Return-Equivalent SDPs, Novel Learning
Algorithms, and Return Decomposition
A2.1 State Enriched MDPs
For MDPs with a delayed reward the states have to code the reward. However, for an immediate
reward the states can be made more compact by removing the reward information. For example,
states with memory of a delayed reward can be mapped to states without memory. Therefore, in
order to compare MDPs, we introduce the concept of homomorphic MDPs. We first need to define a
partition of a set induced by a function. Let $B$ be a partition of a set $X$. For any $x \in X$, we denote by
$[x]_B$ the block of $B$ to which $x$ belongs. Any function $f$ from a set $X$ to a set $Y$ induces a partition
(or equivalence relation) on $X$, with $[x]_f = [x']_f$ if and only if $f(x) = f(x')$. We can now define
homomorphic MDPs.
Definition A1 (Ravindran and Barto [98, 99]). An MDP homomorphism $h$ from an MDP $\mathcal{P} = (\mathcal{S}, \mathcal{A}, \mathcal{R}, p, \gamma)$ to an MDP $\tilde{\mathcal{P}} = (\tilde{\mathcal{S}}, \tilde{\mathcal{A}}, \tilde{\mathcal{R}}, \tilde p, \tilde\gamma)$ is a tuple of surjections $(f, g_1, g_2, \ldots, g_n)$ ($n$ is the
number of states), with $h(s, a) = (f(s), g_s(a))$, where $f: \mathcal{S} \to \tilde{\mathcal{S}}$ and $g_s: \mathcal{A}_s \to \tilde{\mathcal{A}}_{f(s)}$ for $s \in \mathcal{S}$
($\mathcal{A}_s$ are the admissible actions in state $s$ and $\tilde{\mathcal{A}}_{f(s)}$ are the admissible actions in state $\tilde s$). Furthermore,
for all $s, s' \in \mathcal{S}$, $a \in \mathcal{A}_s$:
$\tilde p(f(s') \mid f(s), g_s(a)) = \sum_{s'' \in [s']_f} p(s'' \mid s, a)$ ,   (A29)
$\tilde p(\tilde r \mid f(s), g_s(a)) = p(r \mid s, a)$ .   (A30)
We use $[s]_f = [s']_f$ if and only if $f(s) = f(s')$.
We call $\tilde{\mathcal{P}}$ the homomorphic image of $\mathcal{P}$ under $h$. For homomorphic images the optimal Q-values
and the optimal policies are the same.
Lemma A1 (Ravindran and Barto [98]). If $\tilde{\mathcal{P}}$ is a homomorphic image of $\mathcal{P}$, then the optimal
Q-values are the same and a policy that is optimal in $\tilde{\mathcal{P}}$ can be transformed to an optimal policy in
$\mathcal{P}$ by normalizing the number of actions $a$ that are mapped to the same action $\tilde a$.
Consequently, the original MDP can be solved by solving a homomorphic image.
Similar results have been obtained by Givan et al. using stochastically bisimilar MDPs: “Any stochas-
tic bisimulation used for aggregation preserves the optimal value and action sequence properties as
well as the optimal policies of the model” [ 34]. Theorem 7 and Corollary 9.1 in Givan et al. show
the facts of Lemma A1. Li et al. give an overview over state abstraction and state aggregation for
Markov decision processes, which covers homomorphic MDPs [73].
A Markov decision process $\tilde{\mathcal{P}}$ is state-enriched compared to an MDP $\mathcal{P}$ if $\tilde{\mathcal{P}}$ has the same states,
actions, transition probabilities, and reward probabilities as $\mathcal{P}$ but with additional information in its
states. We define state-enrichment as follows:
Definition A2. A Markov decision process $\tilde{\mathcal{P}}$ is state-enriched compared to a Markov decision
process $\mathcal{P}$ if $\mathcal{P}$ is a homomorphic image of $\tilde{\mathcal{P}}$, where $g_{\tilde s}$ is the identity and $f(\tilde s) = s$ is not bijective.
Being not bijective means that there exist $\tilde s'$ and $\tilde s''$ with $f(\tilde s') = f(\tilde s'')$, that is, $\tilde{\mathcal{S}}$ has more elements
than $\mathcal{S}$. In particular, state-enrichment does not change the optimal policies nor the Q-values in the
sense of Lemma A1.
Proposition A1. If an MDP $\tilde{\mathcal{P}}$ is state-enriched compared to an MDP $\mathcal{P}$, then both MDPs have the
same optimal Q-values and the same optimal policies.
Proof. According to the definition, $\mathcal{P}$ is a homomorphic image of $\tilde{\mathcal{P}}$. The statements of Proposition A1
follow directly from Lemma A1.
Optimal policies of the state-enriched MDP $\tilde{\mathcal{P}}$ can be transformed to optimal policies of the original
MDP $\mathcal{P}$ and, vice versa, each optimal policy of the original MDP $\mathcal{P}$ corresponds to at least one
optimal policy of the state-enriched MDP $\tilde{\mathcal{P}}$.
A2.2 Return-Equivalent Sequence-Markov Decision Processes (SDPs)
Our goal is to compare Markov decision processes (MDPs) with delayed rewards to decision processes
(DPs) without delayed rewards. The DPs without delayed rewards can but need not be Markov in
the rewards. Toward this end, we consider two DPs $\tilde{\mathcal{P}}$ and $\mathcal{P}$ which differ only in their (non-Markov)
reward distributions. However, for each policy $\pi$ the DPs $\tilde{\mathcal{P}}$ and $\mathcal{P}$ have the same expected return at
$t = 0$, that is, $\tilde v^\pi_0 = v^\pi_0$, or they have the same expected return for every episode.
A2.2.1 Sequence-Markov Decision Processes (SDPs)
We first define decision processes that are Markov except for the reward, which is not required to be
Markov.
Definition A3. A sequence-Markov decision process (SDP) is defined as a finite decision process
which is equipped with a Markov policy and has Markov transition probabilities but a reward
distribution that is not required to be Markov.
Proposition A2. Markov decision processes are sequence-Markov decision processes.
Proof. MDPs have Markov transition probabilities and are equipped with Markov policies.
Definition A4. We call two sequence-Markov decision processes $\mathcal{P}$ and $\tilde{\mathcal{P}}$ that have the same Markov
transition probabilities and are equipped with the same Markov policy sequence-equivalent.
Lemma A2. Two sequence-Markov decision processes that are sequence-equivalent have the same
probability to generate state-action sequences $(s_0, a_0, \ldots, s_t, a_t)$, $0 \le t \le T$.
Proof. Sequence generation only depends on transition probabilities and policy. Therefore the
probability of generating a particular sequences is the same for both SDPs.
A2.2.2 Return-Equivalent SDPs
We define return-equivalent SDPs which can be shown to have the same optimal policies.
Definition A5. Two sequence-Markov decision processes $\tilde{\mathcal{P}}$ and $\mathcal{P}$ are return-equivalent if they differ
only in their reward but for each policy $\pi$ have the same expected return $\tilde v^\pi_0 = v^\pi_0$. $\tilde{\mathcal{P}}$ and $\mathcal{P}$ are
strictly return-equivalent if they have the same expected return for every episode and for each policy
$\pi$:
$\mathrm{E}\big[\tilde G_0 \mid s_0, a_0, \ldots, s_T, a_T\big] = \mathrm{E}\big[G_0 \mid s_0, a_0, \ldots, s_T, a_T\big]$ .   (A31)
The definition of return-equivalence can be generalized to strictly monotonic functions $f$ for which
$\tilde v^\pi_0 = f(v^\pi_0)$. Since strictly monotonic functions do not change the ordering of the returns, maximal
returns stay maximal after applying the function $f$.
Strictly return-equivalent SDPs are return-equivalent as the next proposition states.
Proposition A3. Strictly return-equivalent sequence-Markov decision processes are return-
equivalent.
Proof. The expected return at t= 0 given a policy is the sum of the probability of generating a
sequence times the expected reward for this sequence. Both expectations are the same for two strictly
return-equivalent sequence-Markov decision processes. Therefore the expected return at time t= 0
is the same.
The next proposition states that return-equivalent SDPs have the same optimal policies.
Proposition A4. Return-equivalent sequence-Markov decision processes have the same optimal
policies.
Proof. The optimal policy is defined as maximizing the expected return at time t= 0. For each policy
the expected return at time t= 0is the same for return-equivalent decision processes. Consequently,
the optimal policies are the same.
Two strictly return-equivalent SDPs have the same expected return for each state-action sub-sequence
$(s_0, a_0, \ldots, s_t, a_t)$, $0 \le t \le T$.
Lemma A3. Two strictly return-equivalent SDPs $\tilde{\mathcal{P}}$ and $\mathcal{P}$ have the same expected return for each
state-action sub-sequence $(s_0, a_0, \ldots, s_t, a_t)$, $0 \le t \le T$:
$\mathrm{E}\big[\tilde G_0 \mid s_0, a_0, \ldots, s_t, a_t\big] = \mathrm{E}\big[G_0 \mid s_0, a_0, \ldots, s_t, a_t\big]$ .   (A32)
Proof. Since the SDPs are strictly return-equivalent, we have
$\mathrm{E}\big[\tilde G_0 \mid s_0, a_0, \ldots, s_t, a_t\big]$
$= \sum_{s_{t+1}, a_{t+1}, \ldots, s_T, a_T} p(s_{t+1}, a_{t+1}, \ldots, s_T, a_T \mid s_t, a_t)\, \mathrm{E}\big[\tilde G_0 \mid s_0, a_0, \ldots, s_T, a_T\big]$
$= \sum_{s_{t+1}, a_{t+1}, \ldots, s_T, a_T} p(s_{t+1}, a_{t+1}, \ldots, s_T, a_T \mid s_t, a_t)\, \mathrm{E}\big[G_0 \mid s_0, a_0, \ldots, s_T, a_T\big]$
$= \mathrm{E}\big[G_0 \mid s_0, a_0, \ldots, s_t, a_t\big]$ .   (A33)
We used the marginalization of the full probability and the Markov property of the state-action
sequence.
We now give the analog definitions and results for MDPs which are SDPs.
Definition A6. Two Markov decision processes ~PandParereturn-equivalent if they differ only in
p(~rjs;a)andp(rjs;a)but have the same expected return ~v
0=v
0for each policy .~PandP
arestrictly return-equivalent if they have the same expected return for every episode and for each
policy:
Eh
~G0js0;a0;:::;sT;aTi
= E[G0js0;a0;:::;sT;aT]: (A34)
Strictly return-equivalent MDPs are return-equivalent as the next proposition states.
Proposition A5. Strictly return-equivalent decision processes are return-equivalent.
Proof. Since MDPs are SDPs, the proposition follows from Proposition A3.
Proposition A6. Return-equivalent Markov decision processes have the same optimal policies.
Proof. Since MDPs are SDPs, the proposition follows from Proposition A4.
For strictly return-equivalent MDPs the expected return is the same if a state-action sub-sequence is
given.
Proposition A7. Strictly return-equivalent MDPs ~PandPhave the same expected return for a
given state-action sub-sequence (s0;a0;:::;st;at),06t6T:
Eh
~G0js0;a0;:::;st;ati
= E[G0js0;a0;:::;st;at]: (A35)
Proof. Since MDPs are SDPs, the proposition follows from Lemma A3.
A2.3 Reward Redistribution for Strictly Return-Equivalent SDPs
Strictly return-equivalent SDPs $\tilde{\mathcal{P}}$ and $\mathcal{P}$ can be constructed by a reward redistribution.
A2.3.1 Reward Redistribution
We define reward redistributions for SDPs.
Definition A7. A reward redistribution given an SDP $\tilde{\mathcal{P}}$ is a fixed procedure that redistributes for
each state-action sequence $s_0, a_0, \ldots, s_T, a_T$ the realization of the associated return variable
$\tilde G_0 = \sum_{t=0}^{T} \tilde R_{t+1}$ or its expectation $\mathrm{E}\big[\tilde G_0 \mid s_0, a_0, \ldots, s_T, a_T\big]$ along the sequence. The redistribution
creates a new SDP $\mathcal{P}$ with redistributed reward $R_{t+1}$ at time $(t+1)$ and return variable
$G_0 = \sum_{t=0}^{T} R_{t+1}$. The redistribution procedure ensures for each sequence either $\tilde G_0 = G_0$ or
$\mathrm{E}\big[\tilde G_0 \mid s_0, a_0, \ldots, s_T, a_T\big] = \mathrm{E}\big[G_0 \mid s_0, a_0, \ldots, s_T, a_T\big]$ .   (A36)
Reward redistributions can be very general. A special case is if the return can be deduced from the
past sequence, which makes the return causal.
Definition A8. A reward redistribution is causal if for the redistributed reward $R_{t+1}$ the following
holds:
$\mathrm{E}\big[R_{t+1} \mid s_0, a_0, \ldots, s_T, a_T\big] = \mathrm{E}\big[R_{t+1} \mid s_0, a_0, \ldots, s_t, a_t\big]$ .   (A37)
For our approach we only need reward redistributions that are second-order Markov.
Definition A9. A causal reward redistribution is second-order Markov if
$\mathrm{E}\big[R_{t+1} \mid s_0, a_0, \ldots, s_t, a_t\big] = \mathrm{E}\big[R_{t+1} \mid s_{t-1}, a_{t-1}, s_t, a_t\big]$ .   (A38)
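Definition A7 only requires that the per-episode return is preserved; how the return is spread out is left to the procedure. As a deliberately simple illustration (an assumption for exposition, not the paper's learned, LSTM-based redistribution), a uniform redistribution already satisfies strict return equivalence:

```python
# Illustrative sketch: a uniform reward redistribution that spreads the
# delayed return G_0 evenly over all time steps of one episode. The sum of
# redistributed rewards equals the original return (strict return equivalence).
def uniform_redistribution(delayed_rewards):
    """delayed_rewards: per-step rewards, e.g. [0, ..., 0, G0] with the
    reward only at episode end."""
    g0 = sum(delayed_rewards)
    n = len(delayed_rewards)
    return [g0 / n] * n

r = [0.0, 0.0, 0.0, 6.0]
rr = uniform_redistribution(r)
assert sum(rr) == sum(r)      # same return for the episode
```

A uniform redistribution preserves the return but is far from optimal in the sense of Section A2.6.1, since it does not concentrate reward on the key state-actions; that is what the return decomposition of Section A2.8 provides.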
A2.4 Reward Redistribution Constructs Strictly Return-Equivalent SDPs
Theorem A1. If the SDP $\mathcal{P}$ is obtained by reward redistribution from the SDP $\tilde{\mathcal{P}}$, then $\tilde{\mathcal{P}}$ and $\mathcal{P}$ are
strictly return-equivalent.
Proof. For redistributing the reward we have for each state-action sequence $s_0, a_0, \ldots, s_T, a_T$ the
same return $\tilde G_0 = G_0$, therefore
$\mathrm{E}\big[\tilde G_0 \mid s_0, a_0, \ldots, s_T, a_T\big] = \mathrm{E}\big[G_0 \mid s_0, a_0, \ldots, s_T, a_T\big]$ .   (A39)
For redistributing the expected return the last equation holds by definition. The last equation is the
definition of strictly return-equivalent SDPs.
The next theorem states that the optimal policies are still the same when redistributing the reward.
Theorem A2. If the SDP $\mathcal{P}$ is obtained by reward redistribution from the SDP $\tilde{\mathcal{P}}$, then both SDPs
have the same optimal policies.
Proof. According to Theorem A1, the SDP $\mathcal{P}$ is strictly return-equivalent to the SDP $\tilde{\mathcal{P}}$. According
to Proposition A3 and Proposition A4 the SDP $\mathcal{P}$ and the SDP $\tilde{\mathcal{P}}$ have the same optimal policies.
A2.4.1 Special Cases of Strictly Return-Equivalent Decision Processes: Reward Shaping,
Look-Ahead Advice, and Look-Back Advice
Redistributing the reward via reward shaping [87, 143], look-ahead advice, and look-back advice
[144] is a special case of reward redistribution that leads to MDPs which are strictly return-equivalent
to the original MDP. We show that reward shaping is a special case of reward redistributions that
lead to MDPs which are strictly return-equivalent to the original MDP. First, we subtract from the
potential $\phi$ the constant $c = (\phi(s_0, a_0) - \gamma^T \phi(s_T, a_T)) / (1 - \gamma^T)$, which is the potential of the initial
state minus the discounted potential in the last state divided by a fixed divisor. Consequently, the sum
of additional rewards in reward shaping, look-ahead advice, or look-back advice from $1$ to $T$ is zero.
The original sum of additional rewards is
$\sum_{i=1}^{T} \gamma^{i-1} \big( \gamma \phi(s_i, a_i) - \phi(s_{i-1}, a_{i-1}) \big) = \gamma^T \phi(s_T, a_T) - \phi(s_0, a_0)$ .   (A40)
If we assume $\gamma^T \phi(s_T, a_T) = 0$ and $\phi(s_0, a_0) = 0$, then reward shaping does not change the return
and the shaping reward is a reward redistribution leading to an MDP that is strictly return-equivalent
to the original MDP. For $T \to \infty$ only $\phi(s_0, a_0) = 0$ is required. The assumptions can always be
fulfilled by adding a single new initial state and a single new final state to the original MDP.
Without the assumptions $\gamma^T \phi(s_T, a_T) = 0$ and $\phi(s_0, a_0) = 0$, we subtract $c = (\phi(s_0, a_0) - \gamma^T \phi(s_T, a_T)) / (1 - \gamma^T)$ from all potentials $\phi$, and obtain
$\sum_{i=1}^{T} \gamma^{i-1} \big( \gamma (\phi(s_i, a_i) - c) - (\phi(s_{i-1}, a_{i-1}) - c) \big) = 0$ .   (A41)
Therefore, the potential-based shaping function (the additional reward) added to the original reward
does not change the return, which means that the shaping reward is a reward redistribution that leads
to an MDP that is strictly return-equivalent to the original MDP. Obviously, reward shaping is a
special case of reward redistribution that leads to a strictly return-equivalent MDP. Reward shaping
does not change the general learning behavior if a constant $c$ is subtracted from the potential function
$\phi$. The Q-function of the original reward shaping and the Q-function of the reward shaping which
has the constant $c$ subtracted from the potential function $\phi$ differ by $c$ for every Q-value [87, 143]. For
infinite horizon MDPs with $\gamma < 1$, the terms $\gamma^T$ and $\gamma^T \phi(s_T, a_T)$ vanish, therefore it is sufficient to
subtract $c = \phi(s_0, a_0)$ from the potential function.
Since TD-based reward shaping methods keep the original reward, they can still be exponentially slow
for delayed rewards. Reward shaping methods like reward shaping, look-ahead advice, and look-back
advice rely on the Markov property of the original reward, while an optimal reward redistribution is
not Markov. In general, reward shaping does not lead to an optimal reward redistribution according
to Section A2.6.1.
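The telescoping argument of Eq. (A41) can be checked numerically (a sketch; the potential values below are arbitrary made-up numbers, not from the paper):

```python
# Numeric check of Eq. (A41): after subtracting the constant c from the
# potential phi, the discounted sum of shaping rewards telescopes to zero.
def shaping_sum(phi, gamma):
    """phi: list [phi(s_0,a_0), ..., phi(s_T,a_T)]; requires gamma**T != 1."""
    T = len(phi) - 1
    c = (phi[0] - gamma**T * phi[T]) / (1.0 - gamma**T)
    psi = [p - c for p in phi]                 # shifted potential
    return sum(gamma**(i - 1) * (gamma * psi[i] - psi[i - 1])
               for i in range(1, T + 1))

phi = [2.0, -1.0, 0.5, 3.0]                    # arbitrary potential values
print(abs(shaping_sum(phi, gamma=0.9)) < 1e-9)   # True
```

This confirms that the shifted shaping rewards sum to zero for any potential, so adding them never changes the episode return, only how it is spread along the sequence.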
As discussed in Section A2.9, the optimal reward redistribution does not comply with the Bellman
equation. Also look-ahead advice does not comply with the Bellman equation. The return for the
look-ahead advice reward $\tilde R_{t+1}$ is
$G_t = \sum_{i=0}^{\infty} \tilde R_{t+i+1}$   (A42)
with expectations for the reward $\tilde R_{t+1}$
$\mathrm{E}\big[\tilde R_{t+1} \mid s_{t+1}, a_{t+1}, s_t, a_t\big] = \tilde r(s_{t+1}, a_{t+1}, s_t, a_t) = \gamma \phi(s_{t+1}, a_{t+1}) - \phi(s_t, a_t)$ .   (A43)
The expected reward $\tilde r(s_{t+1}, a_{t+1}, s_t, a_t)$ depends on future states $s_{t+1}$ and, more importantly, on
future actions $a_{t+1}$. It is a non-causal reward redistribution. Therefore look-ahead advice cannot be
directly used for selecting the optimal action at time $t$. For look-back advice we have
$\mathrm{E}\big[\tilde R_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}\big] = \tilde r(s_t, a_t, s_{t-1}, a_{t-1}) = \phi(s_t, a_t) - \gamma^{-1} \phi(s_{t-1}, a_{t-1})$ .   (A44)
Therefore look-back advice introduces a second-order Markov reward like the optimal reward
redistribution.
A2.5 Transforming an Immediate Reward MDP to a Delayed Reward MDP
We assume to have a Markov decision process $\mathcal{P}$ with immediate reward. The MDP $\mathcal{P}$ is transformed
into an MDP $\tilde{\mathcal{P}}$ with delayed reward, where the reward is given at sequence end. The reward-
equivalent MDP $\tilde{\mathcal{P}}$ with delayed reward is state-enriched, which ensures that it is an MDP.
The state-enriched MDP $\tilde{\mathcal{P}}$ has
• reward:
$\tilde R_t = \begin{cases} 0 , & \text{for } t \le T \\ \sum_{k=0}^{T} R_{k+1} , & \text{for } t = T+1 . \end{cases}$   (A45)
• state:
$\tilde s_t = (s_t, \rho_t)$ ,   (A46)
$\rho_t = \sum_{k=0}^{t-1} r_{k+1}$ , with $R_{k+1} = r_{k+1}$ .   (A47)
Here we assume that $\rho$ can only take a finite number of values to assure that the enriched states $\tilde s$ are
finite. If the original reward was continuous, then $\rho$ can represent the accumulated reward with any
desired precision if the sequence length is $T$ and the original reward was bounded. We assume that $\rho$
is sufficiently precise to distinguish the optimal policies, which are deterministic, from sub-optimal
deterministic policies. The random variable $R_{k+1}$ is distributed according to $p(r \mid s_k, a_k)$. We
assume that the time $t$ is coded in $s$ in order to know when the episode ends and reward is no longer
received; otherwise we introduce an additional state variable $\tau = t$ that codes the time.
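The transformation of Eqs. (A45)–(A47) on a single episode can be sketched as follows (a sketch; the episode format is an illustrative assumption, and for simplicity the delayed reward is placed at the last step of the reward list rather than at a separate time $T+1$):

```python
# Sketch of Eqs. (A45)-(A47): an immediate-reward episode becomes a
# delayed-reward episode whose states are enriched with the accumulated
# reward rho_t, so that the transformed process remains Markov.
def to_delayed(states, rewards):
    """states: [s_0, ..., s_T]; rewards[t] is R_{t+1} received after s_t."""
    enriched, rho = [], 0.0
    for s, r in zip(states, rewards):
        enriched.append((s, rho))     # tilde{s}_t = (s_t, rho_t), Eq. (A46)
        rho += r                      # rho_{t+1} = rho_t + r_{t+1}, Eq. (A47)
    delayed = [0.0] * len(rewards)
    delayed[-1] = sum(rewards)        # all reward moved to episode end, Eq. (A45)
    return enriched, delayed

enriched, delayed = to_delayed(["s0", "s1", "s2"], [1.0, 0.0, 2.0])
# enriched: [("s0", 0.0), ("s1", 1.0), ("s2", 1.0)]; delayed: [0.0, 0.0, 3.0]
```

The enrichment is what makes Proposition A8 below work: the accumulated reward $\rho_t$ carried in the state is exactly the offset between $\tilde q^{\tilde\pi}(\tilde s, a)$ and $q^\pi(s, a)$ in Eq. (A48).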
Proposition A8. If a Markov decision process $\mathcal{P}$ with immediate reward is transformed by the above
defined $\tilde R_t$ and $\tilde s_t$ to a Markov decision process $\tilde{\mathcal{P}}$ with delayed reward, where the reward is given at
sequence end, then: (I) the optimal policies do not change, and (II) for $\tilde\pi(a \mid \tilde s) = \pi(a \mid s)$
$\tilde q^{\tilde\pi}(\tilde s, a) = q^\pi(s, a) + \sum_{k=0}^{t-1} r_{k+1}$ ,   (A48)
for $\tilde S_t = \tilde s$, $S_t = s$, and $A_t = a$.
Proof. For (I) we first perform a state-enrichment of $\mathcal{P}$ by $\tilde s_t = (s_t, \rho_t)$ with $\rho_t = \sum_{k=0}^{t-1} r_{k+1}$ for
$R_{k+1} = r_{k+1}$, leading to an intermediate MDP. We assume that the finite-valued $\rho$ is sufficiently
precise to distinguish the optimal policies, which are deterministic, from sub-optimal deterministic
policies. Proposition A1 ensures that neither the optimal Q-values nor the optimal policies change
between the original MDP $\mathcal{P}$ and the intermediate MDP. Next, we redistribute the original reward
$R_{t+1}$ according to the redistributed reward $\tilde R_t$. The new MDP $\tilde{\mathcal{P}}$ with state enrichment and reward
redistribution is strictly return-equivalent to the intermediate MDP with state enrichment but the
original reward. The new MDP $\tilde{\mathcal{P}}$ is Markov since the enriched state ensures that $\tilde R_{T+1}$ is Markov.
Proposition A5 and Proposition A6 ensure that the optimal policies are the same.
For (II) we show a proof without the Bellman equation and a proof using the Bellman equation.
Equivalence without Bellman equation. We have $\tilde G_0 = G_0$. The Markov property ensures that the
future reward is independent of the already received reward:
$\mathrm{E}\left[\sum_{k=t}^{T} R_{k+1} \mid S_t = s, A_t = a, \rho = \sum_{k=0}^{t-1} r_{k+1}\right] = \mathrm{E}\left[\sum_{k=t}^{T} R_{k+1} \mid S_t = s, A_t = a\right]$ .   (A49)
We assume $\tilde\pi(a \mid \tilde s) = \pi(a \mid s)$.
We obtain
$\tilde q^{\tilde\pi}(\tilde s, a) = \mathrm{E}_{\tilde\pi}\big[\tilde G_0 \mid \tilde S_t = \tilde s, A_t = a\big]$
$= \mathrm{E}_{\tilde\pi}\left[\sum_{k=0}^{T} R_{k+1} \mid S_t = s, \rho = \sum_{k=0}^{t-1} r_{k+1}, A_t = a\right]$
$= \mathrm{E}_{\tilde\pi}\left[\sum_{k=t}^{T} R_{k+1} \mid S_t = s, \rho = \sum_{k=0}^{t-1} r_{k+1}, A_t = a\right] + \sum_{k=0}^{t-1} r_{k+1}$
$= \mathrm{E}_{\pi}\left[\sum_{k=t}^{T} R_{k+1} \mid S_t = s, A_t = a\right] + \sum_{k=0}^{t-1} r_{k+1}$
$= q^\pi(s, a) + \sum_{k=0}^{t-1} r_{k+1}$ .   (A50)
We used $\mathrm{E}_{\tilde\pi} = \mathrm{E}_{\pi}$, which is ensured since reward probabilities, transition probabilities, and the
probability of choosing an action by the policy correspond to each other in both settings.
Since the optimal policies do not change for reward-equivalent and state-enriched processes, we have
$\tilde q^{*}(\tilde s, a) = q^{*}(s, a) + \sum_{k=0}^{t-1} r_{k+1}$ .   (A51)
Equivalence with Bellman equation. With $q^\pi(s, a)$ as the action-value function for the original
Markov decision process, we define a new Markov decision process with action-value function $\tilde q^{\tilde\pi}$.
For $\tilde S_t = \tilde s$, $S_t = s$, and $A_t = a$ we have
$\tilde q^{\tilde\pi}(\tilde s, a) := q^\pi(s, a) + \sum_{k=0}^{t-1} r_{k+1}$ ,   (A52)
$\tilde\pi(a \mid \tilde s) := \pi(a \mid s)$ .   (A53)
Since $\tilde s' = (s', \rho')$, $\rho' = r + \rho$, and $\tilde r$ is constant, the values $\tilde S_{t+1} = \tilde s'$ and $\tilde R_{t+1} = \tilde r$ can be
computed from $R_{t+1} = r$, $\rho$, and $S_{t+1} = s'$. Therefore, we have
$\tilde p(\tilde s', \tilde r \mid s, \rho, a) = \tilde p(s', \rho', \tilde r \mid s, \rho, a) = p(s', r \mid s, a)$ .   (A54)
For $t < T$, we have $\tilde r = 0$ and $\rho' = r + \rho$, where we set $r = r_{t+1}$:
$$\begin{aligned}
\tilde q^{\tilde\pi}(\tilde s, a) &= q^{\pi}(s,a) + \sum_{k=0}^{t-1} r_{k+1} \\
&= \sum_{s',r} p(s',r \mid s,a)\left[r + \sum_{a'} \pi(a' \mid s')\, q^{\pi}(s',a')\right] + \sum_{k=0}^{t-1} r_{k+1} \\
&= \sum_{s',\rho'} \tilde p(s',\rho',\tilde r \mid s,\rho,a)\left[r + \sum_{a'} \pi(a' \mid s')\, q^{\pi}(s',a')\right] + \sum_{k=0}^{t-1} r_{k+1} \\
&= \sum_{\tilde s',\tilde r} \tilde p(\tilde s',\tilde r \mid \tilde s,a)\left[r + \sum_{a'} \pi(a' \mid s')\, q^{\pi}(s',a') + \sum_{k=0}^{t-1} r_{k+1}\right] \\
&= \sum_{\tilde s',\tilde r} \tilde p(\tilde s',\tilde r \mid \tilde s,a)\left[\tilde r + \sum_{a'} \pi(a' \mid s')\, q^{\pi}(s',a') + \sum_{k=0}^{t} r_{k+1}\right] \\
&= \sum_{\tilde s',\tilde r} \tilde p(\tilde s',\tilde r \mid \tilde s,a)\left[\tilde r + \sum_{a'} \tilde\pi(a' \mid \tilde s')\, \tilde q^{\tilde\pi}(\tilde s',a')\right].
\end{aligned} \tag{A55}$$
For $t = T$ we have $\tilde r = \sum_{k=0}^{T} r_{k+1} = \rho'$ and $q^{\pi}(s',a') = 0$ as well as $\tilde q^{\tilde\pi}(\tilde s',a') = 0$. Both $q^{\pi}$ and $\tilde q^{\tilde\pi}$ must be zero for $t > T$ since after time $t = T+1$ there is no more reward. We obtain for $t=T$ and $r = r_{T+1}$:
$$\begin{aligned}
\tilde q^{\tilde\pi}(\tilde s, a) &= q^{\pi}(s,a) + \sum_{k=0}^{T-1} r_{k+1} \\
&= \sum_{s',r} p(s',r \mid s,a)\left[r + \sum_{a'} \pi(a' \mid s')\, q^{\pi}(s',a')\right] + \sum_{k=0}^{T-1} r_{k+1} \\
&= \sum_{s',\rho',r} \tilde p(s',\rho' \mid s,\rho,a)\left[r + \sum_{a'} \pi(a' \mid s')\, q^{\pi}(s',a')\right] + \sum_{k=0}^{T-1} r_{k+1} \\
&= \sum_{s',\rho',r} \tilde p(s',\rho' \mid s,\rho,a)\left[\sum_{k=0}^{T} r_{k+1} + \sum_{a'} \pi(a' \mid s')\, q^{\pi}(s',a')\right] \\
&= \sum_{\tilde s',\rho'} \tilde p(\tilde s' \mid \tilde s,a)\left[\rho' + \sum_{a'} \pi(a' \mid s')\, q^{\pi}(s',a')\right] \\
&= \sum_{\tilde s',\rho'} \tilde p(\tilde s' \mid \tilde s,a)\left[\rho' + 0\right] \\
&= \sum_{\tilde s',\tilde r} \tilde p(\tilde s' \mid \tilde s,a)\left[\tilde r + \sum_{a'} \tilde\pi(a' \mid \tilde s')\, \tilde q^{\tilde\pi}(\tilde s',a')\right].
\end{aligned} \tag{A56}$$
Since $\tilde q^{\tilde\pi}(\tilde s, a)$ fulfills the Bellman equation, it is the action-value function for $\tilde\pi$.
A2.6 Transforming a Delayed Reward MDP into an Immediate Reward SDP
Next we consider the opposite direction, where the delayed reward MDP $\tilde{\mathcal{P}}$ is given and we want to find an immediate reward SDP $\mathcal{P}$ that is return-equivalent to $\tilde{\mathcal{P}}$. We assume an episodic reward for $\tilde{\mathcal{P}}$, that is, reward is only given at sequence end. The realization $\tilde r_{T+1}$ of the final reward, that is, the realization of the return, is redistributed to previous time steps. Instead of redistributing the realization $\tilde r_{T+1}$ of the random variable $\tilde R_{T+1}$, its expectation $\tilde r(s_T,a_T) = \mathbb{E}\big[\tilde R_{T+1} \mid s_T, a_T\big]$ can also be redistributed, since Q-value estimation considers only the mean. We used the Markov property
$$\begin{aligned}
\mathbb{E}\big[\tilde G_0 \mid s_0,a_0,\ldots,s_T,a_T\big] &= \mathbb{E}\left[\sum_{t=0}^{T} \tilde R_{t+1} \,\middle|\, s_0,a_0,\ldots,s_T,a_T\right] \\
&= \mathbb{E}\big[\tilde R_{T+1} \mid s_0,a_0,\ldots,s_T,a_T\big] = \mathbb{E}\big[\tilde R_{T+1} \mid s_T,a_T\big].
\end{aligned} \tag{A57}$$
Redistributing the expectation reduces the variance of estimators since the variance of the random
variable is already factored out.
We assume a delayed reward MDP $\tilde{\mathcal{P}}$ with reward
$$\tilde R_t = \begin{cases} 0, & \text{for } t \leqslant T, \\ \tilde R_{T+1}, & \text{for } t = T+1, \end{cases} \tag{A58}$$
where $\tilde R_t = 0$ means that the random variable $\tilde R_t$ is always zero. The expected reward at the last time step is
$$\tilde r(s_T,a_T) = \mathbb{E}\big[\tilde R_{T+1} \mid s_T, a_T\big], \tag{A59}$$
which is also the expected return. Given a state-action sequence $(s_0,a_0,\ldots,s_T,a_T)$, we want to redistribute either the realization $\tilde r_{T+1}$ of the random variable $\tilde R_{T+1}$ or its expectation $\tilde r(s_T,a_T)$.
A2.6.1 Optimal Reward Redistribution
The main goal in this paper is to derive an SDP via reward redistribution that has zero expected future
rewards. Consequently the SDP has no delayed rewards. To measure the amount of delayed rewards,
we define the expected sum of delayed rewards $\kappa(m, t-1)$.
Definition A10. For $1 \leqslant t \leqslant T$ and $0 \leqslant m \leqslant T-t$, the expected sum of delayed rewards at time $(t-1)$ in the interval $[t+1, t+m+1]$ is defined as
$$\kappa(m, t-1) = \mathbb{E}\left[\sum_{\tau=0}^{m} R_{t+1+\tau} \,\middle|\, s_{t-1}, a_{t-1}\right]. \tag{A60}$$
The Bellman equation for Q-values becomes
$$q^{\pi}(s_t,a_t) = r(s_t,a_t) + \kappa(T-t-1, t), \tag{A61}$$
where $\kappa(T-t-1, t)$ is the expected sum of future rewards until sequence end given $(s_t,a_t)$, that is, in the interval $[t+2, T+1]$. We aim to derive an MDP with $\kappa(T-t-1, t) = 0$, which gives $q^{\pi}(s_t,a_t) = r(s_t,a_t)$. In this case, learning the Q-values reduces to estimating the average immediate reward $r(s_t,a_t) = \mathbb{E}[R_{t+1} \mid s_t,a_t]$. Hence, the reinforcement learning task reduces to computing the mean, e.g. the arithmetic mean, for each state-action pair $(s_t,a_t)$. Next, we define an optimal reward redistribution.
Definition A11. A reward redistribution is optimal, if $\kappa(T-t-1, t) = 0$ for $0 \leqslant t \leqslant T-1$.
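The reduction described above, estimating $q^{\pi}(s_t,a_t) = r(s_t,a_t)$ as a running arithmetic mean per state-action pair, can be sketched as follows (a minimal tabular illustration; class and variable names are ours, not from the original):

```python
from collections import defaultdict

class MeanQEstimator:
    """With an optimal redistribution (kappa = 0), q(s, a) = r(s, a),
    so Q-value estimation reduces to a running arithmetic mean of the
    immediate redistributed reward per state-action pair."""

    def __init__(self):
        self.count = defaultdict(int)
        self.q = defaultdict(float)

    def update(self, s, a, r):
        self.count[(s, a)] += 1
        # incremental mean: q <- q + (r - q) / n
        self.q[(s, a)] += (r - self.q[(s, a)]) / self.count[(s, a)]

est = MeanQEstimator()
for r in [1.0, 3.0, 2.0]:
    est.update("s0", "a0", r)
```

Since $\kappa = 0$, no bootstrapping over future rewards is needed, so neither TD bias nor Monte Carlo variance from future rewards enters the estimate.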
The next theorem states that in general an MDP with an optimal reward redistribution does not exist, which is the reason why we consider SDPs in the following.
Theorem A3. In general, an optimal reward redistribution violates the assumption that the reward
distribution is Markov, therefore the Bellman equation does not hold.
Proof. We assume an MDP $\tilde{\mathcal{P}}$ with $\tilde r(s_T,a_T) \neq 0$ and which has policies that lead to different expected returns at time $t=0$. If all reward is given at time $t=0$, all policies have the same expected return at time $t=0$. This violates our assumption, therefore not all reward can be given at $t=0$. In vector and matrix notation the Bellman equation is
$$\boldsymbol{q}^{\pi}_t = \boldsymbol{r}_t + \boldsymbol{P}_{t \to t+1}\, \boldsymbol{q}^{\pi}_{t+1}, \tag{A62}$$
where $\boldsymbol{P}_{t \to t+1}$ is the row-stochastic matrix with $p(s_{t+1} \mid s_t,a_t)\, \pi(a_{t+1} \mid s_{t+1})$ at positions $((s_t,a_t),(s_{t+1},a_{t+1}))$. An optimal reward redistribution requires the expected future rewards to be zero:
$$\boldsymbol{P}_{t \to t+1}\, \boldsymbol{q}^{\pi}_{t+1} = \boldsymbol{0} \tag{A63}$$
and, since optimality requires $\boldsymbol{q}^{\pi}_{t+1} = \boldsymbol{r}_{t+1}$, we have
$$\boldsymbol{P}_{t \to t+1}\, \boldsymbol{r}_{t+1} = \boldsymbol{0}, \tag{A64}$$
where $\boldsymbol{r}_{t+1}$ is the vector with components $\tilde r(s_{t+1},a_{t+1})$. Since (i) the MDPs are return-equivalent, (ii) $\tilde r(s_T,a_T) \neq 0$, and (iii) not all reward is given at $t=0$, a $(t+1)$ exists with $\boldsymbol{r}_{t+1} \neq \boldsymbol{0}$. We can construct an MDP $\tilde{\mathcal{P}}$ which has (a) at least as many state-action pairs $(s_t,a_t)$ as pairs $(s_{t+1},a_{t+1})$ and (b) a transition matrix $\boldsymbol{P}_{t \to t+1}$ of full rank. Then $\boldsymbol{P}_{t \to t+1}\, \boldsymbol{r}_{t+1} = \boldsymbol{0}$ contradicts $\boldsymbol{r}_{t+1} \neq \boldsymbol{0}$ together with $\boldsymbol{P}_{t \to t+1}$ having full rank. Consequently, simultaneously ensuring Markov properties and ensuring zero future return is in general not possible.
For a particular $\pi$, the next theorem states that an optimal reward redistribution, that is $\kappa = 0$, is equivalent to a redistributed reward whose expectation is the difference of consecutive Q-values of the original delayed reward. The theorem states that an optimal reward redistribution exists, but we have to assume an SDP $\mathcal{P}$ that has a second order Markov reward redistribution.
Theorem A4. We assume a delayed reward MDP $\tilde{\mathcal{P}}$, where the accumulated reward is given at sequence end. A new SDP $\mathcal{P}$ is obtained by a second order Markov reward redistribution, which ensures that $\mathcal{P}$ is return-equivalent to $\tilde{\mathcal{P}}$. For a specific $\pi$, the following two statements are equivalent:
(I) $\kappa(T-t-1, t) = 0$, i.e. the reward redistribution is optimal,
(II) $\mathbb{E}[R_{t+1} \mid s_{t-1},a_{t-1},s_t,a_t] = \tilde q^{\pi}(s_t,a_t) - \tilde q^{\pi}(s_{t-1},a_{t-1})$. (A65)
Furthermore, an optimal reward redistribution fulfills for $1 \leqslant t \leqslant T$ and $0 \leqslant m \leqslant T-t$:
$$\kappa(m, t-1) = 0. \tag{A66}$$
Proof. PART (I): we assume that the reward redistribution is optimal, that is,
$$\kappa(T-t-1, t) = 0. \tag{A67}$$
The redistributed reward $R_{t+1}$ is second order Markov. We abbreviate the expected $R_{t+1}$ by $h_t$:
$$\mathbb{E}[R_{t+1} \mid s_{t-1},a_{t-1},s_t,a_t] = h_t. \tag{A68}$$
The assumptions of Lemma A3 hold for the delayed reward MDP $\tilde{\mathcal{P}}$ and the redistributed reward SDP $\mathcal{P}$. Therefore, for a given state-action sub-sequence $(s_0,a_0,\ldots,s_t,a_t)$, $0 \leqslant t \leqslant T$:
$$\mathbb{E}\big[\tilde G_0 \mid s_0,a_0,\ldots,s_t,a_t\big] = \mathbb{E}\big[G_0 \mid s_0,a_0,\ldots,s_t,a_t\big] \tag{A69}$$
with $G_0 = \sum_{\tau=0}^{T} R_{\tau+1}$ and $\tilde G_0 = \tilde R_{T+1}$. The Markov property of the MDP $\tilde{\mathcal{P}}$ ensures that the future reward from $t+1$ on is independent of the past sub-sequence $s_0,a_0,\ldots,s_{t-1},a_{t-1}$:
$$\mathbb{E}\left[\sum_{\tau=0}^{T-t} \tilde R_{t+1+\tau} \,\middle|\, s_t,a_t\right] = \mathbb{E}\left[\sum_{\tau=0}^{T-t} \tilde R_{t+1+\tau} \,\middle|\, s_0,a_0,\ldots,s_t,a_t\right]. \tag{A70}$$
The second order Markov property of the SDP $\mathcal{P}$ ensures that the future reward from $t+2$ on is independent of the past sub-sequence $s_0,a_0,\ldots,s_{t-1},a_{t-1}$:
$$\mathbb{E}\left[\sum_{\tau=0}^{T-t-1} R_{t+2+\tau} \,\middle|\, s_t,a_t\right] = \mathbb{E}\left[\sum_{\tau=0}^{T-t-1} R_{t+2+\tau} \,\middle|\, s_0,a_0,\ldots,s_t,a_t\right]. \tag{A71}$$
Using these properties we obtain
$$\begin{aligned}
\tilde q^{\pi}(s_t,a_t) &= \mathbb{E}\left[\sum_{\tau=0}^{T-t} \tilde R_{t+1+\tau} \,\middle|\, s_t,a_t\right] \\
&= \mathbb{E}\left[\sum_{\tau=0}^{T-t} \tilde R_{t+1+\tau} \,\middle|\, s_0,a_0,\ldots,s_t,a_t\right] \\
&= \mathbb{E}\big[\tilde R_{T+1} \mid s_0,a_0,\ldots,s_t,a_t\big] \\
&= \mathbb{E}\left[\sum_{\tau=0}^{T} \tilde R_{\tau+1} \,\middle|\, s_0,a_0,\ldots,s_t,a_t\right] \\
&= \mathbb{E}\big[\tilde G_0 \mid s_0,a_0,\ldots,s_t,a_t\big] \\
&= \mathbb{E}\big[G_0 \mid s_0,a_0,\ldots,s_t,a_t\big] \\
&= \mathbb{E}\left[\sum_{\tau=0}^{T} R_{\tau+1} \,\middle|\, s_0,a_0,\ldots,s_t,a_t\right] \\
&= \mathbb{E}\left[\sum_{\tau=0}^{T-t-1} R_{t+2+\tau} \,\middle|\, s_0,a_0,\ldots,s_t,a_t\right] + \sum_{\tau=0}^{t} h_\tau \\
&= \mathbb{E}\left[\sum_{\tau=0}^{T-t-1} R_{t+2+\tau} \,\middle|\, s_t,a_t\right] + \sum_{\tau=0}^{t} h_\tau \\
&= \kappa(T-t-1, t) + \sum_{\tau=0}^{t} h_\tau \\
&= \sum_{\tau=0}^{t} h_\tau.
\end{aligned} \tag{A72}$$
We used
$$\kappa(T-t-1, t) = \mathbb{E}\left[\sum_{\tau=0}^{T-t-1} R_{t+2+\tau} \,\middle|\, s_t,a_t\right] = 0. \tag{A73}$$
It follows that
$$\mathbb{E}[R_{t+1} \mid s_{t-1},a_{t-1},s_t,a_t] = h_t = \tilde q^{\pi}(s_t,a_t) - \tilde q^{\pi}(s_{t-1},a_{t-1}). \tag{A74}$$
PART (II): we assume that
$$\mathbb{E}[R_{t+1} \mid s_{t-1},a_{t-1},s_t,a_t] = h_t = \tilde q^{\pi}(s_t,a_t) - \tilde q^{\pi}(s_{t-1},a_{t-1}). \tag{A75}$$
The expectations $\mathbb{E}[\,\cdot \mid s_{t-1},a_{t-1}]$, like $\mathbb{E}\big[\tilde R_{T+1} \mid s_{t-1},a_{t-1}\big]$, are expectations over all episodes starting in $(s_{t-1},a_{t-1})$ and ending in some $(s_T,a_T)$.
First, we consider $m=0$ and $1 \leqslant t \leqslant T$, therefore $\kappa(0, t-1) = \mathbb{E}[R_{t+1} \mid s_{t-1},a_{t-1}]$. Since $\tilde r(s_{t-1},a_{t-1}) = 0$ for $1 \leqslant t \leqslant T$, we have
$$\begin{aligned}
\tilde q^{\pi}(s_{t-1},a_{t-1}) &= \tilde r(s_{t-1},a_{t-1}) + \sum_{s_t,a_t} p(s_t,a_t \mid s_{t-1},a_{t-1})\, \tilde q^{\pi}(s_t,a_t) \\
&= \sum_{s_t,a_t} p(s_t,a_t \mid s_{t-1},a_{t-1})\, \tilde q^{\pi}(s_t,a_t).
\end{aligned} \tag{A76}$$
Using this equation we obtain for $1 \leqslant t \leqslant T$:
$$\begin{aligned}
\kappa(0, t-1) &= \mathbb{E}_{s_t,a_t,R_{t+1}}[R_{t+1} \mid s_{t-1},a_{t-1}] \\
&= \mathbb{E}_{s_t,a_t}\big[\tilde q^{\pi}(s_t,a_t) - \tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_{t-1},a_{t-1}\big] \\
&= \sum_{s_t,a_t} p(s_t,a_t \mid s_{t-1},a_{t-1})\, \big(\tilde q^{\pi}(s_t,a_t) - \tilde q^{\pi}(s_{t-1},a_{t-1})\big) \\
&= \tilde q^{\pi}(s_{t-1},a_{t-1}) - \sum_{s_t,a_t} p(s_t,a_t \mid s_{t-1},a_{t-1})\, \tilde q^{\pi}(s_{t-1},a_{t-1}) \\
&= \tilde q^{\pi}(s_{t-1},a_{t-1}) - \tilde q^{\pi}(s_{t-1},a_{t-1}) = 0.
\end{aligned} \tag{A77}$$
Next, we consider the expectation of $\sum_{\tau=0}^{m} R_{t+1+\tau}$ for $1 \leqslant t \leqslant T$ and $1 \leqslant m \leqslant T-t$ (for $m > 0$):
$$\begin{aligned}
\kappa(m, t-1) &= \mathbb{E}\left[\sum_{\tau=0}^{m} R_{t+1+\tau} \,\middle|\, s_{t-1},a_{t-1}\right] \\
&= \mathbb{E}\left[\sum_{\tau=0}^{m} \big(\tilde q^{\pi}(s_{\tau+t},a_{\tau+t}) - \tilde q^{\pi}(s_{\tau+t-1},a_{\tau+t-1})\big) \,\middle|\, s_{t-1},a_{t-1}\right] \\
&= \mathbb{E}\big[\tilde q^{\pi}(s_{t+m},a_{t+m}) - \tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_{t-1},a_{t-1}\big] \\
&= \mathbb{E}\left[\mathbb{E}\left[\sum_{\tau=t+m}^{T} \tilde R_{\tau+1} \,\middle|\, s_{t+m},a_{t+m}\right] \,\middle|\, s_{t-1},a_{t-1}\right] - \mathbb{E}\left[\mathbb{E}\left[\sum_{\tau=t-1}^{T} \tilde R_{\tau+1} \,\middle|\, s_{t-1},a_{t-1}\right] \,\middle|\, s_{t-1},a_{t-1}\right] \\
&= \mathbb{E}\big[\tilde R_{T+1} \mid s_{t-1},a_{t-1}\big] - \mathbb{E}\big[\tilde R_{T+1} \mid s_{t-1},a_{t-1}\big] = 0.
\end{aligned} \tag{A78}$$
We used that $\tilde R_{t+1} = 0$ for $t < T$.
For $t = \tau+1$ and $m = T-t = T-\tau-1$ we have
$$\kappa(T-\tau-1, \tau) = 0, \tag{A79}$$
which characterizes an optimal reward redistribution.
Thus, an SDP with an optimal reward redistribution has expected future rewards that are zero. The equation $\kappa(T-t-1, t) = 0$ means that the new SDP $\mathcal{P}$ has no delayed rewards, as shown in the next corollary.
Corollary A1. An SDP with an optimal reward redistribution fulfills for $0 \leqslant \tau \leqslant T-t-1$
$$\mathbb{E}[R_{t+1+\tau} \mid s_{t-1},a_{t-1}] = 0. \tag{A80}$$
The SDP has no delayed rewards since no state-action pair can increase or decrease the expectation of a future reward.
Proof. For $\tau = 0$ we use $\kappa(m, t-1) = 0$ from Theorem A4 with $m=0$:
$$\mathbb{E}[R_{t+1} \mid s_{t-1},a_{t-1}] = \kappa(0, t-1) = 0. \tag{A81}$$
For $\tau > 0$, we also use $\kappa(m, t-1) = 0$ from Theorem A4:
$$\begin{aligned}
\mathbb{E}[R_{t+1+\tau} \mid s_{t-1},a_{t-1}] &= \mathbb{E}\left[\sum_{k=0}^{\tau} R_{t+1+k} - \sum_{k=0}^{\tau-1} R_{t+1+k} \,\middle|\, s_{t-1},a_{t-1}\right] \\
&= \mathbb{E}\left[\sum_{k=0}^{\tau} R_{t+1+k} \,\middle|\, s_{t-1},a_{t-1}\right] - \mathbb{E}\left[\sum_{k=0}^{\tau-1} R_{t+1+k} \,\middle|\, s_{t-1},a_{t-1}\right] \\
&= \kappa(\tau, t-1) - \kappa(\tau-1, t-1) = 0 - 0 = 0.
\end{aligned} \tag{A82}$$
A related approach is to ensure zero return by reward shaping if the exact value function is known
[114].
The next theorem states the major advantage of an optimal reward redistribution: $\tilde q^{\pi}(s_t,a_t)$ can be estimated with an offset that depends only on $s_t$ by estimating the expected immediate redistributed reward. Thus, Q-value estimation becomes trivial and the advantage function of the MDP $\tilde{\mathcal{P}}$ can be readily computed.
Theorem A5. If the reward redistribution is optimal, then the Q-values of the SDP $\mathcal{P}$ are given by
$$q^{\pi}(s_t,a_t) = r(s_t,a_t) = \tilde q^{\pi}(s_t,a_t) - \mathbb{E}_{s_{t-1},a_{t-1}}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t\big] = \tilde q^{\pi}(s_t,a_t) - \psi^{\pi}(s_t). \tag{A83}$$
The SDP $\mathcal{P}$ and the original MDP $\tilde{\mathcal{P}}$ have the same advantage function. Using a behavior policy $\breve\pi$, the expected immediate reward is
$$\mathbb{E}_{\breve\pi}[R_{t+1} \mid s_t,a_t] = \tilde q^{\pi}(s_t,a_t) - \psi^{\pi,\breve\pi}(s_t). \tag{A84}$$
Proof. The expected reward $r(s_t,a_t)$ is computed for $0 \leqslant t \leqslant T$, where $s_{-1},a_{-1}$ are states and actions introduced for formal reasons at the beginning of an episode. With $\tilde q^{\pi}(s_{-1},a_{-1}) = 0$, the expected reward $r(s_t,a_t)$ is
$$\begin{aligned}
r(s_t,a_t) &= \mathbb{E}_{r_{t+1}}[R_{t+1} \mid s_t,a_t] = \mathbb{E}_{s_{t-1},a_{t-1}}\big[\tilde q^{\pi}(s_t,a_t) - \tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t,a_t\big] \\
&= \tilde q^{\pi}(s_t,a_t) - \mathbb{E}_{s_{t-1},a_{t-1}}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t,a_t\big].
\end{aligned} \tag{A85}$$
The expectations $\mathbb{E}[\,\cdot \mid s_t,a_t]$, like $\mathbb{E}\big[\tilde R_{T+1} \mid s_t,a_t\big]$, are expectations over all episodes starting in $(s_t,a_t)$ and ending in some $(s_T,a_T)$.
The Q-values for the SDP $\mathcal{P}$ are defined for $0 \leqslant t \leqslant T$ as:
$$\begin{aligned}
q^{\pi}(s_t,a_t) &= \mathbb{E}\left[\sum_{\tau=0}^{T-t} R_{t+1+\tau} \,\middle|\, s_t,a_t\right] \\
&= \mathbb{E}\big[\tilde q^{\pi}(s_T,a_T) - \tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t,a_t\big] \\
&= \mathbb{E}\big[\tilde q^{\pi}(s_T,a_T) \mid s_t,a_t\big] - \mathbb{E}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t,a_t\big] \\
&= \tilde q^{\pi}(s_t,a_t) - \mathbb{E}_{s_{t-1},a_{t-1}}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t,a_t\big] \\
&= r(s_t,a_t).
\end{aligned} \tag{A86}$$
The second equality uses the telescope sum
$$\sum_{\tau=0}^{T-t} R_{t+1+\tau} = \sum_{\tau=0}^{T-t} \big(\tilde q^{\pi}(s_{t+\tau},a_{t+\tau}) - \tilde q^{\pi}(s_{t+\tau-1},a_{t+\tau-1})\big) = \tilde q^{\pi}(s_T,a_T) - \tilde q^{\pi}(s_{t-1},a_{t-1}). \tag{A87}$$
The posterior $p(s_{t-1},a_{t-1} \mid s_t,a_t)$ is
$$p(s_{t-1},a_{t-1} \mid s_t,a_t) = \frac{p(s_t,a_t \mid s_{t-1},a_{t-1})\, p(s_{t-1},a_{t-1})}{p(s_t,a_t)} = \frac{p(s_t \mid s_{t-1},a_{t-1})\, p(s_{t-1},a_{t-1})}{p(s_t)} = p(s_{t-1},a_{t-1} \mid s_t), \tag{A88}$$
where we used $p(s_t,a_t \mid s_{t-1},a_{t-1}) = \pi(a_t \mid s_t)\, p(s_t \mid s_{t-1},a_{t-1})$ and $p(s_t,a_t) = \pi(a_t \mid s_t)\, p(s_t)$. The posterior no longer contains $a_t$. We can express the mean of previous Q-values by the posterior $p(s_{t-1},a_{t-1} \mid s_t,a_t)$:
$$\mathbb{E}_{s_{t-1},a_{t-1}}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t,a_t\big] = \sum_{s_{t-1},a_{t-1}} p(s_{t-1},a_{t-1} \mid s_t,a_t)\, \tilde q^{\pi}(s_{t-1},a_{t-1}) = \sum_{s_{t-1},a_{t-1}} p(s_{t-1},a_{t-1} \mid s_t)\, \tilde q^{\pi}(s_{t-1},a_{t-1}) = \mathbb{E}_{s_{t-1},a_{t-1}}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t\big] = \psi^{\pi}(s_t), \tag{A89}$$
with
$$\psi^{\pi}(s_t) = \mathbb{E}_{s_{t-1},a_{t-1}}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t\big]. \tag{A90}$$
The SDP $\mathcal{P}$ and the MDP $\tilde{\mathcal{P}}$ have the same advantage function, since the value functions are the expected Q-values across the actions and follow the equation $v^{\pi}(s_t) = \tilde v^{\pi}(s_t) - \psi^{\pi}(s_t)$. Therefore $\psi^{\pi}(s_t)$ cancels in the advantage function of the SDP $\mathcal{P}$.
Using a behavior policy $\breve\pi$, the expected immediate reward is
$$\begin{aligned}
\mathbb{E}_{\breve\pi}[R_{t+1} \mid s_t,a_t] &= \mathbb{E}_{r_{t+1},\breve\pi}[R_{t+1} \mid s_t,a_t] = \mathbb{E}_{s_{t-1},a_{t-1},\breve\pi}\big[\tilde q^{\pi}(s_t,a_t) - \tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t,a_t\big] \\
&= \tilde q^{\pi}(s_t,a_t) - \mathbb{E}_{s_{t-1},a_{t-1},\breve\pi}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t,a_t\big].
\end{aligned} \tag{A91}$$
The posterior $p_{\breve\pi}(s_{t-1},a_{t-1} \mid s_t,a_t)$ is
$$p_{\breve\pi}(s_{t-1},a_{t-1} \mid s_t,a_t) = \frac{p_{\breve\pi}(s_t,a_t \mid s_{t-1},a_{t-1})\, p_{\breve\pi}(s_{t-1},a_{t-1})}{p_{\breve\pi}(s_t,a_t)} = \frac{p(s_t \mid s_{t-1},a_{t-1})\, p_{\breve\pi}(s_{t-1},a_{t-1})}{p_{\breve\pi}(s_t)} = p_{\breve\pi}(s_{t-1},a_{t-1} \mid s_t), \tag{A92}$$
where we used $p_{\breve\pi}(s_t,a_t \mid s_{t-1},a_{t-1}) = \breve\pi(a_t \mid s_t)\, p(s_t \mid s_{t-1},a_{t-1})$ and $p_{\breve\pi}(s_t,a_t) = \breve\pi(a_t \mid s_t)\, p_{\breve\pi}(s_t)$. The posterior no longer contains $a_t$. We can express the mean of previous Q-values by the posterior:
$$\mathbb{E}_{s_{t-1},a_{t-1},\breve\pi}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t,a_t\big] = \sum_{s_{t-1},a_{t-1}} p_{\breve\pi}(s_{t-1},a_{t-1} \mid s_t)\, \tilde q^{\pi}(s_{t-1},a_{t-1}) = \mathbb{E}_{s_{t-1},a_{t-1},\breve\pi}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t\big] = \psi^{\pi,\breve\pi}(s_t), \tag{A93}$$
with
$$\psi^{\pi,\breve\pi}(s_t) = \mathbb{E}_{s_{t-1},a_{t-1},\breve\pi}\big[\tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_t\big]. \tag{A94}$$
Therefore we have
$$\mathbb{E}_{\breve\pi}[R_{t+1} \mid s_t,a_t] = \tilde q^{\pi}(s_t,a_t) - \psi^{\pi,\breve\pi}(s_t). \tag{A95}$$
A2.7 Novel Learning Algorithms based on Reward Redistributions
We assume $\gamma = 1$ and a finite horizon or an absorbing-state original MDP $\tilde{\mathcal{P}}$ with delayed reward.
According to Theorem A5, $\tilde q^{\pi}(s_t,a_t)$ can be estimated with an offset that depends only on $s_t$ by estimating the expected immediate redistributed reward. Thus, Q-value estimation becomes trivial and the advantage function of the MDP $\tilde{\mathcal{P}}$ can be readily computed. All reinforcement learning methods, like policy gradients, that use $\arg\max_{a_t} \tilde q^{\pi}(s_t,a_t)$ or the advantage function $\tilde q^{\pi}(s_t,a_t) - \mathbb{E}_{a_t}\tilde q^{\pi}(s_t,a_t)$ of the original MDP $\tilde{\mathcal{P}}$ can be used. These methods rely on Theorem A5 and either estimate $q^{\pi}(s_t,a_t)$ according to Eq. (A83) or the expected immediate reward according to Eq. (A84). Both approaches estimate $\tilde q^{\pi}(s_t,a_t)$ with an offset that depends only on $s_t$ (either $\psi^{\pi}(s_t)$ or $\psi^{\pi,\breve\pi}(s_t)$). Behavior policies like "greedy in the limit with infinite exploration"
(GLIE) or “restricted rank-based randomized” (RRR) allow to prove convergence of SARSA [ 118].
These policies can be used with reward redistribution. GLIE policies can be realized by a softmax with exploration coefficient on the Q-values, therefore $\psi^{\pi}(s_t)$ or $\psi^{\pi,\breve\pi}(s_t)$ cancels. RRR policies select actions probabilistically according to the ranks of their Q-values, where the greedy action has the highest probability. Therefore $\psi^{\pi}(s_t)$ or $\psi^{\pi,\breve\pi}(s_t)$ is not required. For function approximation,
convergence of the Q-value estimation together with reward redistribution and GLIE or RRR policies
can under standard assumptions be proven by the stochastic approximation theory for two time-scale
update rules [ 17,64]. Proofs for convergence to an optimal policy are in general difficult, since
locally stable attractors may not correspond to optimal policies.
Reward redistribution can be used for
• (A)Q-value estimation,
• (B) policy gradients, and
• (C)Q-learning.
A2.7.1 Q-Value Estimation
Like SARSA, RUDDER learning continually predicts Q-values to improve the policy. Type (A)
methods estimate Q-values and are divided into variants (i), (ii), and (iii). Variant (i) assumes an optimal reward redistribution and estimates $\tilde q^{\pi}(s_t,a_t)$ with an offset depending only on $s_t$. The estimates are based on Theorem A5, either by on-policy direct Q-value estimation according to Eq. (A83) or by off-policy immediate reward estimation according to Eq. (A84). Variant (ii) methods assume a non-optimal reward redistribution and correct Eq. (A83) by estimating $\kappa$. Variant (iii) methods use eligibility traces for the redistributed reward.
Variant (i): Estimation of $\tilde q^{\pi}(s_t,a_t)$ with an offset assuming optimality. Theorem A5 justifies the estimation of $\tilde q^{\pi}(s_t,a_t)$ with an offset by on-policy direct Q-value estimation via Eq. (A83) or by off-policy immediate reward estimation via Eq. (A84). RUDDER learning can be based on policies like "greedy in the limit with infinite exploration" (GLIE) or "restricted rank-based randomized" (RRR) [118]. GLIE policies change toward greediness with respect to the Q-values during learning.
Variant (ii): TD-learning of $\kappa$ and correction of the redistributed reward. For non-optimal reward redistributions, $\kappa(T-t-1, t)$ can be estimated to correct the Q-values.
TD-learning of $\kappa$. The expected sum of delayed rewards $\kappa(T-t-1, t)$ can be formulated as
$$\begin{aligned}
\kappa(T-t-1, t) &= \mathbb{E}\left[\sum_{\tau=0}^{T-t-1} R_{t+2+\tau} \,\middle|\, s_t,a_t\right] \\
&= \mathbb{E}\left[R_{t+2} + \sum_{\tau=0}^{T-(t+1)-1} R_{(t+1)+2+\tau} \,\middle|\, s_t,a_t\right] \\
&= \mathbb{E}_{s_{t+1},a_{t+1},r_{t+2}}\left[R_{t+2} + \mathbb{E}\left[\sum_{\tau=0}^{T-(t+1)-1} R_{(t+1)+2+\tau} \,\middle|\, s_{t+1},a_{t+1}\right] \,\middle|\, s_t,a_t\right] \\
&= \mathbb{E}_{s_{t+1},a_{t+1},r_{t+2}}\big[R_{t+2} + \kappa(T-t-2, t+1) \mid s_t,a_t\big].
\end{aligned} \tag{A96}$$
Therefore, $\kappa(T-t-1, t)$ can be estimated by $R_{t+2}$ and $\kappa(T-t-2, t+1)$, if the last two are drawn together, i.e. considered as pairs. Otherwise the expectations of $R_{t+2}$ and $\kappa(T-t-2, t+1)$ given $(s_t,a_t)$ must be estimated. We can use TD-learning if the immediate reward and the sum of delayed rewards are drawn as pairs, that is, simultaneously. The TD-error becomes
$$\delta_{\kappa}(T-t-1, t) = R_{t+2} + \kappa(T-t-2, t+1) - \kappa(T-t-1, t). \tag{A97}$$
We now define eligibility traces for $\kappa$. Let the $n$-step return samples of $\kappa$ for $1 \leqslant n \leqslant T-t$ be
$$\begin{aligned}
\kappa^{(1)}(T-t-1, t) &= R_{t+2} + \kappa(T-t-2, t+1), \\
\kappa^{(2)}(T-t-1, t) &= R_{t+2} + R_{t+3} + \kappa(T-t-3, t+2), \\
&\;\;\vdots \\
\kappa^{(n)}(T-t-1, t) &= R_{t+2} + R_{t+3} + \ldots + R_{t+n+1} + \kappa(T-t-n-1, t+n).
\end{aligned} \tag{A98}$$
The $\lambda$-return for $\kappa$ is
$$\kappa^{(\lambda)}(T-t-1, t) = (1-\lambda) \sum_{n=1}^{T-t-1} \lambda^{n-1}\, \kappa^{(n)}(T-t-1, t) + \lambda^{T-t-1}\, \kappa^{(T-t)}(T-t-1, t). \tag{A99}$$
We obtain
$$\begin{aligned}
\kappa^{(\lambda)}(T-t-1, t) &= R_{t+2} + \kappa(T-t-2, t+1) \\
&\quad + \lambda \big(R_{t+3} + \kappa(T-t-3, t+2) - \kappa(T-t-2, t+1)\big) \\
&\quad + \lambda^2 \big(R_{t+4} + \kappa(T-t-4, t+3) - \kappa(T-t-3, t+2)\big) \\
&\quad \;\;\vdots \\
&\quad + \lambda^{T-1-t} \big(R_{T+1} + \kappa(0, T-1) - \kappa(1, T-2)\big).
\end{aligned} \tag{A100}$$
We can reformulate this as
$$\kappa^{(\lambda)}(T-t-1, t) = \kappa(T-t-1, t) + \sum_{n=0}^{T-t-1} \lambda^{n}\, \delta_{\kappa}(T-t-n-1, t+n). \tag{A101}$$
The $\lambda$-error $\Delta_{\kappa}$ is
$$\Delta_{\kappa}(T-t-1, t) = \kappa^{(\lambda)}(T-t-1, t) - \kappa(T-t-1, t) = \sum_{n=0}^{T-t-1} \lambda^{n}\, \delta_{\kappa}(T-t-n-1, t+n). \tag{A102}$$
The derivative of
$$1/2\, \Delta_{\kappa}(T-t-1, t)^2 = 1/2 \left(\kappa^{(\lambda)}(T-t-1, t) - \kappa(T-t-1, t; \boldsymbol{w})\right)^2 \tag{A103}$$
with respect to $\boldsymbol{w}$ is
$$-\left(\kappa^{(\lambda)}(T-t-1, t) - \kappa(T-t-1, t; \boldsymbol{w})\right) \nabla_{\boldsymbol{w}} \kappa(T-t-1, t; \boldsymbol{w}) = -\sum_{n=0}^{T-t-1} \lambda^{n}\, \delta_{\kappa}(T-t-n-1, t+n)\, \nabla_{\boldsymbol{w}} \kappa(T-t-1, t; \boldsymbol{w}). \tag{A104}$$
The full gradient of the sum of errors is
$$\begin{aligned}
1/2\, \nabla_{\boldsymbol{w}} \sum_{t=0}^{T-1} \Delta_{\kappa}(T-t-1, t)^2 &= -\sum_{t=0}^{T-1} \sum_{n=0}^{T-t-1} \lambda^{n}\, \delta_{\kappa}(T-t-n-1, t+n)\, \nabla_{\boldsymbol{w}} \kappa(T-t-1, t; \boldsymbol{w}) \\
&= -\sum_{t=0}^{T-1} \sum_{\tau=t}^{T-1} \lambda^{\tau-t}\, \delta_{\kappa}(T-\tau-1, \tau)\, \nabla_{\boldsymbol{w}} \kappa(T-t-1, t; \boldsymbol{w}) \\
&= -\sum_{\tau=0}^{T-1} \delta_{\kappa}(T-\tau-1, \tau) \sum_{t=0}^{\tau} \lambda^{\tau-t}\, \nabla_{\boldsymbol{w}} \kappa(T-t-1, t; \boldsymbol{w}).
\end{aligned} \tag{A105}$$
We set $n = \tau - t$, so that $n=0$ becomes $\tau = t$ and $n = T-t-1$ becomes $\tau = T-1$. The recursion
$$f(t) = \lambda f(t-1) + a_t, \qquad f(0) = 0 \tag{A106}$$
can be written as
$$f(T) = \sum_{t=1}^{T} \lambda^{T-t} a_t. \tag{A107}$$
Therefore, we can use the following update rule for minimizing $\sum_{t=0}^{T-1} \Delta_{\kappa}(T-t, t)^2$ with respect to $\boldsymbol{w}$, with $1 \leqslant \tau \leqslant T-1$:
$$\boldsymbol{z}_{-1} = \boldsymbol{0}, \tag{A108}$$
$$\boldsymbol{z}_{\tau} = \lambda \boldsymbol{z}_{\tau-1} + \nabla_{\boldsymbol{w}} \kappa(T-\tau, \tau; \boldsymbol{w}), \tag{A109}$$
$$\delta_{\kappa}(T-\tau, \tau) = R_{\tau+2} + \kappa(T-\tau-1, \tau+1; \boldsymbol{w}) - \kappa(T-\tau, \tau; \boldsymbol{w}), \tag{A110}$$
$$\boldsymbol{w}^{\mathrm{new}} = \boldsymbol{w} + \alpha\, \delta_{\kappa}(T-\tau, \tau)\, \boldsymbol{z}_{\tau}. \tag{A111}$$
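The update rule in Eqs. (A108)–(A111) can be sketched with a linear approximator $\kappa(\cdot\,; \boldsymbol{w}) = \boldsymbol{w}^\top \boldsymbol{\phi}$, for which the gradient is simply the feature vector. The feature map and all names below are illustrative assumptions, not from the original:

```python
import numpy as np

def kappa_lambda_update(w, feats, rewards, lam=0.9, alpha=0.1):
    """One episode of the eligibility-trace update for kappa.

    feats[tau]  : feature vector phi_tau of a linear model
                  kappa(., tau; w) = w @ feats[tau]
    rewards[tau]: redistributed reward R_{tau+2} observed at step tau
    """
    z = np.zeros_like(w)                    # eligibility trace z_{-1} = 0
    for tau in range(len(feats) - 1):
        z = lam * z + feats[tau]            # z_tau = lam * z_{tau-1} + grad_w kappa
        # TD error: R_{tau+2} + kappa(tau+1; w) - kappa(tau; w)
        delta = rewards[tau] + w @ feats[tau + 1] - w @ feats[tau]
        w = w + alpha * delta * z           # w_new = w + alpha * delta * z_tau
    return w
```

The trace accumulates discounted gradients exactly as the recursion (A106)–(A107) prescribes, so each TD error updates all earlier predictions.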
Correction of the reward redistribution. For correcting the redistributed reward, we apply a method similar to reward shaping or look-back advice. This method ensures that the corrected redistributed reward leads to an SDP that has the same return per sequence as the SDP $\mathcal{P}$. The reward correction is
$$F(s_t,a_t,s_{t-1},a_{t-1}) = \kappa(m, t) - \kappa(m, t-1), \tag{A112}$$
and we define the corrected redistributed reward as
$$R^{c}_{t+1} = R_{t+1} + F(s_t,a_t,s_{t-1},a_{t-1}) = R_{t+1} + \kappa(m, t) - \kappa(m, t-1). \tag{A113}$$
We assume that $\kappa(m, -1) = \kappa(m, T+1) = 0$, therefore
$$\sum_{t=0}^{T+1} F(s_t,a_t,s_{t-1},a_{t-1}) = \sum_{t=0}^{T+1} \big(\kappa(m, t) - \kappa(m, t-1)\big) = \kappa(m, T+1) - \kappa(m, -1) = 0. \tag{A114}$$
Consequently, the corrected redistributed reward $R^{c}_{t+1}$ does not change the expected return for a sequence; therefore, the resulting SDP has the same optimal policies as the SDP without correction.
For a predictive reward of $\rho$ at time $t=k$, which can be predicted from time $t=l<k$ to time $t=k-1$, we have:
$$\kappa(m, t) = \begin{cases} 0, & \text{for } t < l, \\ \rho, & \text{for } l \leqslant t < k, \\ 0, & \text{for } t \geqslant k. \end{cases} \tag{A115}$$
The reward correction is
$$F(s_t,a_t,s_{t-1},a_{t-1}) = \begin{cases} 0, & \text{for } t < l, \\ \rho, & \text{for } t = l, \\ 0, & \text{for } l < t < k, \\ -\rho, & \text{for } t = k, \\ 0, & \text{for } t > k. \end{cases} \tag{A116}$$
Using $\kappa$ as auxiliary task in predicting the return for return decomposition. A $\kappa$ prediction can serve as additional output of the function $g$ that predicts the return and is the basis of the return decomposition. Even a partial prediction of $\kappa$ means that the reward can be distributed further back. If $g$ can partly predict $\kappa$, then $g$ has all information to predict the return earlier in the sequence. If the return is predicted earlier, then the reward will be distributed further back. Consequently, the reward redistribution comes closer to an optimal reward redistribution. However, at the same time, $\kappa$ can no longer be predicted. The function $g$ must find another $\kappa$ that can be predicted. If no such $\kappa$ is found, then an optimal reward redistribution is indicated.
Variant (iii): Eligibility traces assuming optimality. We can use eligibility traces to further distribute the reward back. For an optimal reward redistribution, we have $\mathbb{E}_{s_{t+1}}[V(s_{t+1})] = 0$. The new returns $\mathcal{R}_t$ are given by the recursion
$$\mathcal{R}_t = r_{t+1} + \lambda\, \mathcal{R}_{t+1}, \tag{A117}$$
$$\mathcal{R}_{T+2} = 0. \tag{A118}$$
The expected policy gradient updates with the new returns $\mathcal{R}$ are $\mathbb{E}\big[\nabla_\theta \log \pi(a_t \mid s_t; \theta)\, \mathcal{R}_t\big]$. To avoid an estimation of the value function $V(s_{t+1})$, we assume optimality, which might not be valid. However, the error should be small if the return decomposition works well. Instead of estimating a value function, we can use a correction as shown in the next paragraph.
A2.7.2 Policy Gradients
Type (B) methods are policy gradients. In the expected updates $\mathbb{E}\big[\nabla_\theta \log \pi(a \mid s; \theta)\, q^{\pi}(s,a)\big]$ of policy gradients, the value $q^{\pi}(s,a)$ is replaced by an estimate of $r(s,a)$ or by samples of the redistributed reward. Convergence to optimal policies is guaranteed even with the offset $\psi^{\pi}(s)$ in Eq. (A83), similar to baseline normalization for policy gradients. With baseline normalization, the baseline $b(s) = \mathbb{E}_a[r(s,a)] = \sum_a \pi(a \mid s)\, r(s,a)$ is subtracted from $r(s,a)$, which gives the policy gradient $\mathbb{E}\big[\nabla_\theta \log \pi(a \mid s; \theta)\, (r(s,a) - b(s))\big]$. With eligibility traces using $\lambda \in [0,1]$ for $G^{\lambda}_t$ [128], we have the new returns $G_t = r_t + \lambda G_{t+1}$ with $G_{T+2} = 0$. The expected updates with the new returns $G$ are $\mathbb{E}\big[\nabla_\theta \log \pi(a_t \mid s_t; \theta)\, G_t\big]$.
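A REINFORCE-style update that scores each action with its redistributed reward rather than the full delayed return can be sketched as follows (a tabular softmax policy; all names are ours, not a prescription of the original method):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pg_update(theta, episode, alpha=0.01):
    """theta[s] holds logits over actions; episode is a list of
    (state, action, redistributed_reward) triples."""
    for s, a, r in episode:
        probs = softmax(theta[s])
        grad_log = -probs            # d log pi(a|s) / d logits
        grad_log[a] += 1.0
        theta[s] += alpha * r * grad_log
    return theta
```

Because the offset $\psi^{\pi}(s)$ depends only on the state, it acts like a baseline and does not bias the gradient direction.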
A2.7.3 Q-Learning
The type (C) method is Q-learning with the redistributed reward. Here, Q-learning is justified if
immediate and future reward are drawn together, as typically done. Also other temporal difference
methods are justified when immediate and future reward are drawn together.
A2.8 Return Decomposition to construct a Reward Redistribution
We now propose methods to construct reward redistributions which ideally would be optimal. Learning with non-optimal reward redistributions does work, since the optimal policies do not change according to Theorem A2. However, reward redistributions that are optimal considerably speed up learning, since future expected rewards introduce biases in TD methods and high variance in MC methods. The expected optimal redistributed reward is, according to Eq. (A65), the difference of Q-values. The more a reward redistribution deviates from these differences, the larger the absolute $\kappa$-values are and, in turn, the less optimal the reward redistribution is. Consequently, we aim at identifying the largest Q-value differences to construct a reward redistribution which is close to optimal. Assume a grid world where you have to take a key to later open a door to a treasure room. Taking the key increases the chances to receive the treasure and, therefore, is associated with a large positive Q-value difference. Smaller positive Q-value differences are steps toward the key location.
Reinforcement Learning as Pattern Recognition. We want to transform the reinforcement learn-
ing problem into a pattern recognition problem to employ deep learning approaches. The sum of the
Q-value differences gives the difference between expected return at sequence begin and the expected
return at sequence end (telescope sum). Thus, Q-value differences allow to predict the expected
return of the whole state-action sequence. Identifying the largest Q-value differences reduces the prediction error most. Q-value differences are assumed to be associated with patterns in state-action
transitions like taking the key in our example. The largest Q-value differences are expected to be
found more frequently in sequences with very large or very low return. The resulting task is to predict
the expected return from the whole sequence and identify which state-action transitions contributed
most to the prediction. This pattern recognition task is utilized to construct a reward redistribution,
where redistributed reward corresponds to the contribution.
A2.8.1 Return Decomposition Idea
The return decomposition idea is to predict the realization of the return or its expectation by a function $g$ from the state-action sequence
$$(s,a)_{0:T} := (s_0,a_0,s_1,a_1,\ldots,s_T,a_T). \tag{A119}$$
The return is the accumulated reward along the whole sequence $(s,a)_{0:T}$. The function $g$ depends on the policy $\pi$ that is used to generate the state-action sequences. Subsequently, the prediction or the realization of the return is distributed over the sequence with the help of $g$. One important advantage of a deterministic function $g$ is that, with proper loss functions and if it is perfect, it predicts the expected return. Therefore, it removes the sampling variance of returns. In particular, the variance of probabilistic rewards is averaged out. Even an imperfect function $g$ removes variance since it is deterministic. As described later, the sampling variance may be reintroduced when strictly return-equivalent SDPs are ensured. We want to determine for each sequence element its contribution to the
prediction of the function g. Contribution analysis computes the contribution of each state-action pair
to the prediction, that is, the information of each state-action pair about the prediction. In principle,
we can use any contribution analysis method. However, we prefer three methods: (A) differences in predictions, provided we can ensure that $g$ predicts the sequence-wide return at every time step: the difference of two consecutive predictions is a measure of the contribution of the current state-action pair to the return prediction, and this difference is the redistributed reward.
(B) Integrated gradients (IG) [ 125]. (C) Layer-wise relevance propagation (LRP) [ 3]. The methods
(B) and (C) use information later in the sequence for determining the contribution of the current
state-action pair. Therefore, they introduce a non-Markov reward. However, the non-Markov reward
can be viewed as probabilistic reward. Since probabilistic reward increases the variance, we prefer
method (A).
Explaining Away Problem. We still have to tackle the problem that reward-causing actions do not receive redistributed rewards since they are explained away by later states. To describe the problem, assume an MDP $\tilde{\mathcal{P}}$ with the only reward at sequence end. To ensure the Markov property, states in $\tilde{\mathcal{P}}$ have to store the reward contributions of previous state-actions; e.g. $s_T$ has to store all previous contributions such that the expectation $\tilde r(s_T,a_T)$ is Markov. The explaining away problem is that later states are used for return prediction, while reward-causing earlier actions are missed. To avoid explaining away, we use the difference $\Delta(s_{t-1},a_{t-1},s_t,a_t)$ between the state-action pair $(s_t,a_t)$ and its predecessor $(s_{t-1},a_{t-1})$, where $(s_{-1},a_{-1})$ are introduced for starting an episode. The sequence of differences is defined as
$$\Delta_{0:T} := \big(\Delta(s_{-1},a_{-1},s_0,a_0), \ldots, \Delta(s_{T-1},a_{T-1},s_T,a_T)\big). \tag{A120}$$
We assume that the differences $\Delta$ are mutually independent [60]:
$$p\big(\Delta(s_{t-1},a_{t-1},s_t,a_t) \mid \Delta(s_{-1},a_{-1},s_0,a_0), \ldots, \Delta(s_{t-2},a_{t-2},s_{t-1},a_{t-1}), \Delta(s_t,a_t,s_{t+1},a_{t+1}), \ldots, \Delta(s_{T-1},a_{T-1},s_T,a_T)\big) = p\big(\Delta(s_{t-1},a_{t-1},s_t,a_t)\big). \tag{A121}$$
The function $g$ predicts the realization of the sequence-wide return or its expectation from the sequence $\Delta_{0:T}$:
$$g\big(\Delta_{0:T}\big) = \mathbb{E}\big[\tilde R_{T+1} \mid s_T, a_T\big] = \tilde r_{T+1}. \tag{A122}$$
Return decomposition deconstructs $g$ into contributions $h_t = h(\Delta(s_{t-1},a_{t-1},s_t,a_t))$ at time $t$:
$$g\big(\Delta_{0:T}\big) = \sum_{t=0}^{T} h\big(\Delta(s_{t-1},a_{t-1},s_t,a_t)\big) = \tilde r_{T+1}. \tag{A123}$$
If we can assume that $g$ can predict the return at every time step:
$$g\big(\Delta_{0:t}\big) = \mathbb{E}\big[\tilde R_{T+1} \mid s_t, a_t\big], \tag{A124}$$
then we use the contribution analysis method "differences of return predictions", where the contributions are defined as:
$$h_0 = h\big(\Delta(s_{-1},a_{-1},s_0,a_0)\big) := g\big(\Delta_{0:0}\big), \tag{A125}$$
$$h_t = h\big(\Delta(s_{t-1},a_{t-1},s_t,a_t)\big) := g\big(\Delta_{0:t}\big) - g\big(\Delta_{0:(t-1)}\big). \tag{A126}$$
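Contribution method (A) then amounts to differencing the per-step return predictions; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def redistribute_reward(g_preds):
    """Turn per-time-step return predictions g(Delta_{0:t}) into
    redistributed rewards h_t = g(Delta_{0:t}) - g(Delta_{0:t-1}),
    with h_0 = g(Delta_{0:0})."""
    g = np.asarray(g_preds, dtype=float)
    h = np.empty_like(g)
    h[0] = g[0]
    h[1:] = g[1:] - g[:-1]
    return h
```

The contributions telescope: their sum equals the final prediction $g(\Delta_{0:T})$, which is the predicted sequence-wide return.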
We assume that the sequence-wide return cannot be predicted from the last state. The reason is
that either immediate rewards are given only at sequence end without storing them in the states or
information is removed from the states. Therefore, a relevant event for predicting the final reward
must be identified by the function g. The prediction errors at the end of the episode become, in
general, smaller since the future is less random. Therefore, prediction errors later in the episode are
up-weighted, while early predictions ensure that information is captured in $h_t$ for later use. The prediction at time $T$ has the largest weight and relies on information from the past.
If $g$ does predict the return at every time step, contribution analysis decomposes $g$. For decomposing a linear $g$ one can use the Taylor decomposition (a linear approximation) of $g$ with respect to the $h$ [3, 83]. A non-linear $g$ can be decomposed by layer-wise relevance propagation (LRP) [3, 84] or integrated gradients (IG) [125].
A2.8.2 Reward Redistribution based on Return Decomposition
We assume a return decomposition
$$g\big(\Delta_{0:T}\big) = \sum_{t=0}^{T} h_t, \tag{A127}$$
with
$$h_0 = h\big(\Delta(s_{-1},a_{-1},s_0,a_0)\big), \tag{A128}$$
$$h_t = h\big(\Delta(s_{t-1},a_{t-1},s_t,a_t)\big) \quad \text{for } 0 < t \leqslant T. \tag{A129}$$
We use these contributions for redistributing the reward. The reward redistribution is given by the random variable $R_{t+1}$ for the reward at time $t+1$. These new redistributed rewards $R_{t+1}$ must have the contributions $h_t$ as mean:
$$\mathbb{E}[R_{t+1} \mid s_{t-1},a_{t-1},s_t,a_t] = h_t. \tag{A130}$$
The reward $\tilde R_{T+1}$ of $\tilde{\mathcal{P}}$ is probabilistic and the function $g$ might not be perfect, therefore neither $g(\Delta_{0:T}) = \tilde r_{T+1}$ for the return realization $\tilde r_{T+1}$ nor $g(\Delta_{0:T}) = \tilde r(s_T,a_T)$ for the expected return holds. To ensure strictly return-equivalent SDPs, we have to compensate for both a probabilistic reward $\tilde R_{T+1}$ and an imperfect function $g$. The compensation is given by
$$\tilde r_{T+1} - \sum_{\tau=0}^{T} h_\tau. \tag{A131}$$
We compensate with an extra reward $R_{T+2}$ at time $T+2$, which is given immediately after $R_{T+1}$ at time $T+1$ after the state-action pair $(s_T,a_T)$. The new redistributed rewards $R_{t+1}$ are
$$\mathbb{E}[R_1 \mid s_0,a_0] = h_0, \tag{A132}$$
$$\mathbb{E}[R_{t+1} \mid s_{t-1},a_{t-1},s_t,a_t] = h_t \quad \text{for } 0 < t \leqslant T, \tag{A133}$$
$$R_{T+2} = \tilde R_{T+1} - \sum_{t=0}^{T} h_t, \tag{A134}$$
where the realization $\tilde r_{T+1}$ is replaced by its random variable $\tilde R_{T+1}$. If the prediction of $g$ is perfect, then we can set $R_{T+2} = 0$ and redistribute the expected return, which is the predicted return. $R_{T+2}$ compensates for both a probabilistic reward $\tilde R_{T+1}$ and an imperfect function $g$. Consequently, all variance of sampling the return is moved to $R_{T+2}$. Only the imperfect function $g$ must be corrected while the variance does not matter. However, we cannot distinguish, e.g. in early learning phases, between errors of $g$ and random reward. A perfect $g$ results in an optimal reward redistribution.
The next theorem shows that Theorem A4 also holds for the correction $R_{T+2}$.
Theorem A6. The optimality conditions hold also for reward redistributions with corrections:
$$\kappa(T-t+1, t-1) = 0. \tag{A135}$$
Proof. We consider the expectation of $\kappa(T-t+1, t-1) = \sum_{\tau=0}^{T-t+1} R_{t+1+\tau}$, that is $\kappa(m, t-1)$ with $m = T-t+1$:
$$\begin{aligned}
&\mathbb{E}\left[\sum_{\tau=0}^{T-t+1} R_{t+1+\tau} \,\middle|\, s_{t-1},a_{t-1}\right] \\
&= \mathbb{E}\left[\tilde R_{T+1} - \tilde q^{\pi}(s_T,a_T) + \sum_{\tau=0}^{T-t} \big(\tilde q^{\pi}(s_{\tau+t},a_{\tau+t}) - \tilde q^{\pi}(s_{\tau+t-1},a_{\tau+t-1})\big) \,\middle|\, s_{t-1},a_{t-1}\right] \\
&= \mathbb{E}\big[\tilde R_{T+1} - \tilde q^{\pi}(s_{t-1},a_{t-1}) \mid s_{t-1},a_{t-1}\big] \\
&= \mathbb{E}\big[\tilde R_{T+1} \mid s_{t-1},a_{t-1}\big] - \mathbb{E}\left[\mathbb{E}\left[\sum_{\tau=t-1}^{T} \tilde R_{\tau+1} \,\middle|\, s_{t-1},a_{t-1}\right] \,\middle|\, s_{t-1},a_{t-1}\right] \\
&= \mathbb{E}\big[\tilde R_{T+1} \mid s_{t-1},a_{t-1}\big] - \mathbb{E}\big[\tilde R_{T+1} \mid s_{t-1},a_{t-1}\big] = 0.
\end{aligned} \tag{A136}$$
If we substitute $t-1$ by $t$ ($t$ one step further and $m$ one step smaller), it follows that
$$\kappa(T-t, t) = 0. \tag{A137}$$
Next, we consider the case $t = T+1$, that is $\kappa(0, T)$, which is the expected correction. We will use the following equality for the expected delayed reward at sequence end:
$$\tilde q^{\pi}(s_T,a_T) = \mathbb{E}_{\tilde R_{T+1}}\big[\tilde R_{T+1} \mid s_T,a_T\big] = \tilde r_{T+1}(s_T,a_T), \tag{A138}$$
since $\tilde q^{\pi}(s_{T+1},a_{T+1}) = 0$. For $t = T+1$ we obtain
$$\mathbb{E}_{R_{T+2}}[R_{T+2} \mid s_T,a_T] = \mathbb{E}_{\tilde R_{T+1}}\big[\tilde R_{T+1} - \tilde q^{\pi}(s_T,a_T) \mid s_T,a_T\big] = \tilde r_{T+1}(s_T,a_T) - \tilde r_{T+1}(s_T,a_T) = 0. \tag{A139}$$
In the experiments we also use a uniform compensation, where each reward has the same contribution to the compensation:
$$R_1 = h_0 + \frac{1}{T+1}\left(\tilde R_{T+1} - \sum_{\tau=0}^{T} h\big(\Delta(s_{\tau-1},a_{\tau-1},s_\tau,a_\tau)\big)\right), \tag{A140}$$
$$R_{t+1} = h_t + \frac{1}{T+1}\left(\tilde R_{T+1} - \sum_{\tau=0}^{T} h\big(\Delta(s_{\tau-1},a_{\tau-1},s_\tau,a_\tau)\big)\right). \tag{A141}$$
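The two compensation schemes, the final-step correction $R_{T+2}$ and the uniform spreading of the prediction error, can be sketched as follows (function names are ours; inputs are the contributions $h_t$ and the realized delayed return):

```python
import numpy as np

def compensate_final(h, delayed_return):
    """Append R_{T+2} = return - sum(h) so the sequence return is preserved."""
    h = np.asarray(h, dtype=float)
    return np.append(h, delayed_return - h.sum())

def compensate_uniform(h, delayed_return):
    """Spread the prediction error uniformly over all T+1 rewards."""
    h = np.asarray(h, dtype=float)
    return h + (delayed_return - h.sum()) / len(h)
```

Both variants keep the sum of redistributed rewards equal to the realized return, so the resulting SDP stays strictly return-equivalent.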
Consequently, all variance of sampling the return is uniformly distributed across the sequence, as is the error of $g$.
An optimal reward redistribution implies
$$g\big(\Delta_{0:t}\big) = \sum_{\tau=0}^{t} h\big(\Delta(s_{\tau-1},a_{\tau-1},s_\tau,a_\tau)\big) = \tilde q^{\pi}(s_t,a_t), \tag{A142}$$
since the expected reward is
$$\mathbb{E}[R_{t+1} \mid s_{t-1},a_{t-1},s_t,a_t] = h\big(\Delta(s_{t-1},a_{t-1},s_t,a_t)\big) = \tilde q^{\pi}(s_t,a_t) - \tilde q^{\pi}(s_{t-1},a_{t-1}) \tag{A143}$$
according to Eq. (A65) in Theorem A4, and
$$h_0 = h\big(\Delta(s_{-1},a_{-1},s_0,a_0)\big) = g\big(\Delta_{0:0}\big) = \tilde q^{\pi}(s_0,a_0). \tag{A144}$$
A2.9 Remarks on Return Decomposition
A2.9.1 Return Decomposition for Binary Reward
A special case is a reward that indicates success or failure by giving a reward of 1 or 0, respectively. The return is equal to the final reward $R$, which is a Bernoulli variable. For each state $s$ or each state-action pair $(s,a)$, the expected return can be considered as a Bernoulli variable with success probability $p_R(s)$ or $p_R(s,a)$. The value function is $v^{\pi}(s) = \mathbb{E}(G \mid s) = p_R(s)$ and the action-value is $q^{\pi}(s,a) = \mathbb{E}(G \mid s,a) = p_R(s,a)$, which is in both cases the expectation of success. In this case, the optimal reward redistribution tracks the success probability:
$$R_1 = h_0 = h\big(\Delta(s_{-1},a_{-1},s_0,a_0)\big) = \tilde q^{\pi}(s_0,a_0) = p_R(s_0,a_0), \tag{A145}$$
$$R_{t+1} = h_t = h\big(\Delta(s_{t-1},a_{t-1},s_t,a_t)\big) = \tilde q^{\pi}(s_t,a_t) - \tilde q^{\pi}(s_{t-1},a_{t-1}) = p_R(s_t,a_t) - p_R(s_{t-1},a_{t-1}) \quad \text{for } 0 < t \leqslant T, \tag{A146}$$
$$R_{T+2} = \tilde R_{T+1} - \tilde r_{T+1} = R - p_R(s_T,a_T). \tag{A147}$$
The redistributed reward is the change in the success probability. A good action increases the success
probability and obtains a positive reward while a bad action reduces the success probability and
obtains a negative reward.
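For the binary-reward case, this redistribution is just the change in predicted success probability; a sketch (the predictor `p_hat` and all names are our assumptions):

```python
import numpy as np

def binary_redistribution(p_hat, outcome):
    """p_hat[t]: predicted success probability after (s_t, a_t);
    outcome: realized final reward (0 or 1).
    Returns T+2 redistributed rewards including the correction R_{T+2}."""
    p = np.asarray(p_hat, dtype=float)
    r = np.empty(len(p) + 1)
    r[0] = p[0]                  # R_1 = p_R(s_0, a_0)
    r[1:-1] = p[1:] - p[:-1]     # change in success probability
    r[-1] = outcome - p[-1]      # correction R_{T+2}
    return r
```

Actions that raise the predicted success probability receive positive reward, actions that lower it receive negative reward, and the terms sum to the realized outcome.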
A2.9.2 Optimal Reward Redistribution reduces the MDP to a Stochastic Contextual Bandit Problem
The new SDP \(\mathcal{P}\) has a redistributed reward with random variable \(R_t\) at time \(t\) distributed according to \(p(r\mid s_t,a_t)\). Theorem A5 states
\[ q(s_t,a_t) \;=\; r(s_t,a_t). \tag{A148} \]
This equation looks like a contextual bandit problem, where \(r(s_t,a_t)\) is an estimate of the mean reward for action \(a_t\) in state (context) \(s_t\). Contextual bandits [72, p. 208] are characterized by a conditionally \(\sigma\)-subgaussian noise (Def. 5.1 [72, p. 68]). We define the zero-mean noise variable by
\[ \epsilon_t \;=\; \rho(s_t,a_t) \;=\; R_t \;-\; r(s_t,a_t), \tag{A149} \]
where we assume that \(\epsilon_t\) is a conditionally \(\sigma\)-subgaussian noise variable. Therefore, \(\epsilon\) is distributed according to \(p(r - r(s_t,a_t)\mid s_t,a_t)\) and fulfills
\[ \mathrm{E}\left[\rho(s_t,a_t)\right] \;=\; 0, \tag{A150} \]
\[ \mathrm{E}\left[\exp\big(\lambda\,\rho(s_t,a_t)\big)\right] \;\leq\; \exp\big(\lambda^2\sigma^2/2\big). \tag{A151} \]
Subgaussian random variables have tails that decay almost as fast as a Gaussian. If the reward \(r\) is bounded by \(|r| < B\), then \(\epsilon\) is bounded by \(|\epsilon| < B\) and is therefore \(B\)-subgaussian. For binary rewards it is of interest that a Bernoulli variable is 0.5-subgaussian [72, p. 71]. In summary, an optimal reward redistribution reduces the MDP to a stochastic contextual bandit problem.
A2.9.3 Relation to “Backpropagation through a Model”
Here we discuss how reward redistribution, if applied to policy gradients, relates to “backpropagation through a model”. For a delayed reward that is only received at the end of an episode, we decompose the return \(\tilde{r}_{T+1}\) into
\[ g(0{:}T) \;=\; \tilde{r}_{T+1} \;=\; \sum_{t=0}^{T} h\big((s_{t-1},a_{t-1},s_t,a_t)\big). \tag{A152} \]
The policy gradient for an optimal reward redistribution is
\[ \mathrm{E}\big[\nabla_\theta \log \pi(a_t\mid s_t;\theta)\; h\big((s_{t-1},a_{t-1},s_t,a_t)\big)\big]. \tag{A153} \]
Summing up the gradient for one episode, the gradient becomes
\[ \mathrm{E}\left[\sum_{t=0}^{T} \nabla_\theta \log \pi(a_t\mid s_t;\theta)\; h\big((s_{t-1},a_{t-1},s_t,a_t)\big)\right] \;=\; \mathrm{E}\big[J_\theta\big(\log \pi(a\mid s;\theta)\big)\; h\big((s',a',s,a)\big)\big], \tag{A154} \]
where \(a' = (a_{-1},a_0,a_1,\ldots,a_{T-1})\) and \(a = (a_0,a_1,\ldots,a_T)\) are the sequences of actions, \(s' = (s_{-1},s_0,s_1,\ldots,s_{T-1})\) and \(s = (s_0,s_1,\ldots,s_T)\) are the sequences of states, \(J_\theta(\log \pi)\) is the Jacobian of the vector of log-probabilities along the sequence with respect to the parameter vector \(\theta\), and \(h((s',a',s,a))\) is the vector with entries \(h((s_{t-1},a_{t-1},s_t,a_t))\).
An alternative approach via sensitivity analysis is “backpropagation through a model”, where \(g(0{:}T)\), that is, the return, is maximized. Continuous actions are directly fed into \(g\), while probabilistic actions are sampled before entering \(g\). Analogously to gradients used for Restricted Boltzmann Machines, for probabilistic actions the log-likelihood of the actions is used to construct a gradient. The likelihood can also be formulated as the cross-entropy between the sampled actions and the action probability. The gradient for “backpropagation through a model” is
\[ \mathrm{E}\big[J_\theta\big(\log \pi(a\mid s;\theta)\big)\; \nabla_a\, g(0{:}T)\big], \tag{A155} \]
where \(\nabla_a\, g(0{:}T)\) is the gradient of \(g\) with respect to the action sequence \(a\).
If, for “backpropagation through a model”, the model gradient with respect to the actions is replaced by the vector of contributions of the actions in the model, then we obtain redistribution applied to policy gradients.
A3 Bias-Variance Analysis of MDP Q-Value Estimators
Bias-variance investigations have been done for Q-learning. Grünewälder & Obermayer [41] investigated the bias of temporal difference learning (TD), Monte Carlo estimators (MC), and least-squares temporal difference learning (LSTD). Mannor et al. [77] and O’Donoghue et al. [88] derived bias and variance expressions for updating Q-values.
The true, but unknown, action-value function \(q^\pi\) is the expected future return. We assume to have the data \(\mathcal{D}\), a set of state-action sequences with return, that is, a set of episodes with return. Using the data \(\mathcal{D}\), \(q^\pi\) is estimated by \(\hat{q} = \hat{q}(\mathcal{D})\), which is an estimate with bias and variance. For bias and variance we have to compute the expectation \(\mathrm{E}_{\mathcal{D}}[\cdot]\) over the data \(\mathcal{D}\). The mean squared error (MSE) of an estimator \(\hat{q}(s,a)\) is
\[ \mathrm{mse}\;\hat{q}(s,a) \;=\; \mathrm{E}_{\mathcal{D}}\Big[\big(\hat{q}(s,a) - q^\pi(s,a)\big)^2\Big]. \tag{A156} \]
The bias of an estimator \(\hat{q}(s,a)\) is
\[ \mathrm{bias}\;\hat{q}(s,a) \;=\; \mathrm{E}_{\mathcal{D}}\left[\hat{q}(s,a)\right] \;-\; q^\pi(s,a). \tag{A157} \]
The variance of an estimator \(\hat{q}(s,a)\) is
\[ \mathrm{var}\;\hat{q}(s,a) \;=\; \mathrm{E}_{\mathcal{D}}\Big[\big(\hat{q}(s,a) - \mathrm{E}_{\mathcal{D}}[\hat{q}(s,a)]\big)^2\Big]. \tag{A158} \]
The bias-variance decomposition of the MSE of an estimator \(\hat{q}(s,a)\) is
\[ \mathrm{mse}\;\hat{q}(s,a) \;=\; \mathrm{var}\;\hat{q}(s,a) \;+\; \big(\mathrm{bias}\;\hat{q}(s,a)\big)^2. \tag{A159} \]
The bias-variance decomposition of the MSE of an estimator \(\hat{q}\) as a vector is
\[ \mathrm{mse}\;\hat{q} \;=\; \mathrm{E}_{\mathcal{D}}\Big[\sum_{s,a}\big(\hat{q}(s,a) - q^\pi(s,a)\big)^2\Big] \;=\; \mathrm{E}_{\mathcal{D}}\big[\lVert \hat{q} - q^\pi\rVert^2\big], \tag{A160} \]
\[ \mathrm{bias}\;\hat{q} \;=\; \mathrm{E}_{\mathcal{D}}[\hat{q}] \;-\; q^\pi, \tag{A161} \]
\[ \mathrm{var}\;\hat{q} \;=\; \mathrm{E}_{\mathcal{D}}\Big[\sum_{s,a}\big(\hat{q}(s,a) - \mathrm{E}_{\mathcal{D}}[\hat{q}(s,a)]\big)^2\Big] \;=\; \mathrm{Tr}\;\mathrm{Var}_{\mathcal{D}}[\hat{q}], \tag{A162} \]
\[ \mathrm{mse}\;\hat{q} \;=\; \mathrm{var}\;\hat{q} \;+\; \big(\mathrm{bias}\;\hat{q}\big)^{T}\,\mathrm{bias}\;\hat{q}. \tag{A163} \]
A3.1 Bias-Variance for MC and TD Estimates of the Expected Return
Monte Carlo (MC) computes the arithmetic mean \(\hat{q}(s,a)\) of \(G_t\) for \((s_t = s, a_t = a)\) over the episodes given by the data.
For temporal difference (TD) methods, like SARSA, with learning rate \(\alpha\) the updated estimate of \(q^\pi(s_t,a_t)\) is:
\[ \hat{q}^{\mathrm{new}}(s_t,a_t) \;=\; \hat{q}(s_t,a_t) \;-\; \alpha\big(\hat{q}(s_t,a_t) - R_{t+1} - \gamma\,\hat{q}(s_{t+1},a_{t+1})\big) \;=\; (1-\alpha)\,\hat{q}(s_t,a_t) \;+\; \alpha\big(R_{t+1} + \gamma\,\hat{q}(s_{t+1},a_{t+1})\big). \tag{A164} \]
Similar updates are used for expected SARSA and Q-learning, where only \(a_{t+1}\) is chosen differently. Therefore, for the estimation of \(\hat{q}(s_t,a_t)\), SARSA and Q-learning perform an exponentially weighted arithmetic mean of \(R_{t+1} + \gamma\,\hat{q}(s_{t+1},a_{t+1})\). If \(\hat{q}(s_{t+1},a_{t+1})\) is fixed on some data for the updates, then SARSA and Q-learning perform an exponentially weighted arithmetic mean of the immediate reward \(R_{t+1}\) plus an average over which \(\hat{q}(s_{t+1},a_{t+1})\) (that is, which \((s_{t+1},a_{t+1})\)) is chosen. In summary, TD methods like SARSA and Q-learning are biased via \(\hat{q}(s_{t+1},a_{t+1})\) and perform an exponentially weighted arithmetic mean of the immediate reward \(R_{t+1}\) and the next (fixed) \(\hat{q}(s_{t+1},a_{t+1})\).
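The exponentially weighted averaging performed by the TD update can be checked numerically. The sketch below uses toy values and a hypothetical fixed next-state estimate; it is only an illustration of Eq. (A164) and of the closed form of Eq. (A167), not the paper's code.

```python
# Sketch: with q_hat(s_{t+1}, a_{t+1}) held fixed, the TD update of
# Eq. (A164) is an exponentially weighted mean of the targets
# R_{t+1} + gamma * q_next.

alpha, gamma = 0.5, 1.0
q_next = 2.0                       # fixed estimate q_hat(s_{t+1}, a_{t+1})
rewards = [1.0, 0.0, 1.0, 1.0]     # hypothetical samples of R_{t+1}

q = 0.0                            # initial value mu_0 = 0
for r in rewards:
    target = r + gamma * q_next
    q = (1 - alpha) * q + alpha * target   # Eq. (A164)

# Closed form of Eq. (A167) with mu_0 = 0:
n = len(rewards)
targets = [r + gamma * q_next for r in rewards]
q_closed = sum(alpha * (1 - alpha) ** (n - 1 - i) * t
               for i, t in enumerate(targets))
assert abs(q - q_closed) < 1e-12
```

The iterative update and the closed-form weighted sum agree exactly, which is the identity used in the bias and variance computations below.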
Bias-Variance for Estimators of the Mean. Both Monte Carlo and TD methods, like SARSA and Q-learning, estimate \(q^\pi(s,a) = \mathrm{E}[G_t\mid s,a]\), the expected future return. The expectations are estimated either by an arithmetic mean over samples (Monte Carlo) or by an exponentially weighted arithmetic mean over samples (TD methods). Therefore, we are interested in the bias and variance of these two estimators of the expectation.
We assume \(n\) samples for a state-action pair \((s,a)\); note, however, that the expected number of samples depends on the probabilistic number of visits of \((s,a)\) per episode.
Arithmetic mean. For \(n\) samples \(\{X_1,\ldots,X_n\}\) from a distribution with mean \(\mu\) and variance \(\sigma^2\), the arithmetic mean, its bias, and its variance are:
\[ \hat{\mu}_n \;=\; \frac{1}{n} \sum_{i=1}^{n} X_i, \qquad \mathrm{bias}(\hat{\mu}_n) \;=\; 0, \qquad \mathrm{var}(\hat{\mu}_n) \;=\; \frac{\sigma^2}{n}. \tag{A165} \]
The estimation variance of the arithmetic mean is determined by \(\sigma^2\), the variance of the distribution the samples are drawn from.
Exponentially weighted arithmetic mean. For \(n\) samples \(\{X_1,\ldots,X_n\}\) from a distribution with mean \(\mu\) and variance \(\sigma^2\), the exponentially weighted mean with initial value \(\mu_0\) is
\[ \hat{\mu}_0 \;=\; \mu_0, \qquad \hat{\mu}_k \;=\; (1-\alpha)\,\hat{\mu}_{k-1} \;+\; \alpha\, X_k, \tag{A166} \]
which gives
\[ \hat{\mu}_n \;=\; \sum_{i=1}^{n} \alpha\,(1-\alpha)^{n-i}\, X_i \;+\; (1-\alpha)^n\, \mu_0. \tag{A167} \]
This is a weighted arithmetic mean with exponentially decreasing weights, since the coefficients sum up to one:
\[ \sum_{i=1}^{n} \alpha\,(1-\alpha)^{n-i} \;+\; (1-\alpha)^n \;=\; \alpha\,\frac{1-(1-\alpha)^n}{1-(1-\alpha)} \;+\; (1-\alpha)^n \;=\; 1 - (1-\alpha)^n + (1-\alpha)^n \;=\; 1. \tag{A168} \]
The estimator \(\hat{\mu}_n\) is biased, since:
\[ \mathrm{bias}(\hat{\mu}_n) \;=\; \mathrm{E}[\hat{\mu}_n] \;-\; \mu \;=\; \sum_{i=1}^{n} \alpha\,(1-\alpha)^{n-i}\,\mathrm{E}[X_i] \;+\; (1-\alpha)^n \mu_0 \;-\; \mu \tag{A169} \]
\[ =\; \alpha\,\mu \sum_{i=0}^{n-1} (1-\alpha)^{i} \;+\; (1-\alpha)^n \mu_0 \;-\; \mu \;=\; \mu\big(1-(1-\alpha)^n\big) \;+\; (1-\alpha)^n \mu_0 \;-\; \mu \;=\; (1-\alpha)^n\,(\mu_0 - \mu). \]
Asymptotically (\(n \to \infty\)) the estimate is unbiased. The variance is
\[ \mathrm{var}(\hat{\mu}_n) \;=\; \mathrm{E}\big[\hat{\mu}_n^2\big] \;-\; \mathrm{E}^2[\hat{\mu}_n]. \tag{A170} \]
Expanding \(\mathrm{E}[\hat{\mu}_n^2]\) with Eq. (A167) and using the independence of the samples, all cross terms with \(i \neq j\) and all terms involving \(\mu_0\) cancel against \(\mathrm{E}^2[\hat{\mu}_n]\); only the diagonal terms \(\mathrm{E}[X_i^2] - \mu^2 = \sigma^2\) remain:
\[ \mathrm{var}(\hat{\mu}_n) \;=\; \alpha^2 \sigma^2 \sum_{i=0}^{n-1} (1-\alpha)^{2i} \;=\; \alpha^2 \sigma^2\, \frac{1 - (1-\alpha)^{2n}}{1 - (1-\alpha)^2} \;=\; \frac{\alpha\, \sigma^2\, \big(1 - (1-\alpha)^{2n}\big)}{2 - \alpha}. \]
Also the estimation variance of the exponentially weighted arithmetic mean is proportional to \(\sigma^2\), which is the variance of the distribution the samples are drawn from.
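The bias and variance formulas (A169) and (A170) can be verified with a small Monte-Carlo simulation. The parameters below are arbitrary toy values, not from the paper:

```python
# Monte-Carlo check (synthetic data) of the bias formula (A169) and the
# variance formula (A170) for the exponentially weighted mean.
import random

random.seed(0)
alpha, mu0, mu, sigma2, n = 0.2, 0.0, 1.0, 4.0, 50
runs = 20000

estimates = []
for _ in range(runs):
    m = mu0
    for _ in range(n):
        x = random.gauss(mu, sigma2 ** 0.5)
        m = (1 - alpha) * m + alpha * x     # Eq. (A166)
    estimates.append(m)

mean_est = sum(estimates) / runs
var_est = sum((e - mean_est) ** 2 for e in estimates) / runs

bias_theory = (1 - alpha) ** n * (mu0 - mu)                               # (A169)
var_theory = alpha * sigma2 * (1 - (1 - alpha) ** (2 * n)) / (2 - alpha)  # (A170)

assert abs((mean_est - mu) - bias_theory) < 0.05
assert abs(var_est - var_theory) < 0.1
```

With these values, the empirical variance of the estimator concentrates around \(\alpha\sigma^2/(2-\alpha) \approx 0.444\), as predicted.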
The deviation of a random variable \(X\) from its mean can be analyzed with Chebyshev’s inequality. Chebyshev’s inequality [15, 131] states that for a random variable \(X\) with expected value \(\mu\) and variance \(\tilde{\sigma}^2\) and for any real number \(k > 0\):
\[ \Pr\left[|X - \mu| \geq k\,\tilde{\sigma}\right] \;\leq\; \frac{1}{k^2} \tag{A171} \]
or, equivalently,
\[ \Pr\left[|X - \mu| \geq \epsilon\right] \;\leq\; \frac{\tilde{\sigma}^2}{\epsilon^2}. \tag{A172} \]
For \(n\) samples \(\{X_1,\ldots,X_n\}\) from a distribution with expectation \(\mu\) and variance \(\sigma^2\), we compute the arithmetic mean \(\frac{1}{n}\sum_{i=1}^{n} X_i\). If \(X\) is the arithmetic mean, then \(\tilde{\sigma}^2 = \sigma^2/n\) and we obtain
\[ \Pr\left[\Big|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\Big| \geq \epsilon\right] \;\leq\; \frac{\sigma^2}{n\,\epsilon^2}. \tag{A173} \]
Following Grünewälder and Obermayer [41], Bernstein’s inequality can be used to describe the deviation of the arithmetic mean (an unbiased estimator of \(\mu\)) from the expectation \(\mu\) (see Theorem 6 of Gábor Lugosi’s lecture notes [75]):
\[ \Pr\left[\Big|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\Big| \geq \epsilon\right] \;\leq\; 2\,\exp\left(-\,\frac{n\,\epsilon^2}{2\sigma^2 + \frac{2\epsilon M}{3}}\right), \tag{A174} \]
where \(|X - \mu| < M\).
A3.2 Mean and Variance of an MDP Sample of the Return
Since the variance of the estimators of the expectations (arithmetic mean and exponentially weighted arithmetic mean) is governed by the variance of the samples, we compute mean and variance of the return estimate of \(q^\pi(s,a)\). We follow [121, 129, 130] for deriving the mean and variance.
We consider an MDP with finite horizon \(T\), that is, each episode has length \(T\). The finite-horizon MDP can be generalized to an MDP with an absorbing (terminal) state \(s = E\). We only consider proper policies, that is, there exists an integer \(n\) such that from any initial state the probability of reaching the terminal state \(E\) after \(n\) steps is strictly positive. \(T\) is the time of the first visit of the terminal state: \(T = \min\{k \mid s_k = E\}\). The return \(G_0\) is:
\[ G_0 \;=\; \sum_{k=0}^{T} \gamma^k R_{k+1}. \tag{A175} \]
The action-value function, the Q-function, is the expected return of
\[ G_t \;=\; \sum_{k=0}^{T-t} \gamma^k R_{t+k+1} \tag{A176} \]
if starting in state \(s_t = s\) with action \(a_t = a\):
\[ q^\pi(s,a) \;=\; \mathrm{E}_\pi\left[G_t \mid s,a\right]. \tag{A177} \]
The second moment of the return is:
\[ M(s,a) \;=\; \mathrm{E}_\pi\left[G_t^2 \mid s,a\right]. \tag{A178} \]
The variance of the return is:
\[ V(s,a) \;=\; \mathrm{Var}_\pi\left[G_t \mid s,a\right] \;=\; M(s,a) \;-\; \big(q^\pi(s,a)\big)^2. \tag{A179} \]
Using \(\mathrm{E}_{s',a'}\big(f(s',a')\big) = \sum_{s'} p(s'\mid s,a) \sum_{a'} \pi(a'\mid s')\, f(s',a')\), and analogously \(\mathrm{Var}_{s',a'}\) and \(\mathrm{Var}_r\), the next Theorem A7 gives mean \(q^\pi\) and variance \(V(s,a) = \mathrm{Var}[G_t\mid s,a]\) of sampling returns from an MDP.
Theorem A7. The mean \(q^\pi\) and variance \(V\) of sampled returns from an MDP are
\[ q^\pi(s,a) \;=\; \sum_{s',r} p(s',r\mid s,a)\left(r \;+\; \gamma \sum_{a'} \pi(a'\mid s')\, q^\pi(s',a')\right) \;=\; r(s,a) \;+\; \gamma\, \mathrm{E}_{s',a'}\left[q^\pi(s',a')\mid s,a\right], \]
\[ V(s,a) \;=\; \mathrm{Var}_r\left[r\mid s,a\right] \;+\; \gamma^2\Big(\mathrm{E}_{s',a'}\left[V(s',a')\mid s,a\right] \;+\; \mathrm{Var}_{s',a'}\left[q^\pi(s',a')\mid s,a\right]\Big). \tag{A180} \]
Proof. The Bellman equation for Q-values is
\[ q^\pi(s,a) \;=\; \sum_{s',r} p(s',r\mid s,a)\left(r \;+\; \gamma \sum_{a'} \pi(a'\mid s')\, q^\pi(s',a')\right) \;=\; r(s,a) \;+\; \gamma\, \mathrm{E}_{s',a'}\left[q^\pi(s',a')\mid s,a\right]. \tag{A181} \]
This equation gives the mean when drawing one sample. We use
\[ r(s,a) \;=\; \sum_{r} r\, p(r\mid s,a), \tag{A182} \]
\[ r_2(s,a) \;=\; \sum_{r} r^2\, p(r\mid s,a). \tag{A183} \]
For the second moment, we obtain [129]:
\[ M(s,a) \;=\; \mathrm{E}_\pi\left[G_t^2\mid s,a\right] \;=\; \mathrm{E}_\pi\left[\Big(\sum_{k=0}^{T-t}\gamma^k R_{t+k+1}\Big)^2 \;\Big|\; s,a\right] \;=\; \mathrm{E}_\pi\left[\Big(R_{t+1} + \sum_{k=1}^{T-t}\gamma^k R_{t+k+1}\Big)^2 \;\Big|\; s,a\right] \tag{A184} \]
\[ =\; r_2(s,a) \;+\; 2\, r(s,a)\, \mathrm{E}_\pi\left[\sum_{k=1}^{T-t}\gamma^k R_{t+k+1} \;\Big|\; s,a\right] \;+\; \mathrm{E}_\pi\left[\Big(\sum_{k=1}^{T-t}\gamma^k R_{t+k+1}\Big)^2 \;\Big|\; s,a\right] \]
\[ =\; r_2(s,a) \;+\; 2\gamma\, r(s,a) \sum_{s'} p(s'\mid s,a) \sum_{a'} \pi(a'\mid s')\, q^\pi(s',a') \;+\; \gamma^2 \sum_{s'} p(s'\mid s,a) \sum_{a'} \pi(a'\mid s')\, M(s',a') \]
\[ =\; r_2(s,a) \;+\; 2\gamma\, r(s,a)\, \mathrm{E}_{s',a'}\left[q^\pi(s',a')\mid s,a\right] \;+\; \gamma^2\, \mathrm{E}_{s',a'}\left[M(s',a')\mid s,a\right]. \]
For the variance, we obtain:
\[ V(s,a) \;=\; M(s,a) \;-\; \big(q^\pi(s,a)\big)^2 \tag{A185} \]
\[ =\; r_2(s,a) - \big(r(s,a)\big)^2 \;+\; \gamma^2\, \mathrm{E}_{s',a'}\left[M(s',a')\mid s,a\right] \;-\; \gamma^2\, \mathrm{E}^2_{s',a'}\left[q^\pi(s',a')\mid s,a\right] \]
\[ =\; \mathrm{Var}_r\left[r\mid s,a\right] \;+\; \gamma^2\Big(\mathrm{E}_{s',a'}\big[M(s',a') - \big(q^\pi(s',a')\big)^2 \mid s,a\big] \;-\; \mathrm{E}^2_{s',a'}\left[q^\pi(s',a')\mid s,a\right] \;+\; \mathrm{E}_{s',a'}\big[\big(q^\pi(s',a')\big)^2\mid s,a\big]\Big) \]
\[ =\; \mathrm{Var}_r\left[r\mid s,a\right] \;+\; \gamma^2\Big(\mathrm{E}_{s',a'}\left[V(s',a')\mid s,a\right] \;+\; \mathrm{Var}_{s',a'}\left[q^\pi(s',a')\mid s,a\right]\Big). \]
For deterministic reward, that is, \(\mathrm{Var}_r[r\mid s,a] = 0\), the corresponding result is given as Equation (4) in Sobel 1982 [121] and as Proposition 3.1 (c) in Tamar et al. 2012 [129].
For temporal difference (TD) learning, the next Q-values are fixed to \(\hat{q}(s',a')\) when drawing a sample. Therefore, TD is biased; that is, both SARSA and Q-learning are biased. During learning with the corresponding updates of the Q-values, \(\hat{q}(s',a')\) approaches \(q^\pi(s',a')\) and the bias is reduced. However, this reduction of the bias is exponentially small in the number of time steps between reward and updated Q-values, as we will see later. The reduction of the bias is exponentially small for eligibility traces, too.
The variance recursion Eq. (A180) of sampled returns consists of three parts:
• (1) the immediate variance \(\mathrm{Var}_r[r\mid s,a]\) of the immediate reward, stemming from the probabilistic reward \(p(r\mid s,a)\),
• (2) the local variance \(\gamma^2\,\mathrm{Var}_{s',a'}[q^\pi(s',a')\mid s,a]\) from state transitions \(p(s'\mid s,a)\) and new actions \(\pi(a'\mid s')\),
• (3) the expected variance \(\gamma^2\,\mathrm{E}_{s',a'}[V(s',a')\mid s,a]\) of the next Q-values.
For different settings the following parts may be zero:
• (1) the immediate variance \(\mathrm{Var}_r[r\mid s,a]\) is zero for deterministic immediate reward,
• (2) the local variance \(\gamma^2\,\mathrm{Var}_{s',a'}[q^\pi(s',a')\mid s,a]\) is zero for (i) deterministic state transitions and a deterministic policy and for (ii) \(\gamma = 0\) (only immediate reward),
• (3) the expected variance \(\gamma^2\,\mathrm{E}_{s',a'}[V(s',a')\mid s,a]\) of the next Q-values is zero for (i) temporal difference (TD) learning, since the next Q-values are fixed and set to their current estimates (if just one sample is drawn), and for (ii) \(\gamma = 0\) (only immediate reward).
The local variance \(\mathrm{Var}_{s',a'}[q^\pi(s',a')\mid s,a]\) is the variance of a linear combination of Q-values weighted by the multinomial distribution \(\sum_{s'} p(s'\mid s,a) \sum_{a'} \pi(a'\mid s')\). The local variance is
\[ \mathrm{Var}_{s',a'}\left[q^\pi(s',a')\mid s,a\right] \;=\; \sum_{s'} p(s'\mid s,a) \sum_{a'} \pi(a'\mid s')\,\big(q^\pi(s',a')\big)^2 \;-\; \left(\sum_{s'} p(s'\mid s,a) \sum_{a'} \pi(a'\mid s')\, q^\pi(s',a')\right)^2. \tag{A186} \]
This result is Equation (6) in Sobel 1982 [121]. Sobel derived these formulas also for finite horizons, together with an analogous formula for the case that the reward also depends on the next state, that is, for \(p(r\mid s,a,s')\).
Monte Carlo uses the accumulated future rewards for updates; therefore, its variance is given by the recursion in Eq. (A180). TD, however, fixes \(q^\pi(s',a')\) to the current estimates \(\hat{q}(s',a')\), which do not change in the current episode. Therefore, TD has \(\mathrm{E}_{s',a'}[V(s',a')\mid s,a] = 0\), and only the local variance \(\mathrm{Var}_{s',a'}[q^\pi(s',a')\mid s,a]\) is present. For \(n\)-step TD, the recursion in Eq. (A180) must be applied \((n-1)\) times. Then the expected next variances are zero, since the future reward is estimated by \(\hat{q}(s',a')\).
Delayed rewards. For TD and delayed rewards, information on new data is only captured by the last step of an episode that receives a reward. This reward is used to update the estimate of the Q-value of the last state, \(\hat{q}(s_T,a_T)\). Subsequently, the reward information is propagated one step back via the estimates \(\hat{q}\) for each sample. The drawn samples (state-action sequences) determine where information is propagated back. Therefore, delayed reward introduces a large bias for TD over a long period of time, since the estimates \(\hat{q}(s,a)\) need a long time to reach their true Q-values.
For Monte Carlo and delayed rewards, the immediate variance \(\mathrm{Var}_r[r\mid s,a] = 0\) except for the last step of the episode. The delayed reward increases the variance of the Q-values according to Eq. (A180).
Sample Distribution Used by Temporal Difference and Monte Carlo. Monte Carlo (MC) sampling uses the true mean and true variance, where the true mean is
\[ q^\pi(s,a) \;=\; r(s,a) \;+\; \gamma\, \mathrm{E}_{s',a'}\left[q^\pi(s',a')\mid s,a\right] \tag{A187} \]
and the true variance is
\[ V(s,a) \;=\; \mathrm{Var}_r\left[r\mid s,a\right] \;+\; \gamma^2\Big(\mathrm{E}_{s',a'}\left[V(s',a')\mid s,a\right] + \mathrm{Var}_{s',a'}\left[q^\pi(s',a')\mid s,a\right]\Big). \tag{A188} \]
Temporal difference (TD) methods replace \(q^\pi(s',a')\) by \(\hat{q}(s',a')\), which does not depend on the drawn sample. The mean used by temporal difference is
\[ q(s,a) \;=\; r(s,a) \;+\; \gamma\, \mathrm{E}_{s',a'}\left[\hat{q}(s',a')\mid s,a\right]. \tag{A189} \]
This mean is biased by
\[ \gamma\,\big(\mathrm{E}_{s',a'}\left[\hat{q}(s',a')\mid s,a\right] \;-\; \mathrm{E}_{s',a'}\left[q^\pi(s',a')\mid s,a\right]\big). \tag{A190} \]
The variance used by temporal difference is
\[ V(s,a) \;=\; \mathrm{Var}_r\left[r\mid s,a\right] \;+\; \gamma^2\,\mathrm{Var}_{s',a'}\left[\hat{q}(s',a')\mid s,a\right], \tag{A191} \]
since \(V(s',a') = 0\) if \(\hat{q}(s',a')\) is used instead of the future reward of the sample. The variance of TD is smaller than for MC, since variances are not propagated back.
A3.3 TD corrects Bias exponentially slowly with Respect to Reward Delay
Temporal Difference. We show that TD updates for delayed rewards are exponentially small, fading exponentially with the number of delay steps. Q-learning with learning rates \(\alpha = 1/i\) at the \(i\)th update leads to an arithmetic mean as estimate, which was shown to be exponentially slow [9]. If for a fixed learning rate \(\alpha\) the agent always travels along the same sequence of states, then TD is superquadratic [9]. We, however, consider the general case where the agent travels along random sequences due to a random environment or due to exploration. For a fixed learning rate, the information of the delayed reward has to be propagated back either through the Bellman error or via eligibility traces. We first consider backpropagation of reward information via the Bellman error. For each episode the reward information is propagated back one step at visited state-action pairs via the TD update rule. We denote the Q-values of episode \(i\) as \(q_i\) and assume that the state-action pairs \((s_t,a_t)\) are the most visited ones. We consider the update of \(q_i(s_t,a_t)\) of a state-action pair \((s_t,a_t)\) that is visited at time \(t\) in the \(i\)th episode:
\[ q_{i+1}(s_t,a_t) \;=\; q_i(s_t,a_t) \;+\; \alpha\,\delta_t, \tag{A192} \]
\[ \delta_t \;=\; r_{t+1} \;+\; \gamma \max_{a'} q_i(s_{t+1},a') \;-\; q_i(s_t,a_t) \quad\text{(Q-learning)}, \tag{A193} \]
\[ \delta_t \;=\; r_{t+1} \;+\; \gamma \sum_{a'} \pi(a'\mid s_{t+1})\, q_i(s_{t+1},a') \;-\; q_i(s_t,a_t) \quad\text{(expected SARSA)}. \tag{A194} \]
Temporal Difference with Eligibility Traces. Eligibility traces have been introduced to propagate back the reward information of an episode and are now standard for TD(\(\lambda\)) [119]. However, eligibility traces decay exponentially as they are propagated back. The accumulating trace is defined as [119]:
\[ e_{t+1}(s,a) \;=\; \begin{cases} \gamma\lambda\, e_t(s,a) & \text{for } s \neq s_t \text{ or } a \neq a_t, \\ \gamma\lambda\, e_t(s,a) + 1 & \text{for } s = s_t \text{ and } a = a_t, \end{cases} \tag{A195} \]
while the replacing trace is defined as [119]:
\[ e_{t+1}(s,a) \;=\; \begin{cases} \gamma\lambda\, e_t(s,a) & \text{for } s \neq s_t \text{ or } a \neq a_t, \\ 1 & \text{for } s = s_t \text{ and } a = a_t. \end{cases} \tag{A196} \]
With eligibility traces using \(\lambda \in [0,1]\), the \(\lambda\)-return \(G_t^\lambda\) is [128]
\[ G_t^\lambda \;=\; (1-\lambda) \sum_{n=1}^{\infty} \lambda^{n-1}\, G_t^{(n)}, \tag{A197} \]
\[ G_t^{(n)} \;=\; r_{t+1} \;+\; \gamma\, r_{t+2} \;+\; \ldots \;+\; \gamma^{n-1}\, r_{t+n} \;+\; \gamma^{n}\, V(s_{t+n}). \tag{A198} \]
We obtain
\[ G_t^\lambda \;=\; (1-\lambda) \sum_{n=1}^{\infty} \lambda^{n-1}\, G_t^{(n)} \tag{A199} \]
\[ =\; (1-\lambda)\left(r_{t+1} \;+\; \gamma\, V(s_{t+1}) \;+\; \sum_{n=2}^{\infty} \lambda^{n-1}\, G_t^{(n)}\right) \]
\[ =\; (1-\lambda)\left(r_{t+1} \;+\; \gamma\, V(s_{t+1}) \;+\; \sum_{n=1}^{\infty} \lambda^{n}\, G_t^{(n+1)}\right) \]
\[ =\; (1-\lambda)\left(r_{t+1} \;+\; \gamma\, V(s_{t+1}) \;+\; \gamma\lambda \sum_{n=1}^{\infty} \lambda^{n-1}\, G_{t+1}^{(n)} \;+\; \sum_{n=1}^{\infty} \lambda^{n}\, r_{t+1}\right) \]
\[ =\; (1-\lambda) \sum_{n=0}^{\infty} \lambda^{n}\, r_{t+1} \;+\; (1-\lambda)\,\gamma\, V(s_{t+1}) \;+\; \gamma\lambda\,(1-\lambda) \sum_{n=1}^{\infty} \lambda^{n-1}\, G_{t+1}^{(n)} \]
\[ =\; r_{t+1} \;+\; (1-\lambda)\,\gamma\, V(s_{t+1}) \;+\; \gamma\lambda\, G_{t+1}^\lambda. \]
We use the naive Q(\(\lambda\)), where eligibility traces are not set to zero. In contrast, Watkins’ Q(\(\lambda\)) [140] zeros out eligibility traces after non-greedy actions, that is, if not the \(\max_a\) action is chosen. Therefore, the decay is even stronger for Watkins’ Q(\(\lambda\)). Another eligibility trace method is Peng’s Q(\(\lambda\)) [90], which also does not zero out eligibility traces.
The next Theorem A8 states that the decay of TD is exponential for Q-value updates in an MDP with delayed reward, even with eligibility traces. Thus, for delayed rewards TD requires exponentially many updates to correct the bias, where the number of updates is exponential in the number of delay steps.
Theorem A8. For the initialization \(q_0(s_t,a_t) = 0\) and a delayed reward with \(r_t = 0\) for \(t \leq T\), \(q(s_{T-i},a_{T-i})\) receives its first update not earlier than at episode \(i\) via \(q_i(s_{T-i},a_{T-i}) = \alpha^{i+1}\, r^1_{T+1}\), where \(r^1_{T+1}\) is the reward of episode 1. Eligibility traces with \(\lambda \in [0,1)\) lead to an exponential decay of \((\gamma\lambda)^k\) when the reward is propagated \(k\) steps back.
Proof. If we assume that the Q-values are initialized with zero, then \(q_0(s_t,a_t) = 0\) for all \((s_t,a_t)\). For delayed rewards we have \(r_t = 0\) for \(t \leq T\). The Q-value \(q(s_{T-i},a_{T-i})\) at time \(T-i\) can receive an update for the first time at episode \(i\). Since all Q-values have been initialized with zero, the update is
\[ q_i(s_{T-i},a_{T-i}) \;=\; \alpha^{i+1}\, r^1_{T+1}, \tag{A200} \]
where \(r^1_{T+1}\) is the reward at time \(T+1\) in episode 1.
We move on to eligibility traces, where the update for a state-action pair \((s,a)\) is
\[ q_{t+1}(s,a) \;=\; q_t(s,a) \;+\; \alpha\,\delta_t\, e_t(s,a), \tag{A201} \]
\[ \delta_t \;=\; r_{t+1} \;+\; \gamma \max_{a'} q_t(s_{t+1},a') \;-\; q_t(s_t,a_t). \tag{A202} \]
If states are not revisited, the eligibility trace at time \(t+k\) for a visit of state \(s_t\) at time \(t\) is:
\[ e_{t+k}(s_t,a_t) \;=\; (\gamma\lambda)^k. \tag{A203} \]
If all \(\delta_{t+i}\) are zero except for \(\delta_{t+k}\), then the update of \(q(s,a)\) is
\[ q_{t+k+1}(s,a) \;=\; q_{t+k}(s,a) \;+\; \alpha\,\delta_{t+k}\, e_{t+k}(s,a) \;=\; q_{t+k}(s,a) \;+\; \alpha\,(\gamma\lambda)^k\, \delta_{t+k}. \tag{A204} \]
A learning rate of \(\alpha = 1\) does not work, since it would imply forgetting all previously learned estimates, so that no averaging over episodes would take place. Since \(\gamma\lambda < 1\), we observe exponential decay backwards in time for online updates.
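The exponential decay can be made concrete with a tiny numeric sketch. The values of \(\gamma\), \(\lambda\), \(\alpha\), and the delay are arbitrary toy parameters, not taken from the paper's experiments:

```python
# Sketch of Eqs. (A201)-(A204): the eligibility-trace update scales the TD
# error by (gamma*lam)**k for a state visited k steps before the single
# non-zero TD error (the delayed reward at the episode end).

gamma, lam, alpha = 0.99, 0.9, 0.1
T = 20                       # delay: reward arrives T steps after the visit

# Accumulating trace for a state visited once at t = 0, never revisited.
trace = 1.0
for _ in range(T):
    trace *= gamma * lam     # e_{t+k}(s_t, a_t) = (gamma*lam)**k

delta = 1.0                  # the only non-zero TD error, at the episode end
update = alpha * delta * trace

assert abs(trace - (gamma * lam) ** T) < 1e-12
# The update on the early state is exponentially small in the delay T:
assert update < alpha * 0.2
```

Even with the fairly large \(\lambda = 0.9\), after 20 delay steps the update reaching the first state is roughly a tenth of an undecayed update, and it shrinks exponentially as the delay grows.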
A3.4 MC affects the Variance of Exponentially Many Estimates with Delayed Reward
The variance for Monte Carlo is
\[ V(s,a) \;=\; \mathrm{Var}_r\left[r\mid s,a\right] \;+\; \gamma^2\Big(\mathrm{E}_{s',a'}\left[V(s',a')\mid s,a\right] + \mathrm{Var}_{s',a'}\left[q^\pi(s',a')\mid s,a\right]\Big). \tag{A205} \]
This is a Bellman equation for the variance. For undiscounted reward, \(\gamma = 1\), we obtain
\[ V(s,a) \;=\; \mathrm{Var}_r\left[r\mid s,a\right] \;+\; \mathrm{E}_{s',a'}\left[V(s',a')\mid s,a\right] \;+\; \mathrm{Var}_{s',a'}\left[q^\pi(s',a')\mid s,a\right]. \tag{A206} \]
If we define the “on-site” variance \(\omega\) as
\[ \omega(s,a) \;=\; \mathrm{Var}_r\left[r\mid s,a\right] \;+\; \mathrm{Var}_{s',a'}\left[q^\pi(s',a')\mid s,a\right], \tag{A207} \]
we get
\[ V(s,a) \;=\; \omega(s,a) \;+\; \mathrm{E}_{s',a'}\left[V(s',a')\mid s,a\right]. \tag{A208} \]
This is the solution of the general formulation of the Bellman operator. The Bellman operator is defined component-wise for any variance \(V\) as
\[ \mathrm{T}[V](s,a) \;=\; \omega(s,a) \;+\; \mathrm{E}_{s',a'}\left[V(s',a')\mid s,a\right]. \tag{A209} \]
According to the results in Section A7.1, for proper policies a unique fixed point \(V\) exists:
\[ V \;=\; \mathrm{T}[V], \tag{A210} \]
\[ V \;=\; \lim_{k\to\infty} (\mathrm{T})^k\, \bar{V}, \tag{A211} \]
where \(\bar{V}\) is any initial variance. In Section A7.1 it was shown that the operator \(\mathrm{T}\) is continuous, monotonically increasing (component-wise), and a contraction mapping for a weighted sup-norm. If we let the operator \(\mathrm{T}\) depend on the on-site variance \(\omega\), written \(\mathrm{T}_\omega\), then it is monotonic in \(\omega\). We obtain component-wise for \(\omega \geq \tilde{\omega}\):
\[ \mathrm{T}_\omega[q](s,a) \;-\; \mathrm{T}_{\tilde{\omega}}[q](s,a) \;=\; \big(\omega(s,a) + \mathrm{E}_{s',a'}[q(s',a')]\big) \;-\; \big(\tilde{\omega}(s,a) + \mathrm{E}_{s',a'}[q(s',a')]\big) \;=\; \omega(s,a) - \tilde{\omega}(s,a) \;\geq\; 0. \tag{A212} \]
It follows for the fixed points \(V\) of \(\mathrm{T}_\omega\) and \(\widetilde{V}\) of \(\mathrm{T}_{\tilde{\omega}}\):
\[ V(s,a) \;\geq\; \widetilde{V}(s,a). \tag{A213} \]
Therefore, if
\[ \omega(s,a) \;=\; \mathrm{Var}_r\left[r\mid s,a\right] + \mathrm{Var}_{s',a'}\left[q^\pi(s',a')\mid s,a\right] \;\geq\; \tilde{\omega}(s,a) \;=\; \widetilde{\mathrm{Var}}_r\left[r\mid s,a\right] + \widetilde{\mathrm{Var}}_{s',a'}\left[q^\pi(s',a')\mid s,a\right], \tag{A214} \]
then
\[ V(s,a) \;\geq\; \widetilde{V}(s,a). \tag{A215} \]
Theorem A9. Starting from the sequence end at \(t = T\), as long as \(\omega(s_t,a_t) \geq \tilde{\omega}(s_t,a_t)\) holds, the following also holds:
\[ V(s_t,a_t) \;\geq\; \widetilde{V}(s_t,a_t). \tag{A216} \]
If for \((s_t,a_t)\) the strict inequality \(\omega(s_t,a_t) > \tilde{\omega}(s_t,a_t)\) holds, then we have the strict inequality
\[ V(s_t,a_t) \;>\; \widetilde{V}(s_t,a_t). \tag{A217} \]
If \(p(s_t,a_t\mid s_{t-1},a_{t-1}) \neq 0\) for some \((s_{t-1},a_{t-1})\), then
\[ \mathrm{E}_{s_t,a_t}\left[V(s_t,a_t)\mid s_{t-1},a_{t-1}\right] \;>\; \mathrm{E}_{s_t,a_t}\big[\widetilde{V}(s_t,a_t)\mid s_{t-1},a_{t-1}\big]. \tag{A218} \]
Therefore, the strict inequality \(\omega(s_t,a_t) > \tilde{\omega}(s_t,a_t)\) is propagated back as a strict inequality of variances.
Proof. Proof by induction. Induction base: \(V(s_{T+1},a_{T+1}) = \widetilde{V}(s_{T+1},a_{T+1}) = 0\) and \(\omega(s_T,a_T) = \tilde{\omega}(s_T,a_T) = 0\).
Induction step (\((t+1) \to t\)): The induction hypothesis is that for all \((s_{t+1},a_{t+1})\) we have
\[ V(s_{t+1},a_{t+1}) \;\geq\; \widetilde{V}(s_{t+1},a_{t+1}) \tag{A219} \]
and \(\omega(s_t,a_t) \geq \tilde{\omega}(s_t,a_t)\). It follows that
\[ \mathrm{E}_{s_{t+1},a_{t+1}}\left[V(s_{t+1},a_{t+1})\right] \;\geq\; \mathrm{E}_{s_{t+1},a_{t+1}}\big[\widetilde{V}(s_{t+1},a_{t+1})\big]. \tag{A220} \]
We obtain
\[ V(s_t,a_t) \;-\; \widetilde{V}(s_t,a_t) \;=\; \Big(\omega(s_t,a_t) + \mathrm{E}_{s_{t+1},a_{t+1}}\left[V(s_{t+1},a_{t+1})\right]\Big) \;-\; \Big(\tilde{\omega}(s_t,a_t) + \mathrm{E}_{s_{t+1},a_{t+1}}\big[\widetilde{V}(s_{t+1},a_{t+1})\big]\Big) \;\geq\; \omega(s_t,a_t) - \tilde{\omega}(s_t,a_t) \;\geq\; 0. \tag{A221} \]
If for \((s_t,a_t)\) the strict inequality \(\omega(s_t,a_t) > \tilde{\omega}(s_t,a_t)\) holds, then we have the strict inequality \(V(s_t,a_t) > \widetilde{V}(s_t,a_t)\). If \(p(s_t,a_t\mid s_{t-1},a_{t-1}) \neq 0\) for some \((s_{t-1},a_{t-1})\), then
\[ \mathrm{E}_{s_t,a_t}\left[V(s_t,a_t)\mid s_{t-1},a_{t-1}\right] \;>\; \mathrm{E}_{s_t,a_t}\big[\widetilde{V}(s_t,a_t)\mid s_{t-1},a_{t-1}\big]. \tag{A222} \]
Therefore, the strict inequality \(\omega(s_t,a_t) > \tilde{\omega}(s_t,a_t)\) is propagated back as a strict inequality of variances as long as \(p(s_t,a_t\mid s_{t-1},a_{t-1}) \neq 0\) for some \((s_{t-1},a_{t-1})\).
The induction goes through as long as \(\omega(s_t,a_t) \geq \tilde{\omega}(s_t,a_t)\).
In Stephen Patek’s PhD thesis [89], Lemma 5.1 on pages 88-89 and the proof thereafter state that if \(\tilde{\omega}(s,a) = \omega(s,a) - \epsilon\), then the solution \(\widetilde{V}\) is continuous and decreasing in \(\epsilon\). From the inequality above it follows that
\[ V(s,a) \;-\; \widetilde{V}(s,a) \;=\; \big(\mathrm{T}_\omega V\big)(s,a) \;-\; \big(\mathrm{T}_{\tilde{\omega}} \widetilde{V}\big)(s,a) \;=\; \omega(s,a) - \tilde{\omega}(s,a) \;+\; \mathrm{E}_{s',a'}\big[V(s',a') - \widetilde{V}(s',a')\mid s,a\big] \;\geq\; \omega(s,a) - \tilde{\omega}(s,a). \tag{A223} \]
Time-Agnostic States. We defined a Bellman operator as
\[ \mathrm{T}[V](s,a) \;=\; \omega(s,a) \;+\; \sum_{s'} p(s'\mid s,a) \sum_{a'} \pi(a'\mid s')\, V(s',a') \;=\; \omega(s,a) \;+\; V^{T} p(s,a), \tag{A224} \]
where \(V\) is the vector with value \(V(s',a')\) at position \((s',a')\) and \(p(s,a)\) is the vector with value \(p(s'\mid s,a)\,\pi(a'\mid s')\) at position \((s',a')\). The fixed-point equation is known as the Bellman equation. In vector and matrix notation the Bellman equation reads
\[ \mathrm{T}[V] \;=\; \omega \;+\; P\, V, \tag{A225} \]
where \(P\) is the row-stochastic matrix with \(p(s'\mid s,a)\,\pi(a'\mid s')\) at position \(((s,a),(s',a'))\). We assume that the set of state-actions \(\{(s,a)\}\) is equal to the set of next state-actions \(\{(s',a')\}\); therefore, \(P\) is a square row-stochastic matrix. This Bellman operator has the same characteristics as the Bellman operator for the action-value function \(q^\pi\).
Since \(P\) is a row-stochastic matrix, the Perron-Frobenius theorem says that (1) \(P\) has largest eigenvalue 1, whose eigenvector corresponds to the steady state, and (2) the absolute value of each (complex) eigenvalue is smaller than or equal to 1. Only the eigenvector to eigenvalue 1 has purely positive real components. Equation 7 of Bertsekas and Tsitsiklis, 1991, [13] states that applying the operator \(\mathrm{T}\) recursively \(t\) times can be written as
\[ (\mathrm{T})^t[V] \;=\; \sum_{k=0}^{t-1} P^k \omega \;+\; P^t V. \tag{A226} \]
In particular for \(V = \mathbf{0}\), we obtain
\[ (\mathrm{T})^t[\mathbf{0}] \;=\; \sum_{k=0}^{t-1} P^k \omega. \tag{A228} \]
For finite-horizon MDPs, the values \(V = \mathbf{0}\) are correct for time step \(T+1\), since no reward for \(t > T+1\) exists. Therefore, the “backward induction algorithm” [95, 96] gives the correct solution:
\[ V \;=\; (\mathrm{T})^T[\mathbf{0}] \;=\; \sum_{k=0}^{T-1} P^k \omega. \tag{A229} \]
The product of square stochastic matrices is a stochastic matrix; therefore, \(P^k\) is a stochastic matrix. The Perron-Frobenius theorem states that the spectral radius \(R(P^k)\) of the stochastic matrix \(P^k\) is \(R(P^k) = 1\). Furthermore, the largest eigenvalue is 1 and all eigenvalues have absolute values smaller than or equal to one. Therefore, \(\omega\) can have a large influence on \(V\) at every time step.
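Backward induction and the closed form of Eq. (A229) are easy to cross-check numerically. The transition matrix and on-site variances below are randomly generated toy data, not from the paper:

```python
# Sketch: backward induction V = sum_{k=0}^{T-1} P^k omega, cf. Eq. (A229),
# on a tiny hypothetical MDP.
import numpy as np

rng = np.random.default_rng(0)

n = 4                          # number of state-action pairs
T = 6                          # horizon
# Hypothetical row-stochastic transition matrix P.
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)
omega = rng.random(n)          # hypothetical on-site variances

# Backward induction: V_{T+1} = 0, then V <- omega + P V, T times.
V = np.zeros(n)
for _ in range(T):
    V = omega + P @ V

# Closed form: V = sum_{k=0}^{T-1} P^k omega.
V_closed = sum(np.linalg.matrix_power(P, k) @ omega for k in range(T))
assert np.allclose(V, V_closed)

# P^k stays row-stochastic, so omega keeps full weight at every step.
assert np.allclose(np.linalg.matrix_power(P, T).sum(axis=1), 1.0)
```

The last assertion illustrates the point of the argument: because \(P^k\) remains row-stochastic, the on-site variance \(\omega\) is never attenuated as it is propagated back.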
Time-Aware States. Next we consider time-aware MDPs, where transitions occur only from states \(s_t\) to \(s_{t+1}\). The transition matrix from states \(s_t\) to \(s_{t+1}\) is denoted by \(P_t\). We assume that the \(P_t\) are row-stochastic matrices which are rectangular, that is, \(P_t \in \mathbb{R}^{m\times n}\).
Definition A12. A row-stochastic matrix \(A \in \mathbb{R}^{m\times n}\) has non-negative entries, and the entries of each row sum up to one.
It is known that the product of square stochastic matrices \(A \in \mathbb{R}^{n\times n}\) is a stochastic matrix. We show in the next lemma that this also holds for rectangular matrices.
Lemma A4. The product \(C = A B\) with \(C \in \mathbb{R}^{m\times k}\) of a row-stochastic matrix \(A \in \mathbb{R}^{m\times n}\) and a row-stochastic matrix \(B \in \mathbb{R}^{n\times k}\) is row-stochastic.
Proof. All entries of \(C\) are non-negative, since they are sums and products of non-negative entries of \(A\) and \(B\). The row entries of \(C\) sum up to one:
\[ \sum_{k} C_{ik} \;=\; \sum_{k}\sum_{j} A_{ij}\, B_{jk} \;=\; \sum_{j} A_{ij} \sum_{k} B_{jk} \;=\; \sum_{j} A_{ij} \;=\; 1. \tag{A230} \]
We will use the \(\infty\)-norm and the 1-norm of a matrix, which are defined based on the \(\infty\)-norm \(\lVert x\rVert_\infty = \max_i |x_i|\) and the 1-norm \(\lVert x\rVert_1 = \sum_i |x_i|\) of a vector \(x\).
Definition A13. The \(\infty\)-norm of a matrix is the maximum absolute row sum:
\[ \lVert A\rVert_\infty \;=\; \max_{\lVert x\rVert_\infty = 1} \lVert A x\rVert_\infty \;=\; \max_i \sum_j |A_{ij}|. \tag{A231} \]
The 1-norm of a matrix is the maximum absolute column sum:
\[ \lVert A\rVert_1 \;=\; \max_{\lVert x\rVert_1 = 1} \lVert A x\rVert_1 \;=\; \max_j \sum_i |A_{ij}|. \tag{A232} \]
The statements of the next lemma are known as the Perron-Frobenius theorem for square stochastic matrices \(A \in \mathbb{R}^{n\times n}\), e.g. that the spectral radius \(R\) is \(R(A) = 1\). We extend the theorem to an “\(\infty\)-norm equals one” property for rectangular stochastic matrices \(A \in \mathbb{R}^{m\times n}\).
Lemma A5 (Perron-Frobenius). If \(A \in \mathbb{R}^{m\times n}\) is a row-stochastic matrix, then
\[ \lVert A\rVert_\infty \;=\; 1, \qquad \lVert A^{T}\rVert_1 \;=\; 1, \qquad \text{and for } n = m:\;\; R(A) \;=\; 1. \tag{A233} \]
Proof. \(A \in \mathbb{R}^{m\times n}\) is a row-stochastic matrix; therefore, \(A_{ij} = |A_{ij}|\). Furthermore, the rows of \(A\) sum up to one. Thus, \(\lVert A\rVert_\infty = 1\). Since the column sums of \(A^T\) are the row sums of \(A\), it follows that \(\lVert A^{T}\rVert_1 = 1\).
For square stochastic matrices, that is \(m = n\), Gelfand’s formula (1941) says that for any matrix norm \(\lVert\cdot\rVert\), the spectral radius \(R(A)\) of a matrix \(A \in \mathbb{R}^{n\times n}\) is:
\[ R(A) \;=\; \lim_{k\to\infty} \lVert A^k\rVert^{1/k}. \tag{A234} \]
Since the product of row-stochastic matrices is a row-stochastic matrix, \(A^k\) is a row-stochastic matrix. Consequently, \(\lVert A^k\rVert_\infty = 1\) and \(\lVert A^k\rVert_\infty^{1/k} = 1\). Therefore, the spectral radius \(R(A)\) of a row-stochastic matrix \(A \in \mathbb{R}^{n\times n}\) is
\[ R(A) \;=\; 1. \tag{A235} \]
Alternatively, the last statement follows directly from the Perron-Frobenius theorem, which says that the spectral radius of a stochastic matrix is 1.
Using random matrix theory, we can estimate how much the largest singular value of a rectangular matrix deviates from that of a square matrix. Let \(A \in \mathbb{R}^{m\times n}\) be a matrix whose entries are independent copies of some random variable with zero mean, unit variance, and finite fourth moment. The Marchenko-Pastur quarter circle law for rectangular matrices says that for \(n = m\) the maximal singular value is \(2\sqrt{m}\) [79]. Asymptotically, the maximal singular value behaves as \(s_{\max}(A) \propto \sqrt{m} + \sqrt{n}\) [104]. A bound on the largest singular value is given by [122]:
\[ s^2_{\max}(A) \;\leq\; \big(\sqrt{m} + \sqrt{n}\big)^2 \;+\; O\big(\sqrt{n}\,\log(n)\big) \quad \text{a.s.} \tag{A236} \]
Therefore, a rectangular matrix modifies the largest singular value by a factor of \(a = 0.5\,(1 + \sqrt{n/m})\) compared to an \(m\times m\) square matrix. For time-aware states, transitions only occur from states \(s_t\) to \(s_{t+1}\); the transition matrix from states \(s_t\) to \(s_{t+1}\) is denoted by \(P_t\).
States affected by the on-site variance \(\omega_k\) (reachable states). Typically, states in \(s_t\) have only few predecessor states in \(s_{t-1}\) compared to \(N_{t-1}\), the number of possible states in \(s_{t-1}\). Only for those states in \(s_{t-1}\) is the transition probability to the state in \(s_t\) larger than zero. That is, each \(i \in s_{t+1}\) has only few \(j \in s_t\) for which \(p_t(i\mid j) > 0\). We now want to know how many states have increased variance due to \(\omega_k\), that is, how many states are affected by \(\omega_k\). In a general setting, we assume random connections.
Let \(N_t\) be the number of all states \(s_t\) that are reachable after \(t\) time steps of an episode. \(\bar{N} = \frac{1}{k}\sum_{t=1}^{k} N_t\) is the arithmetic mean of the \(N_t\). Let \(c_t\) be the average connectivity of a state in \(s_t\) to states in \(s_{t-1}\), and let \(\bar{c} = \big(\prod_{t=1}^{k} c_t\big)^{1/k}\) be the geometric mean of the \(c_t\). Let \(n_t\) be the number of states in \(s_t\) that are affected by the on-site variance \(\omega_k\) at time \(k\), for \(t \leq k\). The number of states affected by \(\omega_k\) is \(a_k = \sum_{t=0}^{k} n_t\). We assume that \(\omega_k\) has only one component larger than zero, that is, only one state at time \(t = k\) is affected: \(n_k = 1\). The number of affected edges from \(s_t\) to \(s_{t-1}\) is \(c_t n_t\). However, states in \(s_{t-1}\) may be affected multiple times by different affected states in \(s_t\). Figure A1 shows examples of how affected states affect states in a previous time step. The left panel shows no overlap, since affected states in \(s_{t-1}\) connect only to one affected state in \(s_t\). The right panel shows some overlap, since affected states in \(s_{t-1}\) connect to multiple affected states in \(s_t\).
The next theorem states that the on-site variance \(\omega_k\) can have a large effect on the variance of each previous state-action pair. Furthermore, for small \(k\) the number of affected states grows exponentially, while for large \(k\) it grows only linearly after some time \(\hat{t}\). Figure A2 shows the function which determines how much \(a_k\) grows with \(k\).
Theorem A10. For \(t \leq k\), \(\omega_k\) contributes to \(V_t\) by the term \(P_{t\to k}\,\omega_k\), where \(\lVert P_{t\to k}\rVert_\infty = 1\). The number \(a_k\) of states affected by the on-site variance \(\omega_k\) is
\[ a_k \;=\; \sum_{t=0}^{k} \left(1 - \Big(1 - \frac{c_t}{N_{t-1}}\Big)^{n_t}\right) N_{t-1}. \tag{A237} \]
[Figure A1 (two panels of state-layer diagrams over states \(s_{t,j}\), \(t = 1,\ldots,5\); image omitted).] Figure A1: Examples of how affected states (cyan) affect states in a previous time step (indicated by cyan edges), starting with \(n_5 = 1\) (one affected state). The left panel shows no overlap, since affected states in \(s_{t-1}\) connect only to one affected state in \(s_t\). The right panel shows some overlap, since affected states in \(s_{t-1}\) connect to multiple affected states in \(s_t\).
Proof. The “backward induction algorithm” [95, 96] gives, with \(V_{T+1} = \mathbf{0}\) and on-site variance \(\omega_{T+1} = \mathbf{0}\):
\[ V_t \;=\; \sum_{k=t}^{T} \prod_{\tau=t}^{k-1} P_\tau\; \omega_k, \tag{A238} \]
where we define \(\prod_{\tau=t}^{t-1} P_\tau = I\) and \([\omega_k]_{(s_k,a_k)} = \omega(s_k,a_k)\).
Since the product of two row-stochastic matrices is a row-stochastic matrix according to Lemma A4, \(P_{t\to k} = \prod_{\tau=t}^{k-1} P_\tau\) is a row-stochastic matrix. Since \(\lVert P_{t\to k}\rVert_\infty = 1\) according to Lemma A5, each on-site variance \(\omega_k\) with \(t \leq k\) can have large effects on \(V_t\). Using the row-stochastic matrices \(P_{t\to k}\), we can reformulate the variance:
\[ V_t \;=\; \sum_{k=t}^{T} P_{t\to k}\; \omega_k, \tag{A239} \]
with \(\lVert P_{t\to k}\rVert_\infty = 1\). The on-site variance \(\omega_k\) at step \(k\) increases all variances \(V_t\) with \(t \leq k\).
Next we prove the second part of the theorem, which considers the growth of \(a_k\). To compute \(a_k\), we first have to know \(n_t\). For computing \(n_{t-1}\) from \(n_t\), we want to know how many states are affected in \(s_{t-1}\) if \(n_t\) states are affected in \(s_t\). The answer to this question is the expected coverage when searching a document collection using a set of independent computers [19]. We follow the approach of Cox et al. [19]. The minimal number of affected states in \(s_{t-1}\) is \(c_t\), where each of the \(c_t\) affected states in \(s_{t-1}\) connects to each of the \(n_t\) affected states in \(s_t\) (maximal overlap). The maximal number of affected states in \(s_{t-1}\) is \(c_t n_t\), where each affected state in \(s_{t-1}\) connects to only one affected state in \(s_t\) (no overlap). We consider a single state in \(s_t\). The probability of a state in \(s_{t-1}\) being connected to this single state in \(s_t\) is \(c_t/N_{t-1}\), and of not being connected to it is \(1 - c_t/N_{t-1}\). The probability of a state in \(s_{t-1}\) not being connected to any of the \(n_t\) affected states in \(s_t\) is
\[ \Big(1 - \frac{c_t}{N_{t-1}}\Big)^{n_t}. \tag{A240} \]
The probability of a state in \(s_{t-1}\) being connected to at least one of the \(n_t\) affected states in \(s_t\) is
\[ 1 \;-\; \Big(1 - \frac{c_t}{N_{t-1}}\Big)^{n_t}. \tag{A241} \]
[Figure A2 (plot of \(1 - (1 - c/N)^x\) for \(c/N = 0.1\) and \(c/N = 0.3\); image omitted).] Figure A2: The function \(\big(1 - (1 - \frac{c_t}{N_{t-1}})^{n_t}\big)\), which scales \(N_{t-1}\) in Theorem A10. This function determines the growth of \(a_k\), which is exponential at the beginning and then linear once the function approaches 1.
Thus, the expected number of distinct states in s_{t-1} connected to one of the n_t affected states in s_t is

    n_{t-1} = ( 1 - (1 - c_t / N_{t-1})^{n_t} ) N_{t-1} .    (A242)

The number a_k of states affected by \Omega_k is

    a_k = \sum_{t=0}^{k} ( 1 - (1 - c_t / N_{t-1})^{n_t} ) N_{t-1} .    (A243)
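Assuming constant connectivity c and constant layer size N (made-up values for illustration), Eqs. (A242)/(A243) can be iterated numerically. In units of N, the fixed point of the coverage map solves x = 1 - e^{-cx}, so the affected-state count grows exponentially at first and then saturates:

```python
# Growth of the number of affected states, iterating the expected
# coverage (A242) with constant c and N (hypothetical values).
c, N = 2.0, 1000.0

m = 1.0          # affected states at the current step
a = 0.0          # accumulated number of affected states, as in (A243)
sizes = []
for _ in range(25):
    a += m
    sizes.append(m)
    m = (1.0 - (1.0 - c / N) ** m) * N   # expected coverage (A242)

# early steps: roughly exponential growth by factor c = 2
assert sizes[5] / sizes[4] > 1.9
# late steps: saturation near the fixed point x = 1 - exp(-2x), x ~ 0.797
assert sizes[-1] > 0.75 * N
```

Once `m` saturates, each further step adds roughly the same amount to `a`, which is the linear-growth regime of Corollary A2.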
Corollary A2. For small k, the number a_k of states affected by the on-site variance \Omega_k at step k grows exponentially with k by a factor of c:

    a_k > c^k .    (A244)

For large k, after some time t > \hat{t}, the number a_k of states affected by \Omega_k grows linearly with k by a factor of \bar{N}:

    a_k \approx a_{\hat{t}-1} + (k - \hat{t} + 1) \bar{N} .    (A245)
Proof. For small n_t with c_t n_t / N_{t-1} \ll 1, we have

    (1 - c_t / N_{t-1})^{n_t} \approx 1 - c_t n_t / N_{t-1} ,    (A246)

thus

    n_{t-1} \approx c_t n_t .    (A247)
For N_{t-1} large compared to the number of connections c_t of a single state in s_t to states in s_{t-1}, we have the approximation

    (1 - c_t / N_{t-1})^{n_t} = ( (1 - c_t / N_{t-1})^{N_{t-1}} )^{n_t / N_{t-1}} \approx \exp( - c_t n_t / N_{t-1} ) .    (A248)

We obtain

    n_{t-1} = ( 1 - \exp( - c_t n_t / N_{t-1} ) ) N_{t-1} .    (A249)

For small n_t, we again have

    n_{t-1} \approx c_t n_t .    (A250)
Therefore, for small k - t, we obtain

    n_t \approx \prod_{\tau=t+1}^{k} c = c^{k-t} .    (A251)

Thus, for small k the number a_k of states affected by \Omega_k is

    a_k = \sum_{t=0}^{k} n_t \approx \sum_{t=0}^{k} c^{k-t} = \sum_{t=0}^{k} c^{t} = (c^{k+1} - 1) / (c - 1) > c^k .    (A252)
Consequently, for small k the number a_k of states affected by \Omega_k grows exponentially with k by a factor of c. For large k, at a certain time t > \hat{t}, n_t has grown such that c_t n_t > N_{t-1}, yielding \exp( - c_t n_t / N_{t-1} ) \approx 0, and thus

    n_t \approx N_t .    (A253)

Therefore

    a_k - a_{\hat{t}-1} = \sum_{t=\hat{t}}^{k} n_t \approx \sum_{t=\hat{t}}^{k} N_t \approx (k - \hat{t} + 1) \bar{N} .    (A254)

Consequently, for large k the number a_k of states affected by \Omega_k grows linearly with k by a factor of \bar{N}.
Therefore, we aim to decrease the on-site variance \Omega_k for large k in order to reduce the overall variance. In particular, we want to avoid delayed rewards and instead provide the reward as early as possible in each episode, which reduces the variance of the action-values that would otherwise be affected by late rewards and their associated immediate and local variances.
A4 Experiments
A4.1 Artificial Tasks
This section provides more details for the artificial tasks (I), (II), and (III) in the main paper. Additionally, we include artificial task (IV), characterized by deterministic reward and state transitions, and artificial task (V), which is solved using policy gradient methods.
A4.1.1 Task (I): Grid World
This environment is characterized by probabilistic delayed rewards. It illustrates a situation, where
a time bomb explodes at episode end. The agent has to defuse the bomb and then run away as far
as possible since defusing fails with a certain probability. Alternatively, the agent can immediately
run away, which, however, leads to less reward on average since the bomb always explodes. The
Grid World is a quadratic 3131grid with bomb at coordinate [30;15]andstart at[30 d;15],
wheredis the delay of the task. The agent can move in four different directions ( up,right ,left, and
down ). Only moves are allowed that keep the agent on the grid. The episode finishes after 1:5dsteps.
At the end of the episode, with a given probability of 0.5, the agent receives a reward of 1000 if it
has visited bomb . At each time step the agent receives an immediate reward of cth, where the
factorcdepends on the chosen action, tis the current time step, and his the Hamming distance to
bomb . Each move of the agent, which reduces the Hamming distance to bomb , is penalized by the
immediate reward via c= 0:09. Each move of the agent, which increases the Hamming distance to
bomb , is rewarded by the immediate reward via c= 0:1. The agent is forced to learn the Q-values
precisely, since the immediate reward of directly running away hints at a sub-optimal policy.
For non-deterministic reward, the agent receives the delayed reward for having visited bomb with
probabilityp(rT+1= 100jsT;aT). For non-deterministic transitions, the probability of transiting
to next state s0isp(s0js;a). For the deterministic environment these probabilities were either 1 or
zero.
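The dynamics described above can be sketched as follows. Class and method names are our own, and we use the Manhattan distance of the grid coordinates as a stand-in for the Hamming distance mentioned in the text:

```python
import random

# Minimal sketch of the Grid World bomb task (reward constants taken from
# the text; the class itself is a hypothetical reconstruction).
class GridWorldBomb:
    def __init__(self, delay, seed=0):
        self.delay = delay
        self.bomb = (30, 15)
        self.rng = random.Random(seed)

    def reset(self):
        self.pos = (30 - self.delay, 15)
        self.t = 0
        self.visited_bomb = False
        return self.pos

    def dist(self, pos):
        # Manhattan distance as a stand-in for the "Hamming distance" h
        return abs(pos[0] - self.bomb[0]) + abs(pos[1] - self.bomb[1])

    def step(self, action):  # 0: up, 1: right, 2: left, 3: down
        dx, dy = [(-1, 0), (0, 1), (0, -1), (1, 0)][action]
        old_h = self.dist(self.pos)
        x = min(max(self.pos[0] + dx, 0), 30)   # moves off-grid are ignored
        y = min(max(self.pos[1] + dy, 0), 30)
        self.pos = (x, y)
        self.t += 1
        h = self.dist(self.pos)
        c = -0.09 if h < old_h else 0.1   # penalize approaching the bomb
        reward = c * self.t * h
        self.visited_bomb |= self.pos == self.bomb
        done = self.t >= int(1.5 * self.delay)
        if done and self.visited_bomb and self.rng.random() < 0.5:
            reward += 1000.0   # delayed reward: defusing succeeded
        return self.pos, reward, done
```

Walking straight toward the bomb for a delay of 10 reaches it in 10 steps and terminates after 15 steps, illustrating the immediate-reward distraction on the way.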
Policy evaluation: learning the action-value estimator for a fixed policy. First, the theoretical
statements on bias and variance of estimating the action-values by TD in Theorem A8 and by MC
in Theorem A10 are experimentally verified for a fixed policy. Secondly, we consider the bias and
variance of TD and MC estimators of the transformed MDP with optimal reward redistribution
according to Theorem A5.
The new MDP with an optimal reward redistribution has advantages over the original MDP both for TD and for MC. For TD, the new MDP corrects the bias exponentially faster, and for MC it has fewer action-values with high variance. Consequently, estimators for the new MDP learn faster than the same estimators in the original MDP.
Since the bias-variance analysis is defined for a particular number of samples drawn from a fixed distribution, we need to fix the policy for sampling. We use an ε-greedy version of the optimal policy, where ε is chosen such that the agent visits bomb in 10% of the episodes on average. For the analysis, the delay ranges from 5 to 30 in steps of 5. The true Q-table for each delay is computed by backward induction, and we use 10 different action-value estimators for computing bias and variance.

For the TD update rule we use the exponentially weighted arithmetic mean, that is, sample updates, with initial value q_0(s, a) = 0. We only monitor the mean and the variance of the action-value estimators at the first time step, since we are interested in the time required for correcting the bias. 10 different estimators are run for 10,000 episodes. Figure A3a shows the bias correction for different delays, normalized by the first error.
For the MC update rule we use the arithmetic mean for policy evaluation (later we will use constant-α MC for learning the optimal policy). For each delay, a test set of state-actions is generated by drawing 5,000 episodes with the ε-greedy optimal policy. For each action-value estimator the mean and the variance are monitored every 10 visits. Once every action-value has 500 updates (visits), learning is stopped. Bias and variance are computed based on 10 different action-value estimators. As expected from Section A3.1, in Figure A3b the variance decreases as 1/n, where n is the number of samples. Figure A3b also shows that the number of state-actions with a variance larger than a threshold increases exponentially with the delay. This confirms the statements of Theorem A10.
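The 1/n behavior of the arithmetic-mean MC estimator can be illustrated directly. This is our own toy simulation, modeled on the task's delayed reward of 1000 received with probability 0.5:

```python
import random

rng = random.Random(0)

# Variance of the arithmetic-mean MC estimator decreases as 1/n
# (toy return distribution: delayed reward 1000 with probability 0.5).
def estimator_variance(n, trials=2000):
    means = []
    for _ in range(trials):
        g = sum(1000.0 if rng.random() < 0.5 else 0.0 for _ in range(n))
        means.append(g / n)   # arithmetic mean of n sampled returns
    mu = sum(means) / trials
    return sum((m - mu) ** 2 for m in means) / trials

v10, v100 = estimator_variance(10), estimator_variance(100)
assert 5.0 < v10 / v100 < 20.0   # roughly the predicted factor of 10
```

With a per-sample variance of 1000² · 0.25, the estimator variance is about 25,000 for n = 10 and about 2,500 for n = 100, matching the 1/n decrease.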
Learning the optimal policy. For finding the optimal policy for the Grid World task, we apply
Monte Carlo Tree Search (MCTS), Q-learning, and Monte Carlo (MC). We train until the greedy
policy reaches 90% of the return of the optimal policy. The learning time is measured by the number
of episodes. We use sample updates for Q-learning and MC [128]. For MCTS the greedy policy uses 0 for the exploration constant in UCB1 [68]. The greedy policy is evaluated at intervals of 100 episodes.

Figure A3: Experimental evaluation of bias and variance of different Q-value estimators on the Grid World. (a) Normalized bias reduction for different delays. (b) Average variance for the 10 highest-variance values, as a function of the number of samples.

The MCTS selection step begins in the start state, which is the root of the game tree that is
traversed using UCB1 [68] as the tree policy. When a tree node is visited for the first time, it is expanded with an initial value obtained from 100 simulated trajectories starting at this node. These simulations use a uniform random policy whose average return is calculated. The backpropagation step uses the MCTS(1) update rule [66]. The tree policy's exploration constant is √2. Q-learning and MC use a learning rate of 0.3 and an ε-greedy policy with ε = 0.3. For RUDDER, the optimal reward redistribution using a return decomposition as stated in Section A2.6.1 is used. For each delay and each method, 300 runs with different seeds are performed to obtain statistically relevant results.
Estimation of the median learning time and quantiles. The performance of different methods is measured by the median learning time in terms of episodes. We stop training at 100 million episodes. Some runs, especially for long delays, took too long and were therefore stopped. To remove the resulting bias, the quantiles of the learning time are estimated by fitting a distribution to right-censored data [33]. The median is still robustly estimated if more than 50% of the runs have finished, which is the case for all plotted data points. We find that for delays where all runs have finished, the learning time follows a log-normal distribution. Therefore, we fit a log-normal distribution to the right-censored data. We estimate the median from the existing data and use maximum likelihood estimation to obtain the second distribution parameter σ². The start value of the σ² estimation is calculated from the measured variance of the existing data, which is algebraically transformed to obtain the parameter.
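A maximum likelihood fit on right-censored data can be sketched as follows. This is our own illustration on synthetic data with made-up parameters, using a coarse grid search rather than the estimation procedure actually used: censored runs contribute the survival probability beyond the cap, uncensored runs the density.

```python
import math
import random

rng = random.Random(0)

# Synthetic log-normal learning times, right-censored at an episode cap
# (mu_true, sigma_true, and cap are made-up values for this sketch).
mu_true, sigma_true, cap = 10.0, 0.8, 60000.0
log_cap = math.log(cap)
logs, censored = [], []
for _ in range(300):
    t = rng.lognormvariate(mu_true, sigma_true)
    censored.append(t >= cap)
    logs.append(min(math.log(t), log_cap))

def log_likelihood(mu, sigma):
    """Censored log-normal likelihood: uncensored runs contribute the density
    (constant term dropped), censored runs the survival probability."""
    ll = 0.0
    for x, cen in zip(logs, censored):
        if cen:
            z = (log_cap - mu) / sigma
            ll += math.log(0.5 * math.erfc(z / math.sqrt(2.0)))
        else:
            ll += -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)
    return ll

# Coarse maximum likelihood estimation by grid search over (mu, sigma).
best = max(((mu, s) for mu in [9.0 + 0.02 * i for i in range(101)]
                    for s in [0.4 + 0.02 * j for j in range(41)]),
           key=lambda p: log_likelihood(*p))
median_hat = math.exp(best[0])   # the median of a log-normal is exp(mu)
```

Even with roughly 10% of the runs censored, the fit recovers mu and sigma, and hence the median, to within a few percent.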
A4.1.2 Task (II): The Choice
In this experiment we compare RUDDER, temporal difference (TD) and Monte Carlo (MC) in
an environment with delayed deterministic reward and probabilistic state transitions to investigate
how reward information is transferred back to early states. This environment is a variation of our
introductory pocket watch example and reveals problems of TD and MC, while contribution analysis
excels. In this environment, only the first action at the very beginning determines the reward at the
end of the episode.
The environment is an MDP consisting of two actions a ∈ A = {+, -}, an initial state s_0, two charged states s_+ and s_-, two neutral states s̄_+ and s̄_-, and a final state s_f. After the first action a_0 ∈ A = {+, -} in state s_0, the agent transits to state s_+ for action a_0 = + and to s_- for action a_0 = -. Subsequent state transitions are probabilistic and independent of actions. With probability p_C the agent stays in the charged state s_+ or s_-, and with probability (1 - p_C) it transits from s_+ or s_- to the neutral state s̄_+ or s̄_-, respectively. The probability to go from neutral states to charged states is p_C, and the probability to stay in neutral states is (1 - p_C). Probabilities to transit from s_+ or s̄_+ to s_- or s̄_-, or vice versa, are zero. Thus, the first action determines whether the agent stays in "+"-states or "-"-states.

Figure A4: State transition diagram for The Choice task. The diagram is a simplification of the actual MDP.

The reward is determined by how many times the agent visits charged states, plus a bonus reward depending on the agent's first action. The accumulated reward is given at sequence end and is deterministic. After T time steps, the agent is in the final state s_f, in which the reward R_{T+1} is provided. R_{T+1} is the sum of three deterministic terms:
1. R_0, the baseline reward associated with the first action;
2. R_C, the collected reward across states, which depends on the number of visits n to the charged states;
3. R_b, a bonus if the first action is a_0 = +.
The expected accumulated rewards R_0 and R_C have the same absolute value but opposite signs, therefore they cancel in expectation over episodes. Thus, the expected return of an episode is the expected bonus reward p(a_0 = +) b. The rewards are defined as follows:

    c_0 = 1 if a_0 = + ,  c_0 = -1 if a_0 = - ,    (A255)
    R_b = b if a_0 = + ,  R_b = 0 if a_0 = - ,    (A256)
    R_C = c_0 C n ,    (A257)
    R_0 = - c_0 C p_C T ,    (A258)
    R_{T+1} = R_C + R_0 + R_b ,    (A259)

where C is the baseline reward for charged states and p_C is the probability of staying in or transiting to charged states. The expected number of visits to charged states is E[n] = p_C T, and E[R_{T+1}] = E[R_b] = p(a_0 = +) b.
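A quick simulation illustrates the cancellation of R_0 and R_C. The parameter values below are made up, and the charged/neutral chain is simplified to independent Bernoulli visits so that E[n] = p_C T holds exactly:

```python
import random

rng = random.Random(0)
p_C, C, b, T = 0.3, 2.0, 5.0, 20   # made-up values for this sketch

def episode(first_action):
    c0 = 1.0 if first_action == "+" else -1.0
    # Simplification: each of the T steps is spent in a charged state with
    # probability p_C, so E[n] = p_C * T as in the text.
    n = sum(rng.random() < p_C for _ in range(T))
    R_C = c0 * C * n                            # collected reward (A257)
    R_0 = -c0 * C * p_C * T                     # baseline reward (A258)
    R_b = b if first_action == "+" else 0.0     # bonus (A256)
    return R_C + R_0 + R_b                      # R_{T+1} (A259)

mean_plus = sum(episode("+") for _ in range(20000)) / 20000
mean_minus = sum(episode("-") for _ in range(20000)) / 20000
```

In expectation, R_0 and R_C cancel, so `mean_plus` is close to b and `mean_minus` is close to zero; per episode, however, the return fluctuates with n, which is exactly what distracts TD and MC.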
Methods compared: The following methods are compared:

1. Q-learning with eligibility traces according to Watkins [140],
2. Monte Carlo (MC),
3. RUDDER with reward redistribution.

For RUDDER, we use an LSTM without lessons buffer and without safe exploration. Contribution analysis is realized by differences of return predictions. For MC, the Q-values are the exponential moving average of the episode return. For RUDDER, the Q-values are estimated by an exponential moving average of the reward redistribution.
Performance evaluation and results. The task is considered solved when the exponential moving average of the selection of the desired action at time t = 0 is equal to 1 - ε, where ε is the exploration rate. The performance of the compared methods is measured by the average learning time, i.e. the number of episodes required to solve the task. Statistical significance p-values are obtained by a Wilcoxon signed-rank test between the learning time of RUDDER and that of each other method. RUDDER with reward redistribution is significantly faster than all other methods, with p-values < 10^{-8}. Table A1 reports the number of episodes required by different methods to solve the task. RUDDER with reward redistribution clearly outperforms all other methods.
Table A1: Number of episodes required by different methods to solve The Choice task with delayed reward. Numbers give the mean and the standard deviation over 100 trials. RUDDER with reward redistribution clearly outperforms all other methods.

Method   Delay 10                           Delay 15                            Delay 20
RUDDER   3520.06 ± 2343.79 (p = 5.00E-01)   3062.07 ± 1278.92 (p = 5.00E-01)    3813.96 ± 2738.18 (p = 5.00E-01)
MC       10920.64 ± 7550.04 (p = 5.03E-24)  17102.89 ± 12640.09 (p = 1.98E-30)  22910.85 ± 19149.02 (p = 1.25E-28)
Q        66140.76 ± 1455.33 (p = 1.28E-34)  115352.25 ± 1962.20 (p = 1.28E-34)  171571.94 ± 2436.25 (p = 1.28E-34)

Method   Delay 25                    Delay 30                    Delay 35
MC       39772 ± 47460 (p < 1E-29)   41922 ± 36618 (p < 1E-30)   50464 ± 60318 (p < 1E-30)
Q        234912 ± 2673 (p < 1E-33)   305894 ± 2928 (p < 1E-33)   383422 ± 4346 (p < 1E-22)
RUDDER   4112 ± 3769                 3667 ± 1776                 3850 ± 2875

Method   Delay 40                    Delay 45                    Delay 50
MC       56945 ± 54150 (p < 1E-30)   69845 ± 79705 (p < 1E-31)   73243 ± 70399 (p = 1E-31)
Q        466531 ± 3515 (p = 1E-22)
RUDDER   3739 ± 2139                 4151 ± 2583                 3884 ± 2188

Method   Delay 100                     Delay 500
MC       119568 ± 110049 (p < 1E-11)   345533 ± 320232 (p < 1E-16)
RUDDER   4147 ± 2392                   5769 ± 4309
A4.1.3 Task(III): Trace-Back
This section supports the artificial task (III) – Trace-Back – in the main paper. RUDDER is compared
to potential-based reward shaping methods. In this experiment, we compare reinforcement learning
methods that have to transfer back information about a delayed reward. These methods comprise
RUDDER, TD(λ), and potential-based reward shaping approaches. For potential-based reward shaping we compare the original reward shaping [87], look-ahead advice, and look-back advice [143], each with three different potential functions. Methods that transfer back reward information are characterized by
low variance estimates of the value function or the action-value function, since they use an estimate
of the future return instead of the future return itself. To update the estimates of the future returns,
reward information has to be transferred back. The task in this experiment can be solved by Monte
Carlo estimates very fast, which do not transfer back information but use samples of the future
return for the estimation instead. However, Monte Carlo methods have high variance, which is not
considered in this experiment.
The environment is a 15×15 grid, where actions move the agent from its current position to one of 4 adjacent positions (up, down, left, right), except when the move would take the agent outside the grid. The number of steps (moves) per episode is T = 20. The starting position is (7, 7) in the middle of the grid. The maximal return is a combination of negative immediate reward and positive delayed reward. To obtain the maximum return, the policy must move the agent up in time step t = 1 and right in the following time step t = 2. In this case, the agent receives an immediate reward of -50 at t = 2 and a delayed reward of 150 at the end of the episode at t = 20, that is, a return of 100. Any other combination of actions gives the agent an immediate reward of 50 at t = 2 without any delayed reward, that is, a return of 50. To ensure Markov properties, the position of the agent, the time, as well as the delayed reward are coded in the state. The future reward discount rate γ is set to 1. The state transition probabilities are deterministic for the first two moves. For t > 2 and for each action, the state transition probabilities are equal for each possible next state (uniform distribution), meaning that actions after t = 2 do not influence the return. For comparisons of long delays, both the size of the grid and the length of the episode are increased. For a delay of n, a (3n/4)×(3n/4) grid is used, with an episode length of n and starting position (3n/8, 3n/8).
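The return structure above can be summarized in a few lines (the action labels are our own):

```python
# Return structure of the Trace-Back task (T = 20): only the first two
# actions matter; later transitions are uniformly random and reward-free.
def trace_back_return(a1, a2):
    if (a1, a2) == ("up", "right"):
        return -50.0 + 150.0   # immediate penalty at t = 2, delayed reward at t = 20
    return 50.0                # immediate reward at t = 2, no delayed reward

assert trace_back_return("up", "right") == 100.0
assert trace_back_return("up", "left") == 50.0
```

The immediate -50 versus +50 at t = 2 is the distraction: a method must transfer the delayed 150 back to the first two actions to prefer the optimal pair.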
Compared methods. We compare different TD(λ) and potential-based reward shaping methods. For TD(λ), the baseline is Q(λ) with eligibility traces λ = 0.9 and λ = 0, in Watkins' implementation [140]. The potential-based reward shaping methods are the original reward shaping, look-ahead advice, and look-back advice. For look-back advice, we use SARSA(λ) [105] instead of Q(λ), as suggested by the authors [143]. Q-values are represented by a state-action table, that is, we consider only tabular methods. In all experiments an ε-greedy policy with ε = 0.2 is used. All three reward shaping methods require a potential function Φ, which is based on the reward redistribution (R̃_t) in three different ways:
(I) The potential function is the difference of LSTM predictions, which is the redistributed reward R_t:

    \Phi(s_t) = E[ R_{t+1} | s_t ]    (A260)
or
    \Phi(s_t, a_t) = E[ R_{t+1} | s_t, a_t ] .    (A261)

(II) The potential function is the sum of future redistributed rewards, i.e. the q-value of the redistributed rewards. In the optimal case, this coincides with implementation (I):

    \Phi(s_t) = E[ \sum_{\tau=t}^{T} R_{\tau+1} | s_t ]    (A262)
or
    \Phi(s_t, a_t) = E[ \sum_{\tau=t}^{T} R_{\tau+1} | s_t, a_t ] .    (A263)

(III) The potential function corresponds to the LSTM predictions. In the optimal case this corresponds to the accumulated reward up to t plus the q-value of the delayed MDP:

    \Phi(s_t) = E[ \sum_{\tau=0}^{T} \tilde{R}_{\tau+1} | s_t ]    (A264)
or
    \Phi(s_t, a_t) = E[ \sum_{\tau=0}^{T} \tilde{R}_{\tau+1} | s_t, a_t ] .    (A265)
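All three variants plug a potential Φ into the standard shaping term F(s, s') = γΦ(s') - Φ(s) (look-ahead and look-back advice use action-dependent analogues). A minimal sketch with a toy tabular Φ of our own shows the telescoping property for γ = 1, the discount used in this task:

```python
# Potential-based reward shaping: the shaped reward adds
# F(s, s') = gamma * phi(s') - phi(s) to the environment reward.
# phi here is a toy table standing in for any of the variants (I)-(III).
gamma = 1.0
phi = {"s0": 0.0, "s1": 30.0, "s2": 100.0}

def shaped_reward(r, s, s_next):
    return r + gamma * phi[s_next] - phi[s]

# For gamma = 1 the shaping terms telescope along any trajectory, so the
# shaped return differs from the original only by phi(s_T) - phi(s_0):
trajectory = [("s0", -50.0, "s1"), ("s1", 0.0, "s2")]
orig = sum(r for _, r, _ in trajectory)
shaped = sum(shaped_reward(r, s, sn) for s, r, sn in trajectory)
assert shaped == orig + phi["s2"] - phi["s0"]
```

Because the correction telescopes, the optimal policy is unchanged; a good Φ only front-loads the delayed reward information, which is why the quality of the potential function matters for learning speed.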
The following methods are compared:

1. Q-learning with eligibility traces according to Watkins (Q(λ)),
2. SARSA with eligibility traces (SARSA(λ)),
3. reward shaping with potential functions (I), (II), or (III), combined with Q-learning and eligibility traces according to Watkins,
4. look-ahead advice with potential functions (I), (II), or (III) with Q(λ),
5. look-back advice with potential functions (I), (II), or (III) with SARSA(λ),
6. RUDDER with reward redistribution for Q-value estimation, and RUDDER applied on top of Q-learning.

RUDDER is implemented with an LSTM architecture without output gate or forget gate. For these experiments, RUDDER uses neither lessons buffer nor safe exploration. For contribution analysis we use differences of return predictions. For RUDDER, the Q-values are estimated by an exponential moving average (RUDDER Q-value estimation) or, alternatively, by Q-learning.
Performance evaluation: The task is considered solved when the exponential moving average of the return is above 90, which is 90% of the maximum return. Learning time is the number of episodes required to solve the task. The first evaluation criterion is the average learning time. The Q-value differences at time step t = 2 are monitored. The Q-values at t = 2 are the most important ones, since they have to predict whether the maximal return will be received or not. At t = 2 the immediate reward acts as a distraction, since it is -50 for the action leading to the maximal return (a_+) and 50 for all other actions (a_-). At the beginning of learning, the Q-value difference between a_+ and a_- is about -100, since the immediate rewards are -50 and 50, respectively. Once the Q-values converge to the optimal policy, the difference approaches 50. However, the task is already correctly solved as soon as this difference is positive. The second evaluation criterion is therefore the Q-value difference at time step t = 2, since it directly shows to what extent the task is solved.
Results: Tables A2 and A3 report the number of episodes required by different methods to solve the task; the mean and the standard deviation over 100 trials are given. Statistical significance p-values are obtained by a Wilcoxon signed-rank test between the learning time of RUDDER and that of each other method. RUDDER with reward redistribution is significantly faster than all other methods, with p-values < 10^{-17}.
Table A2: Number of episodes required by different methods to solve the Trace-Back task with delayed reward. The numbers represent the mean and the standard deviation over 100 trials. RUDDER with reward redistribution significantly outperforms all other methods.

Method              Delay 6                            Delay 8                            Delay 10
Look-back I         6074 ± 952 (p = 1E-22)             13112 ± 2024 (p = 1E-22)           21715 ± 4323 (p = 1E-06)
Look-back II        4584 ± 917 (p = 1E-22)             9897 ± 2083 (p = 1E-22)            15973 ± 4354 (p = 1E-06)
Look-back III       4036.48 ± 1424.99 (p = 5.28E-17)   7812.72 ± 2279.26 (p = 1.09E-23)   10982.40 ± 2971.65 (p = 1.03E-07)
Look-ahead I        14469.10 ± 1520.81 (p = 1.09E-23)  28559.32 ± 2104.91 (p = 1.09E-23)  46650.20 ± 3035.78 (p = 1.03E-07)
Look-ahead II       12623.42 ± 1075.25 (p = 1.09E-23)  24811.62 ± 1986.30 (p = 1.09E-23)  43089.00 ± 2511.18 (p = 1.03E-07)
Look-ahead III      16050.30 ± 1339.69 (p = 1.09E-23)  30732.00 ± 1871.07 (p = 1.09E-23)  50340.00 ± 2102.78 (p = 1.03E-07)
Reward Shaping I    14686.12 ± 1645.02 (p = 1.09E-23)  28223.94 ± 3012.81 (p = 1.09E-23)  46706.50 ± 3649.57 (p = 1.03E-07)
Reward Shaping II   11397.10 ± 905.59 (p = 1.09E-23)   21520.98 ± 2209.63 (p = 1.09E-23)  37033.40 ± 1632.24 (p = 1.03E-07)
Reward Shaping III  12125.48 ± 1209.59 (p = 1.09E-23)  23680.98 ± 1994.07 (p = 1.09E-23)  40828.70 ± 2748.82 (p = 1.03E-07)
Q(λ)                14719.58 ± 1728.19 (p = 1.09E-23)  28518.70 ± 2148.01 (p = 1.09E-23)  44017.20 ± 3170.08 (p = 1.03E-07)
SARSA(λ)            8681.94 ± 704.02 (p = 1.09E-23)    23790.40 ± 836.13 (p = 1.09E-23)   48157.50 ± 1378.38 (p = 1.03E-07)
RUDDER Q(λ)         726.72 ± 399.58 (p = 3.49E-04)     809.86 ± 472.27 (p = 3.49E-04)     906.13 ± 514.55 (p = 3.36E-02)
RUDDER              995.59 ± 670.31 (p = 5.00E-01)     1128.82 ± 741.29 (p = 5.00E-01)    1186.34 ± 870.02 (p = 5.00E-01)

Method              Delay 12                           Delay 15                            Delay 17
Look-back I         33082.56 ± 7641.57 (p = 1.09E-23)  49658.86 ± 8297.85 (p = 1.28E-34)   72115.16 ± 21221.78 (p = 1.09E-23)
Look-back II        23240.16 ± 9060.15 (p = 1.09E-23)  29293.94 ± 7468.94 (p = 1.28E-34)   42639.38 ± 17178.81 (p = 1.09E-23)
Look-back III       15647.40 ± 4123.20 (p = 1.09E-23)  20478.06 ± 5114.44 (p = 1.28E-34)   26946.92 ± 10360.21 (p = 1.09E-23)
Look-ahead I        66769.02 ± 4333.47 (p = 1.09E-23)  105336.74 ± 4977.84 (p = 1.28E-34)  136660.12 ± 5688.32 (p = 1.09E-23)
Look-ahead II       62220.56 ± 3139.87 (p = 1.09E-23)  100505.05 ± 4987.16 (p = 1.28E-34)  130271.88 ± 5397.61 (p = 1.09E-23)
Look-ahead III      72804.44 ± 4232.40 (p = 1.09E-23)  115616.59 ± 5648.99 (p = 1.28E-34)  149064.68 ± 7895.48 (p = 1.09E-23)
Reward Shaping I    68428.04 ± 3416.12 (p = 1.09E-23)  107399.17 ± 5242.88 (p = 1.28E-34)  137032.14 ± 6663.12 (p = 1.09E-23)
Reward Shaping II   56225.24 ± 3778.86 (p = 1.09E-23)  93091.44 ± 5233.02 (p = 1.28E-34)   122224.20 ± 5545.63 (p = 1.09E-23)
Reward Shaping III  60071.52 ± 3809.29 (p = 1.09E-23)  99476.40 ± 5607.08 (p = 1.28E-34)   130103.50 ± 6005.61 (p = 1.09E-23)
Q(λ)                66952.16 ± 4137.67 (p = 1.09E-23)  107438.36 ± 5327.95 (p = 1.28E-34)  135601.26 ± 6385.76 (p = 1.09E-23)
SARSA(λ)            78306.28 ± 1813.31 (p = 1.09E-23)  137561.92 ± 2350.84 (p = 1.28E-34)  186679.12 ± 3146.78 (p = 1.09E-23)
RUDDER Q(λ)         1065.16 ± 661.71 (p = 3.19E-01)    972.73 ± 702.92 (p = 1.13E-04)      1101.24 ± 765.76 (p = 1.54E-01)
RUDDER              1121.70 ± 884.35 (p = 5.00E-01)    1503.08 ± 1157.04 (p = 5.00E-01)    1242.88 ± 1045.15 (p = 5.00E-01)
Table A3 (cont.): Number of episodes required by different methods to solve the Trace-Back task with delayed reward. The numbers represent the mean and the standard deviation over 100 trials. RUDDER with reward redistribution significantly outperforms all other methods.

Method              Delay 20                             Delay 25
Look-back I         113873.30 ± 31879.20 (p = 1.03E-07)
Look-back II        56830.30 ± 19240.04 (p = 1.03E-07)   111693.34 ± 73891.21 (p = 1.09E-23)
Look-back III       35852.10 ± 11193.80 (p = 1.03E-07)
Look-ahead I        187486.50 ± 5142.87 (p = 1.03E-07)
Look-ahead II       181974.30 ± 5655.07 (p = 1.03E-07)   289782.08 ± 11984.94 (p = 1.09E-23)
Look-ahead III      210029.90 ± 6589.12 (p = 1.03E-07)
Reward Shaping I    189870.30 ± 7635.62 (p = 1.03E-07)   297993.28 ± 9592.30 (p = 1.09E-23)
Reward Shaping II   170455.30 ± 6004.24 (p = 1.03E-07)   274312.10 ± 8736.80 (p = 1.09E-23)
Reward Shaping III  183592.60 ± 6882.93 (p = 1.03E-07)   291810.28 ± 10114.97 (p = 1.09E-23)
Q(λ)                186874.40 ± 7961.62 (p = 1.03E-07)
SARSA(λ)            273060.70 ± 5458.42 (p = 1.03E-07)   454031.36 ± 5258.87 (p = 1.09E-23)
RUDDER I            1048.97 ± 838.26 (p = 5.00E-01)      1236.57 ± 1370.40 (p = 5.00E-01)
RUDDER II           1159.30 ± 731.46 (p = 8.60E-02)      1195.75 ± 859.34 (p = 4.48E-01)
A4.1.4 Task (IV): Charge-Discharge
The Charge-Discharge task depicted in Figure A5 is characterized by deterministic reward and state transitions. The environment consists of two states, charged C and discharged D, and two actions, charge c and discharge d. The deterministic reward is r(D, d) = 1, r(C, d) = 10, r(D, c) = 0, and r(C, c) = 0. The reward r(C, d) is accumulated for the whole episode and given only at time T + 1, where T corresponds to the maximal delay of the reward. The optimal policy alternates between charging and discharging to accumulate a reward of 10 every other time step. The smaller immediate reward of 1 distracts the agent from the larger delayed reward. The distraction forces the agent to learn the value function well enough to distinguish between the contributions of the immediate and the delayed reward to the final return.
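The reward structure can be sketched as follows (function names are our own). For T = 4 the alternating policy earns a return of 20, consistent with Figure A5, while always discharging earns only 4:

```python
# Sketch of the Charge-Discharge task: immediate r(D,d) = 1 is paid at once,
# while r(C,d) = 10 is accumulated and paid only at time T + 1.
def rollout(policy, T):
    state, immediate, delayed = "D", 0.0, 0.0
    for t in range(T):
        action = policy(state, t)
        if state == "D" and action == "d":
            immediate += 1.0
        elif state == "C" and action == "d":
            delayed += 10.0          # withheld until episode end
        state = "C" if action == "c" else "D"
    return immediate + delayed       # return including the delayed payout

alternate = lambda s, t: "c" if s == "D" else "d"   # charge, then discharge
greedy    = lambda s, t: "d"                         # chase the immediate 1

assert rollout(alternate, 4) == 20.0
assert rollout(greedy, 4) == 4.0
```

The greedy policy looks better at every single step, which is exactly why the delayed payout must be redistributed (or the value function learned precisely) to find the alternating optimum.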
Figure A5: The Charge-Discharge task with two basic states: charged C and discharged D. In each state the actions charge c (leading to the charged state C) and discharge d (leading to the discharged state D) are possible. Action d in the discharged state D leads to a small immediate reward of 1, and in the charged state C to a delayed reward of 10. After sequence end T = 4, the accumulated delayed reward r_{T+1} = r_5 is given (r_5 = 0, 10, or 20, depending on the trajectory).
For this task, the RUDDER backward analysis is based on monotonic LSTMs and on layer-wise relevance propagation (LRP). The reward redistribution provided by RUDDER uses an LSTM which consists of 5 memory cells and is trained with Adam and a learning rate of 0.01. The reward redistribution is used to learn an optimal policy by Q-learning and by MC with a learning rate of 0.1 and an exploration rate of 0.1. Again, we use sample updates for Q-learning and MC [128]. Learning is stopped either when the agent achieves 90% of the reward of the optimal policy or after a maximum of 10 million episodes. For each T and each method, 100 runs with different seeds are performed to obtain statistically relevant results. For delays with runs that did not finish within the maximum number of episodes, we estimate the parameters as described in Section A4.1.1.
A4.1.5 Task (V): Solving Trace-Back using policy gradient methods
In this experiment, we compare policy gradient methods instead of Q-learning-based methods. These methods comprise RUDDER on top of PPO, with and without GAE, and a baseline PPO using GAE. The environment and performance evaluation are the same as reported for task (III). Again, RUDDER is exponentially faster than PPO. RUDDER on top of PPO is slightly better with GAE than without.
A4.2 Atari Games
In this section we describe the implementation of RUDDER for Atari games. The implementation is largely based on the OpenAI baselines package [21] for the RL components and on our package for the LSTM reward redistribution model, which will be announced upon publication. If not specified otherwise, standard input processing, such as skipping 3 frames and stacking 4 frames, is performed by the OpenAI baselines package.

We consider the 52 Atari games that were compatible with OpenAI baselines, the Arcade Learning Environment (ALE) [11], and OpenAI Gym [18]. Games are divided into episodes, i.e. the loss of a life or the exceeding of 108k frames triggers the start of a new episode without resetting the environment. Source code will be made available upon publication.
Figure A6: Comparison of the performance of RUDDER without GAE (RUDDER) and with GAE (RUDDER+GAE), and of PPO with GAE (PPO), on artificial task (V): learning time in episodes (median of 100 trials, log scale) vs. the delay of the reward. The shaded bands indicate the 40% and 60% quantiles. Again, RUDDER significantly outperforms all other methods.
A4.2.1 Architecture
We use a modified PPO architecture and a separate reward redistribution model. While parts of the
two could be combined, this separation allows for better comparison between the PPO baseline with
and without RUDDER.
PPO architecture. The design of the policy and the value network relies on the ppo2 implementation [21], which is depicted in Figure A7 and summarized in Table A4. The network input, 4 stacked Atari game frames [82], is processed by 3 convolution layers with ReLU activation functions, followed by a fully connected layer with ReLU activation functions. For PPO with RUDDER, 2 output units for the original and the redistributed reward value function, plus another set of output units for the policy prediction, are used. For the PPO baseline without RUDDER, the output unit for the redistributed reward value function is omitted.
Reward redistribution model. The core of the reward redistribution model is an LSTM layer containing 64 memory cells with sigmoid gate activations, tanh input nonlinearities, and identity output activation functions, as illustrated in Figure A7 and summarized in Table A4. This LSTM implementation omits the output gate and the forget gate to simplify the network dynamics. Identity output activation functions were chosen to support the development of linear counting dynamics within the LSTM layer, as is required to count the reward pieces during an episode chunk. Furthermore, the input gate is only connected recurrently to other LSTM blocks, and the cell input is only connected to forward connections from the lower layer. For the vision system the same architecture is used as for the PPO network, with the first convolution layer doubled in order to process delta-frames and full frames separately. Additionally, the memory cell layer receives the vision feature activations of the PPO network, the current action, and the approximate in-game time as inputs. No gradients from the reward redistribution network are propagated over the connections to the PPO network. After the LSTM layer, the reward redistribution model has one output node for the prediction ĝ of the return realization g of the return variable G_0. The reward redistribution model has 4 additional output nodes for the auxiliary tasks described in Section A4.2.3.
Figure A7: RUDDER architecture for Atari games as described in Section A4.2.1. Left: the ppo2 implementation [21] (stacked frames, three convolution layers, dense layer). Right: LSTM reward redistribution architecture (single frame and delta-frame, four convolution layers, LSTM layer). The reward redistribution network has access to the PPO vision features (dashed lines), but no gradient is propagated between the networks. The LSTM layer receives the current action and an approximate in-game time as additional input. The PPO outputs v for the value function prediction and π for the policy prediction each represent multiple output nodes: the original and the redistributed reward value function prediction for v, and the outputs for all available actions for π. Likewise, the reward redistribution network output ĝ represents multiple outputs, as described in Section A4.2.3. Details on the layer configuration are given in Table A4.
Conv. Layer 0: 32 features, 8×8 kernel, stride 4×4, ReLU, orthogonal initialization with gain √2
Conv. Layer 1: 64 features, 4×4 kernel, stride 2×2, ReLU, orthogonal initialization with gain √2
Conv. Layer 2: 64 features, 3×3 kernel, stride 1×1, ReLU, orthogonal initialization with gain √2
Dense Layer:   512 features, ReLU, orthogonal initialization with gain √2
Conv. Layer 3: 32 features, 8×8 kernel, stride 4×4, ReLU, orthogonal initialization with gain 0.1
Conv. Layer 4: 32 features, 8×8 kernel, stride 4×4, ReLU, orthogonal initialization with gain 0.1
Conv. Layer 5: 64 features, 4×4 kernel, stride 2×2, ReLU, orthogonal initialization with gain 0.1
Conv. Layer 6: 64 features, 3×3 kernel, stride 1×1, ReLU, orthogonal initialization with gain 0.1
LSTM Layer:    64 cells; gate activations sigmoid; cell-input (ci) activation tanh; output activation linear; input-gate (ig) bias truncated normal with mean 5; ci bias truncated normal with mean 0; ci forward weights truncated normal with scale 0.0001; ig forward weights omitted; ci recurrent weights omitted; ig recurrent weights truncated normal with scale 0.001; output gate (og) and forget gate (fg) omitted.

Table A4: Specifications of the PPO and RUDDER architectures shown in Figure A7. Truncated normal initialization has the default values mean = 0 and stddev = 1, optionally multiplied by a factor "scale".
A4.2.2 Lessons Replay Buffer
The lessons replay buffer is realized as a priority-based buffer containing up to 128samples. New
samples are added to the buffer if (i) the buffer is not filled or if (ii) the new sample is considered
more important than the least important sample in the buffer, in which case the new sample replaces
the least important sample.
Importance of samples for the buffer is determined based on a combined ranking of (i) the reward
redistribution model error and (ii) the difference of the sample return to the mean return of all samples
in the lessons buffer. Each of these two rankings contributes equally to the final ranking of the sample.
Samples with higher loss and greater difference to the mean return achieve a higher ranking.
Sampling from the lessons buffer is performed by sampling from a softmax over the sample losses
in the buffer. Each sample is a sequence of 512 consecutive transitions, as described in the last
paragraph of Section A4.2.3.
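As a concrete sketch, the buffer logic described above could look as follows (illustrative Python; the class and method names are our own, ties in the rank-based importance are broken arbitrarily, and eviction is implemented by tentatively adding the candidate and then removing the least important entry, which matches "replace only if more important"):

```python
import numpy as np

class LessonsBuffer:
    """Sketch of the priority-based lessons replay buffer (capacity 128)."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self.samples, self.losses, self.returns = [], [], []

    def _importance(self):
        # Rank-based importance: higher redistribution loss and larger distance
        # of the return to the buffer mean both rank higher, weighted equally.
        losses = np.asarray(self.losses)
        dist = np.abs(np.asarray(self.returns) - np.mean(self.returns))
        loss_rank = np.argsort(np.argsort(losses))
        dist_rank = np.argsort(np.argsort(dist))
        return loss_rank + dist_rank

    def add(self, sample, loss, ret):
        self.samples.append(sample)
        self.losses.append(float(loss))
        self.returns.append(float(ret))
        if len(self.samples) > self.capacity:
            # Evict the least important entry; if the new sample itself ranks
            # lowest, it is the one evicted.
            worst = int(np.argmin(self._importance()))
            for lst in (self.samples, self.losses, self.returns):
                del lst[worst]

    def sample(self, n=8, rng=np.random):
        # Softmax over the sample losses in the buffer.
        losses = np.asarray(self.losses, dtype=float)
        p = np.exp(losses - losses.max())
        p /= p.sum()
        idx = rng.choice(len(self.samples), size=n, p=p, replace=True)
        return [self.samples[int(i)] for i in idx]
```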
A4.2.3 Game Processing, Update Design, and Target Design
Reward redistribution is performed in an online fashion as new transitions are sampled from the
environment. This makes it possible to keep the original update schema of the PPO baseline, while still
using the redistributed reward for the PPO updates. Training of the reward redistribution model is done
separately on the lessons buffer samples from Section A4.2.2. These processes are described in more
detail in the following paragraphs.
Reward Scaling. As described in the main paper, rewards for the PPO baseline and RUDDER are
scaled based on the maximum return per sample encountered during training so far. With i samples
sampled from the environment and a maximum return of g^{max}_i = \max_{1 \le j \le i} |g_j| encountered, the
scaled reward r_{new} is

r_{new} = \frac{10 \, r}{g^{max}_i} . (A266)

The goal of this scaling is to normalize the reward r to the range [-10, 10] with a linear scaling, suitable for
training the PPO and reward redistribution models. Since the scaling is linear, the original proportions
between rewards are kept. The downside of this approach is that whenever a new maximum return is
encountered, the scaling factor is updated and the models have to readjust.
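The scaling of Eq. (A266) can be sketched as a small helper that tracks the running maximum (illustrative Python; the class name and the epsilon guard against division by zero are our own):

```python
class RewardScaler:
    """Scale rewards linearly into [-10, 10] based on the largest absolute
    return seen so far (Eq. A266). Proportions between rewards are kept."""

    def __init__(self):
        self.g_max = 1e-8  # placeholder until the first return is observed

    def update(self, episode_return):
        # When a new maximum return appears, the scaling factor changes and
        # the models have to readjust, as noted in the text.
        self.g_max = max(self.g_max, abs(episode_return))

    def scale(self, r):
        return 10.0 * r / self.g_max
```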
Reward redistribution. Reward redistribution is performed using differences of return predictions
of the LSTM network. That is, the difference of the reward redistribution model's predictions \hat{g} at time
steps t and t-1 serves as contribution analysis and thereby gives the redistributed reward r_t = \hat{g}_t - \hat{g}_{t-1}.
This allows for online reward redistribution on the sampled transitions before they are used to train
the PPO network, without waiting for the game sequences to be completed.
To assess the current quality of the reward redistribution model, a quality measure based on the
relative absolute error of the prediction \hat{g}_T at the last time step T is introduced:

quality = 1 - \frac{|g - \hat{g}_T|}{\mu} \cdot \frac{1}{1 - \beta} , (A267)

with \beta as quality threshold of \beta = 80% and the maximum possible error \mu = 10 due to the
reward scaling applied. quality is furthermore clipped to lie within the range [0, 1].
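Both operations can be sketched as follows (illustrative Python; the names beta and mu for the quality threshold and the maximum possible error follow our reading of the garbled Eq. (A267), so treat the exact formula as a reconstruction):

```python
import numpy as np

def redistribute(g_hat):
    """Redistributed reward r_t = g_hat[t] - g_hat[t-1], i.e. differences of
    the LSTM return predictions, with the prediction before step 0 taken as 0."""
    g_hat = np.asarray(g_hat, dtype=float)
    return np.diff(g_hat, prepend=0.0)

def quality(g, g_hat_T, beta=0.8, mu=10.0):
    """Quality measure of Eq. (A267), clipped to [0, 1]: relative absolute
    error of the final prediction, rescaled by the quality threshold beta."""
    q = 1.0 - (abs(g - g_hat_T) / mu) * (1.0 / (1.0 - beta))
    return float(np.clip(q, 0.0, 1.0))
```

Note that the redistributed rewards telescope: their sum equals the final prediction, so a perfect model redistributes exactly the return.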
PPO model. The ppo2 implementation [21] samples from the environment using multiple agents in
parallel. These agents play individual environments but share all weights, i.e., they are distinguished
by random effects in the environment or by exploration. The value function and policy network is
trained online on a batch of transitions sampled from the environment. Originally, the policy/value
function network updates are adjusted using a policy loss, a value function loss, and an entropy term,
each with dedicated scaling factors [ 115]. To decrease the number of hyperparameters, the entropy
term scaling factor is adjusted automatically using Proportional Control to keep the policy entropy in
a predefined range.
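The proportional control of the entropy coefficient can be sketched as follows (illustrative Python; the gain and the coefficient bounds are our own choices, not taken from the paper):

```python
class EntropyCoefController:
    """Proportional controller for the entropy-bonus coefficient: raise the
    coefficient while the measured policy entropy is below the target, and
    lower it while the entropy is above the target."""

    def __init__(self, coef=0.01, gain=0.1, lo=0.0, hi=1.0):
        self.coef, self.gain, self.lo, self.hi = coef, gain, lo, hi

    def step(self, measured_entropy, target_entropy):
        # Proportional update on the entropy error, clipped to [lo, hi].
        self.coef += self.gain * (target_entropy - measured_entropy)
        self.coef = min(max(self.coef, self.lo), self.hi)
        return self.coef
```

In the paper's setup the target entropy is linearly annealed from 1 to 0 over the course of training.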
We use two value function output units to predict the value functions of the original and the re-
distributed reward. For the PPO baseline without RUDDER, the output unit for the redistributed
reward is omitted. Analogous to the ppo2 implementation, these two value function predictions
serve to compute the advantages used to scale the policy gradient updates. For this, the ad-
vantages for original reward aoand redistributed reward arare combined as a weighted sum
a=ao(1 qualityv ) +arquality . The PPO value function loss term Lvis replaced by
the sum of the value function volossLofor the original reward and the scaled value function vrloss
60
Lrfor the redistributed reward, such that Lv=Lo+Lrquality . Parameter values were taken from
the original paper [ 115] and implementation [ 21]. Additionally, a coarse hyperparameter search was
performed with value function coefficients f0:1;1;10gand replacing the static entropy coefficient by
a Proportional Control scaling of the entropy coefficient. The Proportional Control target entropy
was linearly decreased from 1to0over the course of training. PPO baseline hyperparamters were
used for PPO with RUDDER without changes.
Parameter values are listed in Table A5.
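The quality-weighted blending of the two reward signals, as we read it from the text above, can be sketched as (illustrative Python; function names are our own):

```python
def mixed_advantage(a_orig, a_redist, q):
    """a = a_o * (1 - quality) + a_r * quality: trust the redistributed
    advantages only to the degree the redistribution model is accurate."""
    return a_orig * (1.0 - q) + a_redist * q

def value_loss(l_orig, l_redist, q):
    """L_v = L_o + L_r * quality: the redistributed value head contributes
    to the loss in proportion to the redistribution quality."""
    return l_orig + l_redist * q
```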
Reward redistribution model. The loss of the reward redistribution model for a sample is com-
posed of four parts: (i) the main loss L_m, which is the squared prediction loss of g at the last time
step T of the episode,

L_{\mathrm{m}} = \left( g - \hat{g}_T \right)^2 ; (A268)

(ii) the continuous prediction loss L_c of g at each time step,

L_{\mathrm{c}} = \frac{1}{T+1} \sum_{t=0}^{T} \left( g - \hat{g}_t \right)^2 ; (A269)

(iii) the loss L_e of the prediction of the output at t+10 at each time step t,

L_{\mathrm{e}} = \frac{1}{T-9} \sum_{t=0}^{T-10} \left( \hat{g}_{t+10} - \left( \hat{g}_{t+10} \right)_t \right)^2 , (A270)

where (\cdot)_t denotes the prediction made at time step t; as well as (iv) the loss on 3 auxiliary tasks.
At every time step t, these auxiliary tasks are (1) the prediction of the action-value function \hat{q}_t,
(2) the prediction of the accumulated original reward \tilde{r} in the next 10 frames, \sum_{i=t}^{t+10} \tilde{r}_i,
and (3) the prediction of the accumulated reward in the next 50 frames, \sum_{i=t}^{t+50} \tilde{r}_i,
resulting in the final auxiliary loss L_a as

L_{\mathrm{a}1} = \frac{1}{T+1} \sum_{t=0}^{T} \left( q_t - \hat{q}_t \right)^2 , (A271)

L_{\mathrm{a}2} = \frac{1}{T-9} \sum_{t=0}^{T-10} \left( \sum_{i=t}^{t+10} \tilde{r}_i - \left( \sum_{i=t}^{t+10} \tilde{r}_i \right)_t \right)^2 , (A272)

L_{\mathrm{a}3} = \frac{1}{T-49} \sum_{t=0}^{T-50} \left( \sum_{i=t}^{t+50} \tilde{r}_i - \left( \sum_{i=t}^{t+50} \tilde{r}_i \right)_t \right)^2 , (A273)

L_{\mathrm{a}} = \frac{1}{3} \left( L_{\mathrm{a}1} + L_{\mathrm{a}2} + L_{\mathrm{a}3} \right) . (A274)

The final loss for the reward redistribution model is then computed as

L = L_{\mathrm{m}} + \frac{1}{10} \left( L_{\mathrm{c}} + L_{\mathrm{e}} + L_{\mathrm{a}} \right) . (A275)
The continuous prediction and earlier prediction losses LcandLepush the reward redistribution
model toward performing an optimal reward redistribution. This is because important events that are
redundantly encoded in later states are stored as early as possible. Furthermore, the auxiliary loss La
speeds up learning by adding more information about the original immediate rewards to the updates.
The reward redistribution model is only trained on the lessons buffer. Training epochs on the lessons
buffer are performed every 10^4 PPO updates or whenever a new sample is added to the lessons buffer.
For each such training epoch, 8 samples are sampled from the lessons buffer. Training epochs are
repeated until the reward redistribution quality is sufficient (quality > 0) for all replayed samples
in the last 5 training epochs.
The reward redistribution model is not trained or used until the lessons buffer contains at least 32
samples and samples with different return have been encountered.
Parameter values are listed in Table A5.
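The combined loss of Eqs. (A268)-(A275) can be sketched per sequence as follows (illustrative Python; the argument layout, in particular that the auxiliary targets and predictions are passed as already-truncated arrays, is our own simplification):

```python
import numpy as np

def redistribution_loss(g, g_hat, g_hat_ahead10, q, q_hat,
                        r10, r10_hat, r50, r50_hat):
    """Combined loss L = L_m + (L_c + L_e + L_a)/10 on one sequence.
    g_hat_ahead10[t] is the prediction made at t of g_hat[t+10]; r10/r50 are
    the true accumulated rewards over the next 10/50 frames and *_hat their
    predictions, each already restricted to the valid time steps."""
    g_hat = np.asarray(g_hat, dtype=float)
    L_m = (g - g_hat[-1]) ** 2                                           # (A268)
    L_c = np.mean((g - g_hat) ** 2)                                      # (A269)
    L_e = np.mean((g_hat[10:] - np.asarray(g_hat_ahead10)[:-10]) ** 2)   # (A270)
    L_a = (np.mean((np.asarray(q) - np.asarray(q_hat)) ** 2)             # (A271)
           + np.mean((np.asarray(r10) - np.asarray(r10_hat)) ** 2)       # (A272)
           + np.mean((np.asarray(r50) - np.asarray(r50_hat)) ** 2)       # (A273)
           ) / 3.0                                                       # (A274)
    return L_m + (L_c + L_e + L_a) / 10.0                                # (A275)
```

A perfectly accurate model drives every term, and hence the total loss, to zero.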
PPO: learning rate 2.5 * 10^-4, policy coefficient 1.0, initial entropy coefficient 0.01, value function coefficient 1.0.
RUDDER: learning rate 10^-4, L2 weight decay 10^-7, gradient clipping 0.5, optimization ADAM.
Table A5: Left: update parameters for the PPO model. The entropy coefficient is scaled via Proportional
Control, with the target entropy linearly annealed from 1 to 0 over the course of learning. Unless
stated otherwise, default parameters of the ppo2 implementation [21] are used. Right: update parameters
for the reward redistribution model of RUDDER.
Sequence chunking and Truncated Backpropagation Through Time (TBPTT). Ideally, RUDDER
would be trained on completed game sequences, so that reward is redistributed within a completed
game. To shorten the computational time for learning the reward redistribution model, the model is
not trained on completed game sequences but on sequence chunks consisting of 512 time steps. The
beginning of such a chunk is treated as the beginning of a new episode for the model, and ends
of episodes within this chunk reset the state of the LSTM, so as to not redistribute rewards between
episodes. To allow for updates on sequence chunks even if the game sequence is not completed, the
PPO value function prediction is used to estimate the expected future reward at the end of the chunk.
Utilizing TBPTT to further speed up LSTM learning, gradients for the reward redistribution LSTM
are cut after every 128 time steps.
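The chunking step can be sketched as follows (illustrative Python; transitions are modeled as dicts with a "done" flag, and the 128-step gradient truncation would happen inside a framework-specific LSTM update, which is not shown):

```python
def chunk_transitions(transitions, chunk_len=512):
    """Split a stream of transitions into fixed-length training chunks.
    Each chunk start is treated as an episode start for the LSTM; if a chunk
    ends mid-episode, the PPO value estimate stands in for the remaining
    return (indicated here by needs_bootstrap)."""
    chunks = []
    for start in range(0, len(transitions), chunk_len):
        chunk = transitions[start:start + chunk_len]
        needs_bootstrap = not chunk[-1]["done"]
        chunks.append((chunk, needs_bootstrap))
    return chunks
```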
A4.2.4 Exploration
Safe exploration to increase the likelihood of observing delayed rewards is an important feature of
RUDDER. We use a safe exploration strategy, which is realized by normalizing the output of the
policy network to the range [0, 1] and randomly picking one of the actions that lie above a threshold.
Safe exploration is activated once per sequence at a random sequence position for a random
duration between 0 and the average game length l. Thereby we encourage long but safe off-policy
trajectories within parts of the game sequences. Only 2 of the 8 parallel actors use safe exploration,
with thresholds of 0.001 and 0.5, respectively. All actors sample from the softmax policy output.
To avoid policy lag during safe exploration transitions, we use those transitions only to update the
reward redistribution model but not the PPO model.
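The threshold rule can be sketched as follows (illustrative Python; the min-max normalization and the argmax fallback when no action clears the threshold are our own assumptions about details the text leaves open):

```python
import numpy as np

def safe_explore_action(policy_out, threshold, rng):
    """Normalize the policy output to [0, 1] and pick uniformly among the
    actions above the threshold; fall back to the best action if none is."""
    p = np.asarray(policy_out, dtype=float)
    p = (p - p.min()) / (p.max() - p.min() + 1e-12)  # normalize to [0, 1]
    candidates = np.flatnonzero(p >= threshold)
    if candidates.size == 0:
        return int(np.argmax(p))
    return int(rng.choice(candidates))
```

A low threshold makes the exploration nearly uniform over plausible actions, while a high threshold keeps it close to the greedy policy, which is what makes the off-policy trajectories "safe".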
A4.2.5 Results
Training curves for 3 random seeds for PPO baseline and PPO with RUDDER are shown in Figure A8
and scores are listed in Table A6 for all 52 Atari games. Training was conducted over 200M game
frames (including skipped frames), as described in the experiments section of the main paper.
We investigated failures and successes of RUDDER in different Atari games. RUDDER failures were
observed to be mostly due to LSTM failures and comprise e.g. slow learning in Breakout, explaining
away in Double Dunk, spurious redistributed rewards in Hero, overfitting to the first levels in Qbert,
and exploration problems in MontezumaRevenge. RUDDER successes were observed to be mostly
due to redistributing rewards to important key actions that would otherwise not receive reward, such
as moving towards the built igloo in Frostbite, diving up for refilling oxygen in Seaquest, moving
towards the treasure chest in Venture, and shooting at the shield of the enemy boss UFO, thereby
removing its shield.
Figure A8: Training curves for PPO baseline and PPO with RUDDER over 200M game frames, 3
runs with different random seeds each. Curves show scores during training of a single agent that does
not use safe exploration, smoothed using Locally Weighted Scatterplot Smoothing (y-value estimate
using 20% of data with 10 residual-based re-weightings).
average final
baseline RUDDER % baseline RUDDER %
Alien 1,878 3,087 64.4 3,218 5,703 77.3
Amidar 787 724 -8.0 1,242 1,054 -15.1
Assault 5,788 4,242 -26.7 10,373 11,305 9.0
Asterix 10,554 18,054 71.1 29,513 102,930 249
Asteroids 22,065 4,905 -77.8 310,505 154,479 -50.2
Atlantis 1,399,753 1,655,464 18.3 3,568,513 3,641,583 2.0
BankHeist 936 1,194 27.5 1,078 1,335 23.8
BattleZone 12,870 17,023 32.3 24,667 28,067 13.8
BeamRider 2,372 4,506 89.9 3,994 6,742 68.8
Berzerk 1,261 1,341 6.4 1,930 2,092 8.4
Bowling 61.5 179 191 56.3 192 241
Boxing 98.0 94.7 -3.4 100 99.5 -0.5
Breakout 217 153 -29.5 430 352 -18.1
Centipede 25,162 23,029 -8.5 53,000 36,383 -31.4
ChopperCommand 6,183 5,244 -15.2 10,817 9,573 -11.5
CrazyClimber 125,249 106,076 -15.3 140,080 132,480 -5.4
DemonAttack 28,684 46,119 60.8 464,151 400,370 -13.7
DoubleDunk -9.2 -13.1 -41.7 -0.3 -5.1 -1,825
Enduro 759 777 2.5 2,201 1,339 -39.2
FishingDerby 19.5 11.7 -39.9 52.0 36.3 -30.3
Freeway 26.7 25.4 -4.8 32.0 31.4 -1.9
Frostbite 3,172 4,770 50.4 5,092 7,439 46.1
Gopher 8,126 4,090 -49.7 102,916 23,367 -77.3
Gravitar 1,204 1,415 17.5 1,838 2,233 21.5
Hero 22,746 12,162 -46.5 32,383 15,068 -53.5
IceHockey -3.1 -1.9 39.4 -1.4 1.0 171
Kangaroo 2,755 9,764 254 5,360 13,500 152
Krull 9,029 8,027 -11.1 10,368 8,202 -20.9
KungFuMaster 49,377 51,984 5.3 66,883 78,460 17.3
MontezumaRevenge 0.0 0.0 38.4 0.0 0.0 0.0
MsPacman 4,096 5,005 22.2 6,446 6,984 8.3
NameThisGame 8,390 10,545 25.7 10,962 17,242 57.3
Phoenix 15,013 39,247 161 46,758 190,123 307
Pitfall -8.4 -5.5 34.0 -75.0 0.0 100
Pong 19.2 18.5 -3.9 21.0 21.0 0.0
PrivateEye 102 34.1 -66.4 100 33.3 -66.7
Qbert 12,522 8,290 -33.8 28,763 16,631 -42.2
RoadRunner 20,314 27,992 37.8 35,353 36,717 3.9
Robotank 24.9 32.7 31.3 32.2 47.3 46.9
Seaquest 1,105 2,462 123 1,616 4,770 195
Skiing -29,501 -29,911 -1.4 -29,977 -29,978 0.0
Solaris 1,393 1,918 37.7 616 1,827 197
SpaceInvaders 778 1,106 42.1 1,281 1,860 45.2
StarGunner 6,346 29,016 357 18,380 62,593 241
Tennis -13.5 -13.5 0.2 -4.0 -5.3 -32.8
TimePilot 3,790 4,208 11.0 4,533 5,563 22.7
Tutankham 123 151 22.7 140 163 16.3
Venture 738 885 20.1 820 1,350 64.6
VideoPinball 19,738 19,196 -2.7 15,248 16,836 10.4
WizardOfWor 3,861 3,024 -21.7 6,480 5,950 -8.2
YarsRevenge 46,707 60,577 29.7 109,083 178,438 63.6
Zaxxon 6,900 7,498 8.7 12,120 10,613 -12.4
Table A6: Scores on all 52 considered Atari games for the PPO baseline and PPO with RUDDER
and the improvement by using RUDDER in percent (%). Agents are trained for 200M game frames
(including skipped frames) with no-op starting condition, i.e., a random number of up to 30 no-
operation actions at the start of each game. Episodes are prematurely terminated if a maximum
of 108K frames is reached. Scoring metrics are (a) average , the average reward per completed
game throughout training, which favors fast learning [ 115] and (b) final, the average over the last 10
consecutive games at the end of training, which favors consistency in learning. Scores are shown for
one agent without safe exploration.
Visual Confirmation of Detecting Relevant Events by Reward Redistribution. We visually con-
firm a meaningful and helpful redistribution of reward in both Bowling and Venture during training.
As illustrated in Figure A9, RUDDER is capable of redistributing a reward to key events in a game,
drastically shortening the delay of the reward and quickly steering the agent toward good policies.
Furthermore, it enriches sequences that were sparse in reward with a dense reward signal. Video
demonstrations are available at https://goo.gl/EQerZV .
Figure A9: Observed return decomposition by RUDDER in two Atari games with long delayed
rewards. Left: In the game Bowling, reward is only given after a turn which consist of multiple rolls.
RUDDER identifies the actions that guide the ball in the right direction to hit all pins. Once the ball
has hit the pins, RUDDER detects the delayed reward associated with striking the pins down. In the
figure only 100 frames are shown, but the whole turn spans more than 200 frames. In the original
game, the reward is given only at the end of the turn. Right: In the game Venture, reward is only
obtained after picking the treasure. RUDDER guides the agent (red) towards the treasure (golden)
via reward redistribution. Reward is redistributed to entering a room with treasure. Furthermore,
the redistributed reward gradually increases as the agent approaches the treasure. For illustration
purposes, the green curve shows the return redistribution before applying lambda. The environment
only gives reward at the event of collecting treasure (blue).
A5 Discussion and Frequent Questions
RUDDER and reward rescaling. RUDDER works with no rescaling, with various rescalings, and
with the sign function, as we have confirmed in additional experiments. Rescaling ensures similar reward
magnitudes across different Atari games, so the same hyperparameters can be used for all
games. For LSTM and PPO, we only scale the original return by a constant factor, and therefore
neither change the problem nor simplify it. The sign function, in contrast, may simplify the problem
but may change the optimal policy.
RUDDER for infinite horizon: Continual Learning. RUDDER assumes a finite horizon problem.
For games and for most tasks in the real world this assumption applies: did you solve the task? (make
a tax declaration, convince a customer to buy, design a drug, drive a car to a location, assemble a car,
build a building, clean the room, cook a meal, pass the Turing test). In general our approach can be
extended to continual learning with discounted reward. Only the transformation of an immediate-
reward MDP into an MDP with episodic reward is then no longer possible. However, the delayed
reward problem becomes more obvious, and also more serious, when the reward is not discounted.
Is the LSTM in RUDDER a state-action value function? For reward redistribution we assume
an MDP with one reward (= return) at sequence end, which can be predicted from the last state-action
pair. When introducing the Δ-states, the reward cannot be predicted from the last Δ, and the task is
no longer Markov. However, the return can be predicted from the sequence of Δs. Since the Δs are
mutually independent, the contribution of each Δ to the return must be stored in the hidden states of
the LSTM to predict the final reward. The Δ can be generic, as states and actions can be numbered
and the difference of these numbers used for Δ.
In applications like Atari with immediate rewards, we give the accumulated reward at the end of
the episode without enriching the states. This has a similar effect as using Δ: we force the LSTM to
build up an internal state which tracks the already accumulated reward.
True, the LSTM is the value function at time t based on the sub-sequence up to t. The LSTM
prediction can be decomposed into two sub-predictions. The first sub-prediction is the contribution of
the already known sub-sequence up to t to the return (backward view). The second sub-prediction
is the expected contribution of the unknown future sequence from t+1 onwards to the return (forward
view). However, we are not interested in the second sub-prediction, but only in the contribution of
time step t to the prediction of the expected return. The second sub-prediction is irrelevant for our
approach. We cancel the second sub-prediction via the differences of predictions: the difference at
time t gives the contribution of time step t to the expected return.
Empirical confirmation: Four years ago, we started this research project with using LSTM as a value
function, but we failed. This was the starting point for RUDDER. In the submission, we used LSTM
predictions in artificial task (IV) as a potential function for reward shaping, look-ahead advice, and
look-back advice. Furthermore, we investigated LSTM as a value function for artificial task (II), but
these results were not included. At the time when RUDDER had already solved the task, the LSTM
error was too large to allow learning via a value function. The problem is the large variance of the returns
at the beginning of the sequence, which hinders LSTM learning (forward view). RUDDER LSTM
learning was initiated by propagating back prediction errors at the sequence end, where the variance
of the return is lower (backward view). These late predictions initiated the storing of key events at the
sequence beginning, even with high prediction errors. The redistributed reward at the key events led
RUDDER to solve the task. Concluding: at the time RUDDER solved the task, the early predictions
were not yet learned due to the high variance of the returns; therefore using the predictions as a value
function does not help (forward view).
Example: The agent has to take a key to open a door. Since it is an MDP, the agent is always aware
of having the key, indicated by a key bit being on. The reward can be predicted in the last step. Using
differences Δ, the key bit is zero except for the step where the agent takes the key. Thus, the LSTM
has to store this event and will transfer reward to it.
Compensation reward. The compensation corrects for prediction errors of g (g is the sum of the h
contributions). The prediction error of g can have two sources: (1) the probabilistic nature of the reward,
and (2) an approximation error of g for the expected reward. We aim to make (2) small; then the correction
is only for the probabilistic nature of the reward. The compensation error depends on g, which, in turn,
depends on the whole sequence. The dependency on the state-action pairs from t = 0 to T-1 is viewed
as a random effect; therefore the compensation reward only depends on the last state-action pair.
That h_t and R_{t+1} depend only on (s_t, a_t, s_{t-1}, a_{t-1}) is important for proving Theorem 3. Then a_{t-1}
cancels and the advantage function remains the same.
Connection between theory and algorithms. Theorem 1 and Theorem 2 ensure that the algorithms are
correct, since the optimal policies do not change even for non-optimal return decompositions. In
contrast to TD methods, which are biased, Theorem 3 shows that the update rule for Q-value estimation is
unbiased when assuming an optimal decomposition. Theorem 4 explicitly derives optimality conditions
for the expected sum of delayed rewards κ and measures the distance to optimality. This
κ is used for learning and is explicitly estimated to correct learning if an optimal decomposition
cannot be assured. The theorems justify the following learning methods (A) and (B):
(A) Q-value estimation: (i) direct Q-value estimation (not Q-learning) according to Theorem 3, as
given in Eq. (9), when an optimal decomposition is assumed; (ii) Q-value estimation with correction
by κ according to Theorem 4, when an optimal decomposition is not assumed, where κ is learned
by TD as given in Eq. (10); (iii) Q-value estimation using eligibility traces. (B) Policy gradient:
the theorems are used as for Q-value estimation in (A), but now the Q-values serve for policy gradient.
(C) Q-learning: here the properties in Theorem 3 and Theorem 4 are ignored.
We also show variants (not in the main paper) on pages 31 and 32 of using κ: "Correction of the
reward redistribution" by reward shaping with κ, and "Using κ as auxiliary task in predicting
the return for return decomposition".
Optimal Return Decomposition, contributions and policy. The Q-value q^π depends on a partic-
ular policy π. The function h depends on the policy since h predicts the expected return E_π[\tilde{R}_{T+1}],
which depends on π. Thus, both return decomposition and optimal return decomposition are defined
for a particular policy π. A reward redistribution from a return decomposition leads to a return-equiv-
alent MDP. Return-equivalent MDPs are defined via all policies, even if the reward redistribution was
derived from a particular policy. A reward redistribution depends only on the state-action sequence
but not on the policy that generated this sequence; likewise, Δ does not depend on a policy.
Optimal policies are preserved for every state. We assume that all states are reachable, i.e., there is
at least one non-zero transition probability to each state, and that policies have a non-zero probability for
each action due to exploration. For an MDP, being optimal in the initial state is the same as being optimal
in every reachable state. This follows from recursively applying the Bellman optimality equation to the
initial value function: the values of the following states must be optimal, otherwise the initial value
function would be smaller. Only for states to which the transition probability is zero does the Bellman
optimality equation not determine optimality.
All RL algorithms are suitable. For example, we applied TD, Monte Carlo, and policy gradient, which all
work faster with the new MDP.
Limitations. In all of the experiments reported in this manuscript, we show that RUDDER signifi-
cantly outperforms other methods for delayed-reward problems. However, RUDDER might not be
effective when the reward is not delayed, since LSTM learning takes extra time and has problems with
very long sequences. Furthermore, reward redistribution may introduce disturbing spurious reward
signals.
A6 Additional Related Work
Delayed Reward. To learn delayed rewards there are three phases to consider: (i) discovering
the delayed reward, (ii) keeping information about the delayed reward, (iii) learning to receive the
delayed reward to secure it for the future. Recent successful reinforcement learning methods provide
solutions to one or more of these phases. Most prominent are Deep Q-Networks (DQNs) [ 81,82],
which combine Q-learning with convolutional neural networks for visual reinforcement learning
[69]. The success of DQNs is attributed to experience replay [74], which stores observed state-
reward transitions and then samples from them. Prioritized experience replay [ 109,58] advanced the
sampling from the replay memory. Different policies perform exploration in parallel for the Ape-X
DQN and share a prioritized experience replay memory [ 58]. DQN was extended to double DQN
(DDQN) [ 134,135] which helps exploration as the overestimation bias is reduced. Noisy DQNs
[26] explore by a stochastic layer in the policy network (see [ 48,110]). Distributional Q-learning
[10] profits from noise since means that have high variance are more likely selected. The dueling
network architecture [ 138,139] separately estimates state values and action advantages, which helps
exploration in unknown states. Policy gradient approaches [ 145] explore via parallel policies, too.
A2C has been improved by IMPALA through parallel actors and correction for policy-lags between
actors and learners [ 24]. A3C with asynchronous gradient descent [ 80] and Ape-X DPG [ 58] also
rely on parallel policies. Proximal policy optimization (PPO) extends A3C by a surrogate objective
and a trust region optimization that is realized by clipping or a Kullback-Leibler penalty [115].
Recent approaches aim to solve learning problems caused by delayed rewards. Function approxi-
mations of value functions or critics [ 82,80] bridge time intervals if states associated with rewards
are similar to states that were encountered many steps earlier. For example, assume a function that
has learned to predict a large reward at the end of an episode if a state has a particular feature. The
function can generalize this correlation to the beginning of an episode and predict already high reward
for states possessing the same feature. Multi-step temporal difference (TD) learning [ 127,128]
improved both DQNs and policy gradients [ 47,80]. AlphaGo and AlphaZero learned to play Go and
Chess better than human professionals using Monte Carlo Tree Search (MCTS) [ 116,117]. MCTS
simulates games from a time point until the end of the game or an evaluation point and therefore
captures long delayed rewards. Recently, world models using an evolution strategy were successful
[42]. These forward view approaches are not feasible in probabilistic environments with a high
branching factor of state transition.
Backward View. We propose learning from a backward view, which either learns a separate model
or analyzes a forward model. Examples of learning a separate model are to trace back from known
goal states [ 23] or from high reward states [ 36]. However, learning a backward model is very
challenging. When analyzing a forward model that predicts the return then either sensitivity analysis
or contribution analysis may be utilized. The best known backward view approach is sensitivity
analysis (computing the gradient), like "Backpropagation through a Model" [86, 101, 102, 142, 5].
Sensitivity analysis has several drawbacks: local minima, instabilities, exploding or vanishing
gradients, and proper exploration [ 48,110]. The major drawback is that the relevance of actions is
missed since sensitivity analysis does not consider their contribution to the output but only their effect
on the output when slightly perturbing them.
We use contribution analysis since sensitivity analysis has serious drawbacks. Contribution analysis
determines how much a state-action pair contributes to the final prediction. To focus on state-
actions which are most relevant for learning is known from prioritized sweeping for model-based
reinforcement learning [ 85]. Contribution analysis can be done by computing differences of return
predictions when adding another input, by zeroing out an input and then compute the change in
the prediction, by contribution-propagation [ 71], by a contribution approach [ 94], by excitation
backprop [ 147], by layer-wise relevance propagation (LRP) [ 3], by Taylor decomposition [ 3,83], or
by integrated gradients (IG) [125].
LSTM. LSTM was already used in reinforcement learning [ 112] for advantage learning [ 4], for
constructing a potential function for reward shaping by representing the return by a sum of LSTM
outputs across an episode [124], and learning policies [44, 80, 45].
Reward Shaping, Look-Ahead Advice, Look-Back Advice. Redistributing the reward is funda-
mentally different from reward shaping [ 87,143], look-ahead advice and look-back advice [ 144].
However, these methods can be viewed as a special case of reward redistribution that result in an
MDP that is return-equivalent to the original MDP as is shown in Section A2.2. On the other hand
every reward function can be expressed as look-ahead advice [ 43]. In contrast to these methods,
reward redistribution is not limited to potential functions, where the additional reward is the potential
difference, therefore it is a more general concept than shaping reward or look-ahead/look-back advice.
The major difference of reward redistribution to reward shaping, look-ahead advice, and look-back
advice is that the last three keep the original rewards. Both look-ahead advice and look-back advice
have not been designed to replace the original rewards. Since the original reward is kept, the
reward redistribution is not optimal according to Section A2.6.1. The original rewards may have long
delays that cause an exponential slow-down of learning. The added reward improves sampling but a
delayed original reward must still be transferred to the Q-values of early states that caused the reward.
The concept of return-equivalence of SDPs resulting from reward redistributions makes it possible to eliminate the original reward completely. Reward shaping can replace the original reward. However, it only
depends on states but not on actions, and therefore, it cannot identify relevant actions without the
original reward.
A7 Markov Decision Processes with Undiscounted Rewards
We focus on Markov Decision Processes (MDPs) with undiscounted rewards, since both the relevance and the problems of a delayed reward can be considerably decreased by discounting it. With discounted rewards, both the bias correction in TD and the variance of MC are greatly reduced. The correction amount decreases exponentially with the number of delay steps, and the variance contribution to one state also decreases exponentially with the delay of the reward.

MDPs with undiscounted rewards either have a finite time horizon or possess absorbing states without reward. The former can always be described by the latter.
A7.1 Properties of the Bellman Operator in MDPs with Undiscounted Rewards
At each time t the environment is in some state s = s_t ∈ S. The agent takes an action a = a_t ∈ A according to policy π, which causes a transition of the environment to state s' = s_{t+1} ∈ S and a reward r = r_{t+1} ∈ R for the agent with probability p(s', r | s, a).

The Bellman operator maps an action-value function q = q(s, a) to another action-value function. We do not require that q are Q-values or that r is the actual reward. We define the Bellman operator T^π for policy π as:

    T^π[q](s,a) = Σ_{s',r} p(s',r | s,a) [ r + Σ_{a'} π(a'|s') q(s',a') ].   (A276)

We often rewrite the operator as

    T^π[q](s,a) = r(s,a) + E_{s',a'}[q(s',a')],   (A277)

where

    r(s,a) = Σ_r r p(r | s,a),   (A278)
    E_{s',a'}[q(s',a')] = Σ_{s'} p(s'|s,a) Σ_{a'} π(a'|s') q(s',a').   (A279)

We did not explicitly express the dependency on the policy π and on the state-action pair (s,a) in the expectation E_{s',a'}. A more precise way would be to write E^π_{s',a'}[ · | s,a].

More generally, we have

    T^π[q](s,a) = g(s,a) + E_{s',a'}[q(s',a')].   (A280)
In the following we show properties for this general formulation.
A7.1.1 Monotonically Increasing and Continuous
We assume the general formulation Eq. (A280) of the Bellman operator. Proposition 2.1 on pages 22-23 in Bertsekas and Tsitsiklis, 1996, [14] shows that a fixed point q^π of the Bellman operator exists and that for every q:

    q^π = T^π[q^π],   (A281)
    q^π = lim_{k→∞} (T^π)^k q.   (A282)

The fixed point equation

    q^π = T^π[q^π]   (A283)

is called the Bellman equation or Poisson equation. For the Poisson equation see Equation 33 to Equation 37 for the undiscounted case and Equation 34 and Equation 43 for the discounted case in Alexander Veretennikov, 2016, [137]. This form of the Poisson equation describes the Dirichlet boundary value problem. The Poisson equation is

    q^π(s,a) + ρ = g(s,a) + E_{s',a'}[q^π(s',a') | s,a],   (A284)

where ρ is the long term average reward, that is, the expected reward under the stationary distribution:

    ρ = lim_{T→∞} 1/(T+1) Σ_{t=0}^{T} g(s_t, a_t).   (A285)

We assume ρ = 0, since after some time the agent no longer receives reward in MDPs with finite time horizon or in MDPs with absorbing states that have zero reward.
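A minimal numerical sketch of the fixed-point convergence in Eq. (A282), assuming a hypothetical three-state chain with a single action and an absorbing terminal state (all names and numbers are made up for illustration):

```python
# Hypothetical three-state chain s0 -> s1 -> s2 with a single action;
# s2 is absorbing with zero reward, so q(s2) stays 0.
def bellman_operator(q, g, next_state):
    """One application of T^pi (Eq. A280) for a deterministic
    single-action MDP: T[q](s) = g(s) + q(next_state(s))."""
    return [g[s] + q[next_state[s]] for s in range(len(q))]

g = [1.0, 2.0, 0.0]          # reward g(s, a) on leaving state s
next_state = [1, 2, 2]       # deterministic transitions
q = [0.0, 0.0, 0.0]
for _ in range(10):          # (T^pi)^k q converges to the fixed point q^pi
    q = bellman_operator(q, g, next_state)
# Fixed point: q = [3.0, 2.0, 0.0]
```

After two iterations the values no longer change, which is the fixed-point property q^π = T^π[q^π] of Eq. (A281) for this toy chain.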
T^π is monotonically increasing in its arguments [14]. For q_1 and q_2 with the component-wise condition q_1 > q_2, we have

    T^π[q_1](s,a) − T^π[q_2](s,a)
    = (g(s,a) + E_{s',a'}[q_1(s',a')]) − (g(s,a) + E_{s',a'}[q_2(s',a')])   (A286)
    = E_{s',a'}[q_1(s',a') − q_2(s',a')] > 0,

where ">" is component-wise. The last inequality follows from the component-wise condition q_1 > q_2.

We define the norm ‖·‖_∞, which gives the maximal difference of the Q-values:

    ‖q_1 − q_2‖_∞ = max_{s,a} |q_1(s,a) − q_2(s,a)|.   (A287)

T^π is a non-expansion mapping for q_1 and q_2:

    ‖T^π[q_1] − T^π[q_2]‖_∞ = max_{s,a} |T^π[q_1](s,a) − T^π[q_2](s,a)|   (A288)
    = max_{s,a} | Σ_{s'} p(s'|s,a) Σ_{a'} π(a'|s') (q_1(s',a') − q_2(s',a')) |
    ≤ max_{s,a} Σ_{s'} p(s'|s,a) Σ_{a'} π(a'|s') |q_1(s',a') − q_2(s',a')|
    ≤ max_{s',a'} |q_1(s',a') − q_2(s',a')| = ‖q_1 − q_2‖_∞.

The first inequality is valid since the absolute value is moved into the sum. The second inequality is valid since the expectation depending on (s,a) is replaced by a maximum that does not depend on (s,a). Consequently, the operator T^π is continuous.
A7.1.2 Contraction for Undiscounted Finite Horizon
For time-aware states, we can define another norm with a weighting 0 < γ < 1 which allows for a contraction mapping:

    ‖q_1 − q_2‖_{∞,t} = max_{t=0,…,T} γ^{T−t+1} max_{s_t,a} |q_1(s_t,a) − q_2(s_t,a)|.   (A289)

T^π is a contraction mapping for q_1 and q_2 [14]:

    ‖T^π[q_1] − T^π[q_2]‖_{∞,t}
    = max_{t=0,…,T} γ^{T−t+1} max_{s_t,a} |T^π[q_1](s_t,a) − T^π[q_2](s_t,a)|   (A290)
    = max_{t=0,…,T} γ^{T−t+1} max_{s_t,a} | Σ_{s_{t+1}} p(s_{t+1}|s_t,a) Σ_{a'} π(a'|s_{t+1}) [q_1(s_{t+1},a') − q_2(s_{t+1},a')] |
    ≤ max_{t=0,…,T} γ^{T−t+1} max_{s_t,a} Σ_{s_{t+1}} p(s_{t+1}|s_t,a) Σ_{a'} π(a'|s_{t+1}) |q_1(s_{t+1},a') − q_2(s_{t+1},a')|
    ≤ max_{t=0,…,T} γ^{T−t+1} max_{s_{t+1},a'} |q_1(s_{t+1},a') − q_2(s_{t+1},a')|
    = γ max_{t=0,…,T} γ^{T−(t+1)+1} max_{s_{t+1},a'} |q_1(s_{t+1},a') − q_2(s_{t+1},a')|
    = γ max_{t=1,…,T+1} γ^{T−t+1} max_{s_t,a'} |q_1(s_t,a') − q_2(s_t,a')|
    = γ max_{t=0,…,T} γ^{T−t+1} max_{s_t,a'} |q_1(s_t,a') − q_2(s_t,a')|
    = γ ‖q_1 − q_2‖_{∞,t}.

The equality in the last but one line stems from the fact that all Q-values at t = T+1 are zero and that all Q-values at t = 1 have the same constant value.
Furthermore, all q values of the additionally introduced states at t = T+1 are equal to zero, since for t ≥ T+1 all rewards are zero. We have

    q^π = (T^π)^{T+1}[q],   (A291)

which is correct for the additionally introduced states at time t = T+1 since they are zero. Then, in the next iteration, the Q-values of states at time t = T are correct. After iteration i, the Q-values of states at time t = T − i + 1 are correct. This iteration is called the "backward induction algorithm" [95, 96]. If we perform this iteration for a policy π instead of the optimal policy, then this procedure is called the "policy evaluation algorithm" [95, 96].
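The backward induction algorithm can be sketched as follows for a hypothetical finite-horizon problem with time-aware states, deterministic transitions, and two actions per step; the reward table is made up for illustration:

```python
# Backward induction for a finite-horizon MDP with time-aware states.
# Hypothetical deterministic setup: rewards[t][a] is the reward for
# action a at step t; v[t] is the optimal value at step t.
def backward_induction(T, rewards):
    v = [0.0] * (T + 2)              # v[T+1] = 0, the absorbing extension
    for t in range(T, -1, -1):       # iterate from t = T back to t = 0
        v[t] = max(r + v[t + 1] for r in rewards[t])
    return v

rewards = [[0.0, 1.0], [2.0, -1.0], [0.5, 0.5]]   # horizon T = 2
v = backward_induction(2, rewards)
# v = [3.5, 2.5, 0.5, 0.0]
```

Iteration i fixes the values at time T − i + 1, so after T + 1 steps all values are exact, mirroring the iteration count in the text.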
A7.1.3 Contraction for Undiscounted Infinite Horizon With Absorbing States
A stationary policy is proper if there exists an integer nsuch that from any initial state xthe probability
of achieving the terminal state after nsteps is strictly positive.
If all terminal states are absorbing and cost/reward free and if all stationary policies are proper the
Bellman operator is a contraction mapping with respect to a weighted sup-norm.
The fact that the Bellman operator is a contraction mapping with respect to a weighted sup-norm
has been proved in Tseng, 1990, in Lemma 3 with equation (13) and text thereafter [ 132]. Also
Proposition 1 in Bertsekas and Tsitsiklis, 1991, [ 13], Theorems 3 and 4(b) & 4(c) in Tsitsiklis, 1994,
[133], and Proposition 2.2 on pages 23-24 in Bertsekas and Tsitsiklis, 1996, [ 14] have proved the
same fact.
A7.1.4 Fixed Point of Contraction is Continuous wrt Parameters
The mean q^π and the variance V^π are continuous with respect to the policy π, that is, with respect to π(a'|s'), with respect to the reward distribution p(r|s,a), and with respect to the transition probabilities p(s'|s,a).
A complete metric space or a Cauchy space is a space where every Cauchy sequence of points has a
limit in the space, that is, every Cauchy sequence converges in the space. The Euclidean space Rn
with the usual distance metric is complete. Lemma 2.5 in Jachymski, 1996, is [62]:
Theorem A11 (Jachymski: complete metric space). Let (X, d) be a complete metric space, and let (P, d_P) be a metric space. Let F : P × X → X be continuous in the first variable and contractive in the second variable with the same Lipschitz constant α < 1. For p ∈ P, let x*(p) be the unique fixed point of the map x → F(p, x). Then the mapping p → x*(p) is continuous.
This theorem is Theorem 2.3 in Frigon, 2007, [ 27]. Corollary 4.2 in Feinstein, 2016, generalized the
theorem to set valued operators, that is, these operators may have more than one fixed point [ 25] (see
also [67]). All mappings F(p, ·) must have the same Lipschitz constant α < 1.
A locally compact space is a space where every point has a compact neighborhood. Rnis locally
compact as a consequence of the Heine-Borel theorem. Proposition 3.2 in Jachymski, 1996, is [62]:
Theorem A12 (Jachymski: locally compact complete metric space). Let (X, d) be a locally compact complete metric space, and let (P, d_P) be a metric space. Let F : P × X → X be continuous in the first variable and contractive in the second variable with not necessarily the same Lipschitz constant. For p ∈ P, let x*(p) be the unique fixed point of the map x → F(p, x). Then the mapping p → x*(p) is continuous.
This theorem is Theorem 2.5 in Frigon, 2007, [27] and Theorem 2 in Kwiecinski, 1992, [70]. The mappings F(p, ·) can have different Lipschitz constants.
A7.1.5 t-fold Composition of the Operator
We define the Bellman operator as

    T^π[q](s,a) = g(s,a) + Σ_{s'} p(s'|s,a) Σ_{a'} π(a'|s') q(s',a') = g(s,a) + q^T p(s,a),   (A292)

where q is the vector with value q(s',a') at position (s',a') and p(s,a) is the vector with value p(s'|s,a) π(a'|s') at position (s',a').

In vector notation we obtain the Bellman equation or Poisson equation. For the Poisson equation see Equation 33 to Equation 37 for the undiscounted case and Equation 34 and Equation 43 for the discounted case in Alexander Veretennikov, 2016, [137]. This form of the Poisson equation describes the Dirichlet boundary value problem. The Bellman equation or Poisson equation is

    T^π[q] = g + P q,   (A293)

where P is the row-stochastic matrix with p(s'|s,a) π(a'|s') at position ((s,a),(s',a')).

The Poisson equation is

    q^π + ρ 1 = g + P q^π,   (A294)

where 1 is the vector of ones and ρ is the long term average reward, that is, the expected reward under the stationary distribution:

    ρ = lim_{T→∞} 1/(T+1) Σ_{t=0}^{T} g(s_t, a_t).   (A295)

We assume ρ = 0, since after some time the agent no longer receives reward in MDPs with finite time horizon or in MDPs with absorbing states that have zero reward.

Since P is a row-stochastic matrix, the Perron-Frobenius theorem says that (1) P has largest eigenvalue 1, whose eigenvector corresponds to the steady state, and (2) the absolute value of each (complex) eigenvalue is smaller than or equal to 1. Only the eigenvector to the eigenvalue 1 has purely positive real components.

Equation 7 of Bertsekas and Tsitsiklis, 1991, [13] states

    (T^π)^t[q] = Σ_{k=0}^{t−1} P^k g + P^t q.   (A296)

If p is the stationary distribution vector for P, that is,

    lim_{k→∞} P^k = 1 p^T,   (A297)
    lim_{k→∞} p_0^T P^k = p^T,   (A298)

then

    lim_{k→∞} (1/k) Σ_{i=0}^{k−1} P^i = 1 p^T,   (A299)
    lim_{k→∞} (1/k) Σ_{i=0}^{k−1} p_0^T P^i = p^T.   (A300)
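The limits (A297)–(A300) can be checked numerically; the following sketch runs the power iteration p_0^T P^k → p^T on a hypothetical two-state row-stochastic matrix (all numbers made up):

```python
# Hypothetical two-state Markov chain; rows of P sum to 1.
def stationary_distribution(P, iters=200):
    """Power iteration p0^T P^k -> p^T (Eqs. A297-A298)."""
    n = len(P)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return p

P = [[0.9, 0.1],
     [0.5, 0.5]]
p = stationary_distribution(P)
# Solving p^T = p^T P by hand gives p = (5/6, 1/6)
```

The convergence rate is governed by the second-largest eigenvalue modulus of P (here 0.4), consistent with the Perron-Frobenius statement above.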
A7.2 Q-value Transformations: Shaping Reward, Baseline, and Normalization
The Bellman equation for the action-value function q^π is

    q^π(s,a) = Σ_{s',r} p(s',r | s,a) [ r + Σ_{a'} π(a'|s') q^π(s',a') ].   (A301)

The expected return at time t = 0 is:

    v^π_0 = Σ_{s_0} p(s_0) v^π(s_0).   (A302)

As introduced for the REINFORCE algorithm, we can subtract a baseline v^π_0 from the return. We subtract the baseline v^π_0 from the last reward. Therefore, for the new reward R̃ we have R̃_t = R_t for t ≤ T and R̃_{T+1} = R_{T+1} − v^π_0. Consequently, q̃^π(s_t,a_t) = q^π(s_t,a_t) − v^π_0 for t ≤ T.

The TD update rules are:

    q(s_t,a_t) ← q(s_t,a_t) + α ( R_{t+1} + Σ_a π(a|s_{t+1}) q(s_{t+1},a) − q(s_t,a_t) ).   (A303)

The δ-errors are

    R̃_{t+1} + Σ_a π(a|s_{t+1}) q̃(s_{t+1},a) − q̃(s_t,a_t)
    = R_{t+1} + Σ_a π(a|s_{t+1}) (q(s_{t+1},a) − v^π_0) − (q(s_t,a_t) − v^π_0)
    = R_{t+1} + Σ_a π(a|s_{t+1}) q(s_{t+1},a) − q(s_t,a_t)   (A304)

and for the last step

    R̃_{T+1} − q̃(s_T,a_T) = (R_{T+1} − v^π_0) − (q(s_T,a_T) − v^π_0) = R_{T+1} − q(s_T,a_T).   (A305)

If we set

    q̃(s_t,a_t) = q(s_t,a_t) − v^π_0  for t ≤ T,   (A306)
    R̃_t = R_t  for t ≤ T,   R̃_{T+1} = R_{T+1} − v^π_0  for t = T+1,   (A307)

then the δ-errors and the updates remain the same for q and q̃. We are equally far away from the optimal solution in both cases.

Removing the offset v^π_0 at the end by R̃_{T+1} = R_{T+1} − v^π_0 can also be derived via reward shaping. However, the offset has to be added at the beginning: R̃_1 = R_1 + v^π_0. Reward shaping requires for the shaping reward F a potential function φ [87, 143]:

    F(s_t, a_t, s_{t+1}) = φ(s_{t+1}) − φ(s_t).   (A308)

For introducing a reward of c at time t = k and removing it from time t = m < k we set:

    φ(s_t) = 0  for t ≤ m;   −c  for m+1 ≤ t ≤ k;   0  for t > k,   (A309)

then the shaping reward is

    F(s_t, a_t, s_{t+1}) = 0  for t < m;   −c  for t = m;   0  for m+1 ≤ t < k;   c  for t = k;   0  for t > k.   (A310)

For k = T, m = 0, and c = −v^π_0 we obtain the above situation but with R̃_1 = R_1 + v^π_0 and R̃_{T+1} = R_{T+1} − v^π_0, that is, v^π_0 is removed at the end and added at the beginning. All Q-values except q(s_0,a_0) are decreased by v^π_0. In the general case, all Q-values q(s_t,a_t) with m+1 ≤ t ≤ k are increased by c.

Q-value normalization: We apply reward shaping [87, 143] for normalization of the Q-values. The potential φ(s) defines the shaping reward F(s_t, a_t, s_{t+1}) = φ(s_{t+1}) − φ(s_t). The optimal policies do not change and the Q-values become

    q_new(s_t,a_t) = q(s_t,a_t) − φ(s_t).   (A311)

We change the Q-values for all 1 ≤ t ≤ T, but not for t = 0 and t = T+1. The first and the last Q-values are not normalized. All the shaped reward is added/subtracted to/from the initial and the last reward.

• The maximal Q-values are zero and the non-optimal Q-values are negative for all 1 ≤ t ≤ T:

    φ(s_t) = max_a q(s_t,a).   (A312)

• The minimal Q-values are zero and all other Q-values are positive for all 1 ≤ t ≤ T − 1:

    φ(s_t) = min_a q(s_t,a).   (A313)
A7.3 Alternative Definition of State Enrichment
Next, we define state-enriched processes P̃ compared to P. The state s̃ of P̃ is enriched with deterministic information compared to a state s of P. The enriched information in s̃ can be computed from the state-action pair (s̃, a) and the reward r. Enrichments may be the accumulated reward, the count of the time step, a count of how often a certain action has been performed, a count of how often a certain state has been visited, etc. Givan et al. have already shown that state-enriched Markov decision processes (MDPs) preserve the optimal action-value and action sequence properties as well as the optimal policies of the model [34]. Theorem 7 and Corollary 9.1 in Givan et al. proved these properties [34] by bisimulations (stochastically bisimilar MDPs). A homomorphism between MDPs maps an MDP to another one with corresponding reward and transition probabilities. Ravindran and Barto have shown that solving the original MDP can be done by solving a homomorphic image [99]. Therefore, Ravindran and Barto have also shown that state-enriched MDPs preserve the optimal action-value and action sequence properties. Li et al. give an overview over state abstraction or state aggregation for MDPs, which covers state-enriched MDPs [73].

Definition A14. A decision process P̃ is state-enriched compared to a decision process P if the following conditions hold. If s̃ is the state of P̃, then there exists a function f : s̃ → s with f(s̃) = s, where s is the state of P. There exists a function g : s̃ → R, where g(s̃) gives the additional information of state s̃ compared to f(s̃). There exists a function that reconstructs s̃ from (f(s̃), g(s̃)), that is, the state s̃ can be constructed from the original state and the additional information. There exists a function H with g(s̃') = H(r, s̃, a), where s̃' is the next state and r the reward. H ensures that g(s̃') of the next state s̃' can be computed from the reward r, the actual state s̃, and the actual action a. Consequently, s̃' can be computed from (r, s̃, a). For all s̃ and s̃' the following holds:

    p̃(s̃', r | s̃, a) = p(f(s̃'), r | f(s̃), a),   (A314)
    p̃_0(s̃') = p_0(f(s̃')),   (A315)

where p̃_0 and p_0 are the probabilities of the initial states of P̃ and P, respectively.
We prove the following theorem, even though it has been proved several times, as mentioned above.
Theorem A13. If a decision process P̃ is state-enriched compared to P, then for each optimal policy π̃* of P̃ there exists an equivalent optimal policy π* of P, and vice versa, with π̃*(s̃) = π*(f(s̃)). The optimal return is the same for P̃ and P.

Proof. We prove by induction that q̃^π̃(s̃, a) = q^π(f(s̃), a) if π̃(s̃) = π(f(s̃)).

Basis: the end of the sequence. For t > T we have q̃^π̃(s̃, a) = q^π(f(s̃), a) = 0, since no policy receives reward for t > T.

Inductive step (t → t−1): Assume q̃^π̃(s̃', a') = q^π(f(s̃'), a') for the next state s̃' and next action a'.

    q̃^π̃(s̃, a) = Ẽ_π̃[ G̃_t | s̃_t = s̃, A_t = a ]
    = Σ_{s̃',r} p̃(s̃', r | s̃, a) [ r + Σ_{a'} π̃(a'|s̃') q̃^π̃(s̃', a') ]
    = Σ_{f(s̃'), g(s̃'), r} p̃(s̃', r | s̃, a) [ r + Σ_{a'} π̃(a'|s̃') q̃^π̃(s̃', a') ]   (A316)
    = Σ_{f(s̃'), H(r,s̃,a), r} p̃(s̃', r | s̃, a) [ r + Σ_{a'} π̃(a'|s̃') q̃^π̃(s̃', a') ]
    = Σ_{f(s̃'), r} p̃(s̃', r | s̃, a) [ r + Σ_{a'} π̃(a'|s̃') q̃^π̃(s̃', a') ]
    = Σ_{f(s̃'), r} p(f(s̃'), r | f(s̃), a) [ r + Σ_{a'} π(a'|f(s̃')) q̃^π̃(s̃', a') ]
    = Σ_{f(s̃'), r} p(f(s̃'), r | f(s̃), a) [ r + Σ_{a'} π(a'|f(s̃')) q^π(f(s̃'), a') ]
    = q^π(f(s̃), a).

For the induction step 1 → 0 we use p̃_0(s̃_0, r) = p_0(f(s̃_0), r) instead of p̃(s̃', r | s̃, a) = p(f(s̃'), r | f(s̃), a).

It follows that q̃*(s̃, a) = q*(f(s̃), a), and therefore

    π̃*(s̃) = argmax_a q̃*(s̃, a) = argmax_a q*(f(s̃), a) = π*(f(s̃)).   (A317)

Using Bellman's optimality equation would give the same result, where in the above equation both Σ_{a'} π(a'|f(s̃')) and Σ_{a'} π̃(a'|s̃') are replaced by max_{a'}.
Theorem A14. If a Markov decision process P̃ is state-enriched compared to the MDP P, then for each optimal policy π̃* of P̃ there exists an equivalent optimal policy π* of P, and vice versa, with π̃*(s̃) = π*(f(s̃)). The optimal return is the same for P̃ and P.

Proof. The MDP P̃ is a homomorphic image of P. For state-enrichment, the mapping g is bijective, therefore the optimal policies in P̃ and P are equal according to Lemma A1. The optimal return is also equal since it does not change via state-enrichment.
A7.4 Variance of the Weighted Sum of a Multinomial Distribution
State transitions are multinomial distributions and the future expected reward is a weighted sum of
multinomial distributions. Therefore, we are interested in the variance of the weighted sum of a
multinomial distribution. Since we have

    E_{s',a'}[q(s',a') | s,a] = Σ_{s'} p(s'|s,a) Σ_{a'} π(a'|s') q(s',a'),   (A318)

the variance of E_{s',a'}[q(s',a')] is determined by the variance of the multinomial distribution p(s'|s,a). In the following we derive the variance of the estimate of a linear combination of variables of a multinomial distribution like Σ_{s'} p(s'|s,a) f(s').

A multinomial distribution with parameters (p_1, …, p_N) as event probabilities satisfying Σ_{i=1}^{N} p_i = 1 and support x_i ∈ {0, …, n}, i ∈ {1, …, N} for n trials, that is Σ_i x_i = n, has

    pdf:   n! / (x_1! ⋯ x_N!) · p_1^{x_1} ⋯ p_N^{x_N},   (A319)
    mean:   E[X_i] = n p_i,   (A320)
    variance:   Var[X_i] = n p_i (1 − p_i),   (A321)
    covariance:   Cov[X_i, X_j] = −n p_i p_j,  (i ≠ j),   (A322)

where X_i is the random variable and x_i the actual count.

A linear combination of random variables has variance

    Var[ Σ_{i=1}^{N} a_i X_i ] = Σ_{i,j=1}^{N} a_i a_j Cov[X_i, X_j]
    = Σ_{i=1}^{N} a_i² Var[X_i] + Σ_{i≠j} a_i a_j Cov[X_i, X_j].   (A323)

The variance of estimating the mean X̄ of independent random variables (X_1, …, X_n) that all have variance σ² is:

    Var[X̄] = Var[ (1/n) Σ_{i=1}^{n} X_i ] = (1/n²) Σ_{i=1}^{n} Var[X_i] = (1/n²) Σ_{i=1}^{n} σ² = σ²/n.   (A324)

When estimating the mean ȳ over n samples of a linear combination of variables of a multinomial distribution y = Σ_{i=1}^{N} a_i X_i, where each y has n_y trials, we obtain:

    Var[ȳ] = σ_y²/n = (1/n) ( Σ_{i=1}^{N} a_i² n_y p_i (1 − p_i) − Σ_{i≠j} a_i a_j n_y p_i p_j )   (A325)
    = (n_y/n) ( Σ_{i=1}^{N} a_i² p_i (1 − p_i) − Σ_{i≠j} a_i a_j p_i p_j )
    = (n_y/n) ( Σ_{i=1}^{N} a_i² p_i − Σ_{(i,j)=(1,1)}^{(N,N)} a_i a_j p_i p_j )
    = (n_y/n) ( Σ_{i=1}^{N} a_i² p_i − ( Σ_{i=1}^{N} a_i p_i )² ).
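Formula (A325) (for a single sample, n = 1) can be verified against an exact enumeration; the sketch below checks the N = 2 case, where the multinomial reduces to a binomial, with made-up coefficients:

```python
import math

def analytic_var(a, p, ny):
    """Var[sum_i a_i X_i], X ~ Multinomial(ny, p), by Eq. (A325), n = 1."""
    m = sum(ai * pi for ai, pi in zip(a, p))
    return ny * (sum(ai * ai * pi for ai, pi in zip(a, p)) - m * m)

def exact_var_binomial(a, p1, ny):
    """Exact variance for N = 2 by enumerating the binomial pmf."""
    mean = second_moment = 0.0
    for x1 in range(ny + 1):
        prob = math.comb(ny, x1) * p1 ** x1 * (1 - p1) ** (ny - x1)
        y = a[0] * x1 + a[1] * (ny - x1)
        mean += prob * y
        second_moment += prob * y * y
    return second_moment - mean * mean

a, p1, ny = [2.0, -1.0], 0.3, 10
v_formula = analytic_var(a, [p1, 1.0 - p1], ny)
v_exact = exact_var_binomial(a, p1, ny)
# Both equal (a1 - a2)^2 * ny * p1 * (1 - p1) = 18.9
```

For N = 2 the formula collapses to (a_1 − a_2)² n_y p_1 p_2, which the enumeration reproduces.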
A8 Long Short-Term Memory (LSTM)
A8.1 LSTM Introduction
Recently, Long Short-Term Memory (LSTM; [ 49,54,55]) networks have emerged as the best-
performing technique in speech and language processing. LSTM networks have been overwhelmingly successful in different speech and language applications, including handwriting recognition [37],
generation of writings [ 38], language modeling and identification [ 35,146], automatic language
translation [ 126], speech recognition [ 107,29] analysis of audio data [ 78], analysis, annotation,
and description of video data [ 22,136,123]. LSTM has facilitated recent benchmark records
in TIMIT phoneme recognition (Google), optical character recognition, text-to-speech synthesis
(Microsoft), language identification (Google), large vocabulary speech recognition (Google), English-
to-French translation (Google), audio onset detection, social signal classification, image caption
generation (Google), video-to-text description, end-to-end speech recognition (Baidu), and semantic
representations. In the proceedings of the flagship conference ICASSP 2015 (40th IEEE International
Conference on Acoustics, Speech and Signal Processing, Brisbane, Australia, April 19–24, 2015), 13
papers had “LSTM” in their title, yet many more contributions described computational approaches
that make use of LSTM.
The key idea of LSTM is the use of memory cells that allow for constant error flow during training.
Thereby, LSTM avoids the vanishing gradient problem , that is, the phenomenon that training errors
are decaying when they are back-propagated through time [ 49,52]. The vanishing gradient problem
severely impedes credit assignment in recurrent neural networks, i.e. the correct identification of
relevant events whose effects are not immediate, but observed with possibly long delays. LSTM, by
its constant error flow, avoids vanishing gradients and, hence, allows for uniform credit assignment ,
i.e. all input signals obtain a similar error signal. Other recurrent neural networks are not able to
assign the same credit to all input signals, therefore they are very limited concerning the solutions
they will find. Uniform credit assignment enabled LSTM networks to excel in speech and language
tasks: if a sentence is analyzed, then the first word can be as important as the last word. Via uniform
credit assignment, LSTM networks regard all words of a sentence equally. Uniform credit assignment makes it possible to consider all input information at each phase of learning, no matter where it is located in
the input sequence. Therefore, uniform credit assignment reveals many more solutions to the learning
algorithm which would otherwise remain hidden.
Figure A10: LSTM memory cell without peepholes. z is the vector of cell input activations, i is the vector of input gate activations, f is the vector of forget gate activations, c is the vector of memory cell states, o is the vector of output gate activations, and y is the vector of cell output activations. The activation functions are g for the cell input, h for the cell state, and σ for the gates. Data flow is either "feed-forward" without delay or "recurrent" with a one-step delay. "Input" connections are from the external input to the LSTM network, while "recurrent" connections take inputs from other memory cells and hidden units of the LSTM network with a delay of one time step.
A8.2 LSTM in a Nutshell
The central processing and storage unit for LSTM recurrent networks is the memory cell . As
already mentioned, it avoids vanishing gradients and allows for uniform credit assignment. The
most commonly used LSTM memory cell architecture in the literature [ 39,112] contains forget
gates [ 31,32] and peephole connections [ 30]. In our previous work [ 57,53], we found that peephole
connections are only useful for modeling time series, but not for language, meta-learning, or biological
sequences. That peephole connections can be removed without a decrease in performance was recently confirmed in a large assessment, where different LSTM architectures have been tested [40]. While
LSTM networks are highly successful in various applications, the central memory cell architecture
was not modified since 2000 [ 112]. A memory cell architecture without peepholes is depicted in
Figure A10.
In our definition of an LSTM network, all units of one kind are pooled to a vector: z is the vector of cell input activations, i is the vector of input gate activations, f is the vector of forget gate activations, c is the vector of memory cell states, o is the vector of output gate activations, and y is the vector of cell output activations. We assume an input sequence, where the input vector at time t is x_t. The matrices W_z, W_i, W_f, and W_o correspond to the weights of the connections between inputs and cell input, input gate, forget gate, and output gate, respectively. The vectors b_z, b_i, b_f, and b_o are the bias vectors of cell input, input gate, forget gate, and output gate, respectively. The activation functions are g for the cell input, h for the cell state, and σ for the gates, where these functions are evaluated in a component-wise manner if they are applied to vectors. Typically, either the sigmoid σ(x) = 1/(1 + exp(−x)) or tanh are used as activation functions. ⊙ denotes the point-wise multiplication of two
vectors. Without peepholes, the LSTM memory cell forward pass rules are (see Figure A10):
    z_t = g(W_z x_t + b_z)                 cell input    (A326)
    i_t = σ(W_i x_t + b_i)                 input gate    (A327)
    f_t = σ(W_f x_t + b_f)                 forget gate   (A328)
    c_t = i_t ⊙ z_t + f_t ⊙ c_{t−1}        cell state    (A329)
    o_t = σ(W_o x_t + b_o)                 output gate   (A330)
    y_t = o_t ⊙ h(c_t)                     cell output   (A331)
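A scalar, single-cell sketch of the forward pass rules (A326)–(A331) with hypothetical weights: large positive gate biases saturate all gates at 1, in which case the cell state simply sums the squashed inputs, exhibiting the uniform credit assignment behavior discussed in this appendix.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, c_prev, W, b):
    """One step of a single LSTM memory cell, Eqs. (A326)-(A331),
    scalar input, no peepholes."""
    z = math.tanh(W["z"] * x + b["z"])   # cell input, g = tanh
    i = sigmoid(W["i"] * x + b["i"])     # input gate
    f = sigmoid(W["f"] * x + b["f"])     # forget gate
    c = i * z + f * c_prev               # cell state
    o = sigmoid(W["o"] * x + b["o"])     # output gate
    y = o * math.tanh(c)                 # cell output, h = tanh
    return y, c

# Hypothetical weights; the large positive biases saturate all gates at 1.
W = {"z": 1.0, "i": 0.0, "f": 0.0, "o": 0.0}
b = {"z": 0.0, "i": 100.0, "f": 100.0, "o": 100.0}
c = 0.0
for x in [0.5, -0.2, 0.1]:
    y, c = lstm_step(x, c, W, b)
# With open gates, c = tanh(0.5) + tanh(-0.2) + tanh(0.1)
```

With f_t = 1 the gradient through the cell state is constant, which is the constant-error-flow property the text describes.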
A8.3 Long-Term Dependencies vs. Uniform Credit Assignment
The LSTM network has been proposed with the aim to learn long-term dependencies in sequences
which span over long intervals [ 55,56,50,51]. However, besides extracting long-term dependencies,
LSTM memory cells have another, even more important, advantage in sequence learning: as already
described in the early 1990s, LSTM memory cells allow for uniform credit assignment , that is, the
propagation of errors back to inputs without scaling them [ 49]. For uniform credit assignment of
current LSTM architectures, the forget gate fmust be one or close to one. A memory cell without
an input gateijust sums up all the squashed inputs it receives during scanning the input sequence.
Thus, such a memory cell is equivalent to a unit that sees all sequence elements at the same time,
as has been shown via the “Ersatzschaltbild” [ 49]. If an output error occurs only at the end of the
sequence, such a memory cell, via backpropagation, supplies the same delta error at the cell input
unit z at every time step. Thus, all inputs obtain the same credit for producing the correct output and
are treated on an equal level and, consequently, the incoming weights to a memory cell are adjusted
by using the same delta error at the input unit z.
In contrast to LSTM memory cells, standard recurrent networks scale the delta error and assign
different credit to different inputs. The more recent the input, the more credit it obtains. The first
inputs of the sequence are hidden from the final states of the recurrent network. In many learning
tasks, however, important information is distributed over the entire length of the sequence and can
even occur at the very beginning. For example, in language- and text-related tasks, the first words are
often important for the meaning of a sentence. If the credit assignment is not uniform along the input
sequence, then learning is very limited. Learning would start by trying to improve the prediction
solely by using the most recent inputs. Therefore, the solutions that can be found are restricted to
those that can be constructed if the last inputs are considered first. Thus, only those solutions are
found that are accessible by gradient descent from regions in the parameter space that only use the
most recent input information. In general, these limitations lead to sub-optimal solutions, since
learning gets trapped in local optima. Typically, these local optima correspond to solutions which
efficiently exploit the most recent information in the input sequence, while information way back in
the past is neglected.
A8.4 Special LSTM Architectures for Contribution Analysis
A8.4.1 LSTM for Integrated Gradients
For Integrated Gradients contribution analysis with LSTM, we make the following assumptions:

(A1) f_t = 1 for all t. That is, the forget gate is always 1 and nothing is forgotten. We assume uniform credit assignment, which is ensured by the forget gate set to one.
(A2) o_t = 1 for all t. That is, the output gate is always 1 and the full cell state is passed on.
(A3) We set h = a_h tanh with a_h = 1, 2, 4.
(A4) We set g = a_g tanh with a_g = 1, 2, 4.
(A5) The cell input z is only connected to the input but not to other memory cells. W_z has only connections to the input.
(A6) The input gate i is not connected to the input, that is, W_i has only connections to other memory cells. This ensures that contributions are assigned only via z to the input.
(A7) The input gate i has a negative bias, that is, b_i < 0. The negative bias reduces the drift effect, that is, the tendency of the memory content c to either increase or decrease over time. Typical values are b_i = −1, −2, −3, −4, −5.
(A8) The memory cell content is initialized with zero at time t = 0, that is, c_0 = 0.
The resulting LSTM forward pass rules for Integrated Gradients are:

    z_t = a_g tanh(W_z x_t + b_z)          cell input    (A332)
    i_t = σ(W_i x_t + b_i)                 input gate    (A333)
    c_t = i_t ⊙ z_t + c_{t−1}              cell state    (A334)
    y_t = a_h tanh(c_t)                    cell output   (A335)
See Figure A11 which depicts these forward pass rules for Integrated Gradients.
Figure A11: LSTM memory cell used for Integrated Gradients (IG). Forget gates and output gates are set to 1, since otherwise they can modify all cell inputs at times after they have been observed, which can make the dynamics highly nonlinear.
A8.4.2 LSTM for LRP
LRP has already been used for LSTM in order to identify important terms in sentiment analysis [ 1].
In texts, positive and negative terms with respect to the topic could be identified.
For LRP contribution analysis with LSTM, we make the following assumptions:

(A1) f_t = 1 for all t. That is, the forget gate is always 1 and nothing is forgotten. We assume uniform credit assignment, which is ensured by the forget gate set to one.
(A2) g > 0, that is, g is positive. For example, we can use a scaled sigmoid g(x) = a_g σ(x) with σ(x) = 1/(1 + exp(−x)) and a_g = 2, 3, 4. Methods like LRP have problems with negative contributions which cancel with positive contributions [84]. With a positive g, all contributions are positive. The cell input z (the function g) has a negative bias, that is, b_z < 0. This is important to avoid the drift effect. The drift effect is that the memory content only gets positive contributions, which leads to an increase of c over time. Typical values are b_z = −1, −2, −3, −4, −5.
(A3) We want to ensure that h(0) = 0. If the memory content is zero, then nothing is transferred to the next layer. Therefore we set h = a_h tanh with a_h = 1, 2, 4.
(A4) The cell input z is only connected to the input but not to other memory cells. W_z has only connections to the input. This ensures that LRP assigns relevance via z to the input and z is not disturbed by redistributing relevance to the network.
(A5) The input gate i is not connected to the input, that is, W_i has only connections to other memory cells. This ensures that LRP assigns relevance only via z to the input.
(A6) The output gate o is not connected to the input, that is, W_o has only connections to other memory cells. This ensures that LRP assigns relevance only via z to the input.
(A7) The input gate i has a negative bias, that is, b_i < 0. As with the cell input, the negative bias avoids the drift effect. Typical values are b_i = −1, −2, −3, −4.
(A8) The output gate o may also have a negative bias, that is, b_o < 0. This allows different memory cells to be brought in at different time points. It is related to resource allocation.
(A9) The memory cell content is initialized with zero at time t = 0, that is, c_0 = 0. The memory cell content c_t is non-negative, c_t ≥ 0, since z > 0 and i > 0.
The resulting LSTM forward pass rules for LRP are:
$$z_t = a_g\, \sigma\!\left(W_z x_t + b_z\right) \qquad \text{cell input} \qquad (A336)$$
$$i_t = \sigma\!\left(W_i x_t + b_i\right) \qquad \text{input gate} \qquad (A337)$$
$$c_t = i_t z_t + c_{t-1} \qquad \text{cell state} \qquad (A338)$$
$$o_t = \sigma\!\left(W_o x_t + b_o\right) \qquad \text{output gate} \qquad (A339)$$
$$y_t = o_t\, a_h \tanh\!\left(c_t\right) \qquad \text{cell output} \qquad (A340)$$
See Figure A12, which depicts these forward pass rules for LRP. However, gates may be used while no relevance is assigned to them, which may lead to inconsistencies.
LRP and Contribution Propagation for LSTM. We analyze Layer-wise Relevance Propagation (LRP) and contribution propagation for LSTM networks. A single memory cell can be described by
$$c_t = i_t z_t + c_{t-1}. \qquad (A341)$$
Here we treat $i_t$ like a weight for $z_t$, and $c_{t-1}$ has weight 1.
For positive values of $i_t$, $z_t$, and $c_{t-1}$, both LRP and contribution propagation lead to
$$R_{c_t \leftarrow y_t} = R_{y_t} \qquad (A342)$$
$$R_{c_t} = R_{c_t \leftarrow c_{t+1}} + R_{c_t \leftarrow y_t} \qquad (A343)$$
$$R_{c_{t-1} \leftarrow c_t} = \frac{c_{t-1}}{c_t}\, R_{c_t} \qquad (A344)$$
$$R_{z_t \leftarrow c_t} = \frac{i_t z_t}{c_t}\, R_{c_t}. \qquad (A345)$$
Since we predict only at the last step $t = T$, we have $R_{y_t} = 0$ for $t < T$. For $t = T$ we obtain $R_{c_T} = R_{y_T}$, since $R_{c_T \leftarrow c_{T+1}} = 0$.
We obtain for $t = 1, \ldots, T$:
$$R_{c_T} = R_{y_T} \qquad (A346)$$
$$R_{c_{t-1}} = \frac{c_{t-1}}{c_t}\, R_{c_t}, \qquad (A347)$$
which gives
$$R_{c_t} = R_{y_T} \prod_{\tau = t+1}^{T} \frac{c_{\tau-1}}{c_\tau} = \frac{c_t}{c_T}\, R_{y_T} \qquad (A348)$$
and consequently, as $c_0 = 0$, we obtain
$$R_{c_0} = 0, \qquad (A349)$$
$$R_{z_t} = \frac{i_t z_t}{c_T}\, R_{y_T}. \qquad (A350)$$
Figure A12: LSTM memory cell used for Layer-Wise Relevance Propagation (LRP). $z$ is the vector of cell input activations, $i$ is the vector of input gate activations, $c$ is the vector of memory cell states, $o$ is the vector of output gate activations, and $y$ is the vector of cell output activations. The activation functions are the sigmoid $\sigma(x) = a_g \frac{1}{1+\exp(-x)}$ and the cell state activation $h(x) = a_h \tanh(x)$. Data flow is either "feed-forward" without delay or "recurrent" with a one-step delay. External input reaches the LSTM network only via the cell input $z$. All gates only receive recurrent input, that is, from other memory cells.
Since we assume $c_0 = 0$, we have
$$c_T = \sum_{t=1}^{T} i_t z_t \qquad (A351)$$
and therefore
$$R_{z_t} = \frac{i_t z_t}{\sum_{\tau=1}^{T} i_\tau z_\tau}\, R_{y_T}. \qquad (A352)$$
Therefore the relevance $R_{y_T}$ is distributed across the inputs $z_t$ for $t = 1, \ldots, T$, where input $z_t$ obtains relevance $R_{z_t}$.
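The redistribution rule of Eq. (A350)/(A352) is a one-liner. The sketch below uses made-up gate and cell-input values; it only illustrates that each time step receives relevance proportional to its contribution $i_t z_t$ to the final cell state and that relevance is conserved.

```python
import numpy as np

def redistribute_relevance(i_seq, z_seq, R_yT):
    """LRP redistribution for the gated LSTM memory cell (Eq. A350/A352):
    time step t receives relevance proportional to its contribution
    i_t * z_t to the final cell state c_T (with c_0 = 0)."""
    contrib = i_seq * z_seq
    return contrib / contrib.sum() * R_yT

i_seq = np.array([0.9, 0.1, 0.5])     # input gate activations (toy values)
z_seq = np.array([1.0, 2.0, 1.0])     # positive cell inputs (toy values)
R = redistribute_relevance(i_seq, z_seq, R_yT=3.0)
```

By construction the per-step relevances sum to the relevance $R_{y_T}$ at the output, which is the conservation property of LRP.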
A8.4.3 LSTM for Nondecreasing Memory Cells
Contribution analysis is made simpler if memory cells are nondecreasing, since the contribution of each input to each memory cell is well defined. The problem that a negative and a positive input cancel each other is avoided. For nondecreasing memory cells and contribution analysis with LSTM, we make the following assumptions:
(A1) $f_t = 1$ for all $t$. That is, the forget gate is always 1 and nothing is forgotten. We assume uniform credit assignment, which is ensured by the forget gate set to one.
(A2) $g > 0$, that is, $g$ is positive. For example, we can use a scaled sigmoid $g(x) = a_g \sigma(x)$ with $\sigma(x) = \frac{1}{1+\exp(-x)}$ and $a_g \in \{2, 3, 4\}$. With a positive $g$ all contributions are positive. The cell input $z$ (the function $g$) has a negative bias, that is, $b_z < 0$. This is important to avoid the drift effect. The drift effect is that the memory content only gets positive contributions, which leads to an increase of $c$ over time. Typical values are $b_z \in \{-1, -2, -3, -4, -5\}$.
(A3) We want to ensure that $h(0) = 0$. If the memory content is zero, then nothing is transferred to the next layer. Therefore we set $h = a_h \tanh$ with $a_h \in \{1, 2, 4\}$.
(A4) The cell input $z$ is only connected to the input but not to other memory cells. $W_z$ has only connections to the input.
(A5) The input gate $i$ is not connected to the input, that is, $W_i$ has only connections to other memory cells.
(A6) The output gate $o$ is not connected to the input, that is, $W_o$ has only connections to other memory cells.
(A7) The input gate $i$ has a negative bias, that is, $b_i < 0$. As with the cell input, the negative bias avoids the drift effect. Typical values are $b_i \in \{-1, -2, -3, -4\}$.
(A8) The output gate $o$ may also have a negative bias, that is, $b_o < 0$. This allows different memory cells to be brought in at different time points. It is related to resource allocation.
(A9) The memory cell content is initialized with zero at time $t = 0$, that is, $c_0 = 0$. We ensured via the architecture that $c_t \geq 0$ and $c_{t+1} \geq c_t$, that is, the memory cells are nonnegative and nondecreasing.
The resulting LSTM forward pass rules for nondecreasing memory cells are:
$$z_t = a_g\, \sigma\!\left(W_z x_t + b_z\right) \qquad \text{cell input} \qquad (A353)$$
$$i_t = \sigma\!\left(W_i x_t + b_i\right) \qquad \text{input gate} \qquad (A354)$$
$$c_t = i_t z_t + c_{t-1} \qquad \text{cell state} \qquad (A355)$$
$$o_t = \sigma\!\left(W_o x_t + b_o\right) \qquad \text{output gate} \qquad (A356)$$
$$y_t = o_t\, a_h \tanh\!\left(c_t\right) \qquad \text{cell output} \qquad (A357)$$
See Figure A13 for an LSTM memory cell that is nondecreasing.
A8.4.4 LSTM without Gates
The simplest LSTM architecture for contribution analysis does not use any gates. Complex dynamics that would have to be treated in the contribution analysis are thereby avoided. For LSTM without gates, we make the following assumptions:
(A1) $f_t = 1$ for all $t$. That is, the forget gate is always 1 and nothing is forgotten.
(A2) $o_t = 1$ for all $t$. That is, the output gate is always 1.
(A3) $i_t = 1$ for all $t$. That is, the input gate is always 1.
(A4) $g > 0$, that is, $g$ is positive. For example, we can use a scaled sigmoid $g(x) = a_g \sigma(x)$ with $\sigma(x) = \frac{1}{1+\exp(-x)}$ and $a_g \in \{2, 3, 4\}$. With a positive $g$ all contributions are positive. The cell input $z$ (the function $g$) has a negative bias, that is, $b_z < 0$. This is important to avoid the drift effect. The drift effect is that the memory content only gets positive contributions, which leads to an increase of $c$ over time. Typical values are $b_z \in \{-1, -2, -3, -4, -5\}$.
(A5) We want to ensure that $h(0) = 0$. If the memory content is zero, then nothing is transferred to the next layer. Therefore we set $h = a_h \tanh$ with $a_h \in \{1, 2, 4\}$.
(A6) The memory cell content is initialized with zero at time $t = 0$, that is, $c_0 = 0$.
The resulting LSTM forward pass rules are:
$$z_t = a_g\, \sigma\!\left(W_z x_t + b_z\right) \qquad \text{cell input} \qquad (A358)$$
$$c_t = z_t + c_{t-1} \qquad \text{cell state} \qquad (A359)$$
$$y_t = a_h \tanh\!\left(c_t\right) \qquad \text{cell output} \qquad (A360)$$
See Figure A14 for an LSTM memory cell without gates, which perfectly distributes the relevance across the input.
Figure A13: A nondecreasing LSTM memory cell.
Figure A14: LSTM memory cell without gates.
A9 Contribution Analysis
A9.1 Difference of Consecutive Predictions for Sequences
General Approach. The idea is to assess the information gain that is induced by an input at a
particular time step. This information gain is used for predicting the target at sequence end by
determining the change in prediction. The input to a recurrent neural network is the sequence
x= (x1;:::;xd)with targetyd, which is only given at sequence end. The prefix sequence xtof
lengtht6disxt= (x1;:::;xt).Fpredicts the target ydat every time step t:
F(xt) =yd: (A361)
We can define the decomposition of Fthrough contributions at different time steps
h0=F(x0); (A362)
ht=F(xt) F(xt 1)fort>0; (A363)
whereF(x0)is a predefined constant. We have
F(xt) =tX
=0h: (A364)
We assume a loss function for Fthat is minimal if FFminpredicts the expected yd
Fmin(xt) = E [ydjxt]: (A365)
Then
h0= E [yd]; (A366)
ht= E [ydjxt] E [ydjxt 1]fort>0: (A367)
In this case, the contributions are the change in the expectation of the target that will be observed at
sequence end. The contribution can be viewed as the information gain in time step tfor predicting
the target. If we cannot ensure that Fpredicts the target at every time step, then other contribution
analysis methods must be employed. For attributing the prediction of a deep network to its input features, several contribution analysis methods have been proposed. We consider Input Zeroing, Integrated Gradients (IG), and Layer-Wise Relevance Propagation (LRP).
Linear Models and Coefficient of Determination. We consider linear models and the average
gain of information about the reward at sequence end if we go one time step further in the input
sequence. By adding a variable, that is, another sequence element, the mean squared error (MSE)
decreases, which is the amount by which the expectation improves due to new information. But by what amount does the MSE decrease on average? Here, we consider linear models. For linear models
we are interested in how much the coefficient of determination increases if we add another variable,
that is, if we see another input.
We consider the feature vector $x = (x_1, x_2, \ldots, x_k)^T$ from which the target $y$ (the reward at sequence end) has to be predicted. We assume to have $n$ pairs $(x_i, y_i)$, $1 \leq i \leq n$, as training set. The prediction or estimation of $y_i$ from $x_i$ is $\hat{y}_i$ with $\hat{y}_i = F(x_i)$. The vector of all training labels is $y = (y_1, \ldots, y_n)$ and the training feature matrix is $X = (x_1, \ldots, x_n)$. We define the mean squared error (MSE) as
$$\mathrm{mse}(y, X) = \frac{1}{n-1} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2. \qquad (A368)$$
The coefficient of determination $R^2$ is equal to the squared correlation between the target $y$ and its prediction $\hat{y}$. $R^2$ is given by
$$R^2 = 1 - \frac{\frac{1}{n-1} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2}{\frac{1}{n-1} \sum_{i=1}^{n} (y_i - \bar{y})^2} = 1 - \frac{\mathrm{mse}(y, X)}{s_y^2}. \qquad (A369)$$
Therefore, $R^2$ is one minus the ratio of the mean squared error and the mean total sum of squares. $R^2$ is a strictly monotonically decreasing function of the mean squared error.
We will give a breakdown of the factors that determine how much each variable adds to $R^2$ [100, Chapter 10.6, p. 263]. The feature vector $x$ is expanded by one additional feature $z$: $w = (x_1, x_2, \ldots, x_k, z)^T = (x^T, z)^T$. We want to know the increase in $R^2$ due to adding $z$. Therefore, we decompose $w$ into $x$ and $z$. The difference in coefficients of determination is the difference of the according MSEs divided by the empirical variance of $y$:
$$R^2_{yw} - R^2_{yx} = \frac{\mathrm{mse}(y, X) - \mathrm{mse}(y, W)}{s_y^2}. \qquad (A370)$$
We further need definitions:
• $x = (x_1, x_2, \ldots, x_k)^T$.
• $w = (x_1, x_2, \ldots, x_k, z)^T = (x^T, z)^T$.
• The sample covariance between $y$ and $x$ is $s_{yx} = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) / (n-1)$, where $\bar{x} = \sum_{i=1}^{n} x_i / n$ and $\bar{y} = \sum_{i=1}^{n} y_i / n$ are the sample means. The variance of $x$ is $s_{xx}$, often written as $s_x^2$, the standard deviation squared: $s_x := \sqrt{s_{xx}}$.
• The correlation between $y$ and $x$ is $r_{yx} = s_{yx} / (s_x s_y)$.
• The covariance matrix $S_{xx}$ of a vector $x$ is the matrix with entries $[S_{xx}]_{ij} = s_{x_i x_j}$.
• The correlation matrix $R_{xx}$ of a vector $x$ is the matrix with entries $[R_{xx}]_{ij} = r_{x_i x_j}$.
• The diagonal matrix $D_x = [\mathrm{diag}(S_{xx})]^{1/2}$ has $i$th diagonal entry $\sqrt{s_{x_i x_i}} = s_{x_i}$ and is the diagonal matrix of standard deviations of the components of $x$.
• $R^2_{yw}$ is the squared multiple correlation between $y$ and $w$.
• $R^2_{yx}$ is the squared multiple correlation between $y$ and $x$.
• $R^2_{zx} = s_{zx}^T S_{xx}^{-1} s_{zx} / s_z^2 = r_{zx}^T R_{xx}^{-1} r_{zx}$ is the squared multiple correlation between $z$ and $x$.
• $r_{yz}$ is the simple correlation between $y$ and $z$: $r_{yz} = s_{yz} / (s_y s_z)$.
• $r_{yx} = (r_{yx_1}, r_{yx_2}, \ldots, r_{yx_k})^T = s_y^{-1} D_x^{-1} s_{yx}$ is the vector of correlations between $y$ and $x$.
• $r_{zx} = (r_{zx_1}, r_{zx_2}, \ldots, r_{zx_k})^T = s_z^{-1} D_x^{-1} s_{zx}$ is the vector of correlations between $z$ and $x$.
• $\hat{\beta}_{zx} = R_{xx}^{-1} r_{zx}$ is the vector of standardized regression coefficients (beta weights) of $z$ regressed on $x$.
• The parameter vector is partitioned into the constant $\beta_0$ and $\beta_1$ via $\beta = (\beta_0, \beta_1, \ldots, \beta_m)^T = (\beta_0, \beta_1^T)^T$. We have for the maximum likelihood estimate
$$\hat{\beta}_0 = \bar{y} - s_{yx}^T S_{xx}^{-1} \bar{x}, \qquad (A371)$$
$$\hat{\beta}_1 = S_{xx}^{-1} s_{yx}. \qquad (A372)$$
The offset $\hat{\beta}_0$ guarantees $\bar{\hat{y}} = \bar{y}$:
$$\bar{\hat{y}} = \frac{1}{n} \sum_{i=1}^{n} \hat{y}_i = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{\beta}_0 + \hat{\beta}_1^T x_i \right) = \bar{y} - s_{yx}^T S_{xx}^{-1} \bar{x} + \frac{1}{n} \sum_{i=1}^{n} \hat{\beta}_1^T x_i = \bar{y} - s_{yx}^T S_{xx}^{-1} \bar{x} + s_{yx}^T S_{xx}^{-1} \bar{x} = \bar{y}. \qquad (A373)$$
• The vector of standardized coefficients $\hat{\beta}_1^*$ is
$$\hat{\beta}_1^* = \frac{1}{s_y} D_x \hat{\beta}_1 = R_{xx}^{-1} r_{yx}. \qquad (A374)$$
The next theorem is Theorem 10.6 in Rencher and Schaalje [100] and gives a breakdown of the factors that determine how much each variable adds to $R^2$ [100, Chapter 10.6, p. 263].
Theorem 1 (Rencher Theorem 10.6). The increase in $R^2$ due to $z$ can be expressed as
$$R^2_{yw} - R^2_{yx} = \frac{(\hat{r}_{yz} - r_{yz})^2}{1 - R^2_{zx}}, \qquad (A375)$$
where $\hat{r}_{yz} = (\hat{\beta}_{zx})^T r_{yx}$ is a "predicted" value of $r_{yz}$ based on the relationship of $z$ to the $x$'s.
The following equality shows that $\hat{r}_{yz} = (\hat{\beta}_{zx})^T r_{yx}$ is indeed a prediction of $r_{yz}$:
$$(\hat{\beta}_{zx})^T r_{yx} = \frac{1}{s_z s_y} \frac{1}{n-1} \sum_{i=1}^{n} \left( \hat{\beta}_{zx}^T x_i - \hat{\beta}_{zx}^T \bar{x} \right) (y_i - \bar{y}) = \frac{1}{s_z s_y} \frac{1}{n-1} \sum_{i=1}^{n} (\hat{z}_i - \bar{\hat{z}})(y_i - \bar{y}) = \frac{1}{s_z s_y}\, \hat{s}_{yz} = \hat{r}_{yz}, \qquad (A376)$$
where $\hat{z}_i$ denotes the prediction of $z_i$ from $x_i$.
If $z$ is orthogonal to $x$ (i.e., if $r_{zx} = 0$), then $\hat{\beta}_{zx} = 0$, which implies that $\hat{r}_{yz} = 0$ and $R^2_{zx} = 0$. In this case, Eq. (A375) can be written as
$$R^2_{yw} - R^2_{yx} = r^2_{yz}. \qquad (A377)$$
Consequently, if all $x_i$ are independent from each other, then
$$R^2_{yx} = \sum_{j=1}^{k} r^2_{yx_j}. \qquad (A378)$$
The contribution of $z$ to $R^2$ can either be less than or greater than $r^2_{yz}$. If the correlation $r_{yz}$ can be predicted from $x$, then $\hat{r}_{yz}$ is close to $r_{yz}$ and, therefore, $z$ contributes less to $R^2$ than $r^2_{yz}$.
Next, we compute the contribution of $z$ to $R^2$ explicitly. The correlation between $y$ and $z$ is
$$r_{yz} = \frac{1}{s_z s_y} \frac{1}{n-1} \sum_{i=1}^{n} (z_i - \bar{z})(y_i - \bar{y}) = \frac{1}{s_z s_y}\, s_{yz}. \qquad (A379)$$
We assume that $\bar{z} = \bar{\hat{z}}$. We want to express the information gain using the mean squared error (MSE) $\frac{1}{n-1} \sum_{i=1}^{n} (\hat{z}_i - z_i)^2$. We define the error $e_i := \hat{z}_i - z_i$ at sample $i$, with $\bar{e} = \bar{\hat{z}} - \bar{z} = 0$. Therefore, the MSE is equal to the empirical variance $s_e^2 = \frac{1}{n-1} \sum_{i=1}^{n} e_i^2$. The correlation $r_{ey}$ between the target $y$ and the error $e$ is
$$r_{ey} = \frac{1}{s_y s_e} \frac{1}{n-1} \sum_{i=1}^{n} (e_i - \bar{e})(y_i - \bar{y}). \qquad (A380)$$
Using Eq. (A376) and Eq. (A379), we can express the difference between the estimate $\hat{r}_{yz}$ and the true correlation $r_{yz}$ by
$$\hat{r}_{yz} - r_{yz} = \frac{1}{s_z s_y} \frac{1}{n-1} \sum_{i=1}^{n} (\hat{z}_i - \bar{\hat{z}})(y_i - \bar{y}) - \frac{1}{s_z s_y} \frac{1}{n-1} \sum_{i=1}^{n} (z_i - \bar{z})(y_i - \bar{y}) = \frac{1}{s_z s_y} \frac{1}{n-1} \sum_{i=1}^{n} (\hat{z}_i - z_i)(y_i - \bar{y}). \qquad (A381)$$
The information gain can now be expressed by the correlation $r_{ey}$ between the target $y$ and the error $e$:
$$R^2_{yw} - R^2_{yx} = \frac{(\hat{r}_{yz} - r_{yz})^2}{1 - R^2_{zx}} = \frac{1}{s_z^2 s_y^2}\, \frac{\frac{1}{(n-1)^2} \left( \sum_{i=1}^{n} (\hat{z}_i - z_i)(y_i - \bar{y}) \right)^2}{\dfrac{\frac{1}{n-1} \sum_{i=1}^{n} (\hat{z}_i - z_i)^2}{\frac{1}{n-1} \sum_{i=1}^{n} (z_i - \bar{z})^2}} = \frac{1}{s_y^2}\, \frac{\frac{1}{(n-1)^2} \left( \sum_{i=1}^{n} (\hat{z}_i - z_i)(y_i - \bar{y}) \right)^2}{\frac{1}{n-1} \sum_{i=1}^{n} (\hat{z}_i - z_i)^2} = r^2_{ey}. \qquad (A382)$$
The information gain is the squared correlation $r^2_{ey}$ between the target $y$ and the error $e$. The information gain is the information in $z$ about $y$ which is not contained in $x$.
A9.2 Input Zeroing
The simplest contribution analysis method is Input Zeroing, which sets an input to zero to determine its contribution to the output. Input Zeroing sets a particular input $x_i$ to zero and then computes the network's output. For the original input $x = (x_1, \ldots, x_d)$ and the input with $x_i$ zeroed, $\tilde{x}_i = (x_1, \ldots, x_{i-1}, 0, x_{i+1}, \ldots, x_d)$, we compute $\Delta x_i = F(x) - F(\tilde{x}_i)$ to obtain the contribution of $x_i$. We obtain for the difference of $F(x)$ to the baseline of average zeroing $\frac{1}{d} \sum_{i=1}^{d} F(\tilde{x}_i)$:
$$F(x) - \frac{1}{d} \sum_{i=1}^{d} F(\tilde{x}_i) = \frac{1}{d} \sum_{i=1}^{d} \Delta x_i. \qquad (A383)$$
The problem is that the $F(\tilde{x}_i)$ have to be computed $d$ times, that is, once for each input component zeroed out.
Input Zeroing does not recognize redundant inputs, i.e., when each one of the inputs is sufficient to produce the output but the output changes if all inputs are missing at the same time. In contrast, Integrated Gradients (IG) and Layer-Wise Relevance Propagation (LRP) detect the relevance of an input even if it is redundant.
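Input Zeroing is straightforward to sketch. The toy network `F` below is made up for the example; note that for its product term both factors receive full credit when zeroed, so the per-input contributions need not sum to $F(x)$, which hints at the method's limitations.

```python
import numpy as np

def input_zeroing(F, x):
    """Contribution of each input: Delta x_i = F(x) - F(x with x_i = 0)."""
    base = F(x)
    contrib = np.empty_like(x)
    for i in range(len(x)):            # requires d extra evaluations of F
        x_zeroed = x.copy()
        x_zeroed[i] = 0.0
        contrib[i] = base - F(x_zeroed)
    return contrib

F = lambda v: float(v[0] * v[1] + v[2])   # toy network (hypothetical)
x = np.array([2.0, 3.0, 1.0])
c = input_zeroing(F, x)
```

Here $F(x) = 7$ but the contributions are $(6, 6, 1)$: zeroing either factor of the product destroys it entirely, so conservation does not hold.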
A9.3 Integrated Gradients
Integrated Gradients is a recently introduced method [125]. Integrated Gradients decomposes the difference $F(x) - F(\tilde{x})$ between the network output $F(x)$ and a baseline $F(\tilde{x})$:
$$F(x) - F(\tilde{x}) = \sum_{i=1}^{d} (x_i - \tilde{x}_i) \int_{0}^{1} \frac{\partial F}{\partial x_i}\!\left(\tilde{x} + t(x - \tilde{x})\right) \mathrm{d}t \approx \sum_{i=1}^{d} (x_i - \tilde{x}_i) \frac{1}{m} \sum_{k=1}^{m} \frac{\partial F}{\partial x_i}\!\left(\tilde{x} + (k/m)(x - \tilde{x})\right). \qquad (A384)$$
In contrast to previous approaches, we have to evaluate $F$ and its derivative only $m$ times, where $m < d$.
The equality can be seen if we define $h = x - \tilde{x}$ and
$$g: [0, 1] \rightarrow \mathbb{R}, \quad g(t) = F(\tilde{x} + t\, h). \qquad (A385)$$
Consequently, we have
$$F(x) - F(\tilde{x}) = g(1) - g(0) = \int_{0}^{1} g'(t)\, \mathrm{d}t \qquad (A386)$$
$$= \int_{0}^{1} \left( \sum_{i=1}^{d} \frac{\partial F}{\partial x_i}(\tilde{x} + t h)\, h_i \right) \mathrm{d}t = \sum_{i=1}^{d} \left( \int_{0}^{1} \frac{\partial F}{\partial x_i}(\tilde{x} + t h)\, \mathrm{d}t \right) h_i. \qquad (A387)$$
For the final reward decomposition, we obtain
$$F(x) = \sum_{i=1}^{d} \left( (x_i - \tilde{x}_i) \frac{1}{m} \sum_{k=1}^{m} \frac{\partial F}{\partial x_i}\!\left(\tilde{x} + (k/m)(x - \tilde{x})\right) + \frac{1}{d} F(\tilde{x}) \right). \qquad (A388)$$
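The Riemann approximation of Eq. (A384) can be sketched as follows. The model and its gradient are made-up closed forms so that the completeness property (attributions summing to $F(x) - F(\tilde{x})$) can be checked directly; in practice the gradient would come from automatic differentiation.

```python
import numpy as np

def integrated_gradients(F, grad_F, x, baseline, m=2000):
    """Riemann approximation of IG (Eq. A384): attribution_i =
    (x_i - baseline_i) * mean_k dF/dx_i(baseline + (k/m)(x - baseline))."""
    diff = x - baseline
    grads = np.array([grad_F(baseline + (k / m) * diff) for k in range(1, m + 1)])
    return diff * grads.mean(axis=0)

# toy model F(x) = x0^2 + 3*x1 with its exact gradient (hypothetical example)
F = lambda v: v[0]**2 + 3.0 * v[1]
grad_F = lambda v: np.array([2.0 * v[0], 3.0])
x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(F, grad_F, x, baseline)
```

With $m = 2000$ steps, the attributions sum to $F(x) - F(\tilde{x})$ up to the discretization error of order $1/m$.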
A9.4 Layer-Wise Relevance Propagation
Layer-Wise Relevance Propagation (LRP) [3] has been introduced to interpret machine learning models. LRP is an extension of the contribution-propagation algorithm [71] based on the contribution approach [94]. Recently, "excitation backprop" was proposed [147], which is like LRP but uses only positive weights and shifts the activation function to have non-negative values. Both algorithms assign a relevance or importance value to each node of a neural network which describes how much it contributed to generating the network output. The relevance or importance is recursively propagated back: a neuron is important to the network output if it has been important to its parents, and its parents have been important to the network output. LRP moves through a neural network like backpropagation: it starts at the output and redistributes the relevance scores of one layer to the previous layer until the input layer is reached. The redistribution procedure satisfies a local relevance conservation principle: all relevance values that a node obtains from its parents will be redistributed to its children. This is analogous to Kirchhoff's first law for the conservation of electric charge or, more generally, to the continuity equation for transport in physics. LRP has been used for deep neural networks (DNNs) [84] and for recurrent neural networks like LSTM [1].
We consider a neural network with activation $x_i$ for neuron $i$. The weight from neuron $l$ to neuron $i$ is denoted by $w_{il}$. The activation function is $g$, and $\mathrm{net}_i$ is the netinput to neuron $i$ with bias $b_i$. We have the following forward propagation rules:
$$\mathrm{net}_i = \sum_{l} w_{il}\, x_l, \qquad (A389)$$
$$x_i = f_i(\mathrm{net}_i) = g(\mathrm{net}_i + b_i). \qquad (A390)$$
Let $R_i$ be the relevance of neuron $i$ and $R_{i \leftarrow k}$ the share of relevance $R_k$ that flows from neuron $k$ in the higher layer to neuron $i$ in the lower layer. The parameter $z_{ik}$ is a weighting for the share of $R_k$ of neuron $k$ that flows to neuron $i$. We define $R_{i \leftarrow k}$ as
$$R_{i \leftarrow k} = \frac{z_{ik}}{\sum_{l} z_{lk}}\, R_k. \qquad (A391)$$
The relative contributions $z_{ik}$ were previously defined as [3, 84, 1]
$$z_{ik} = w_{ik}\, x_k. \qquad (A392)$$
Here, $z_{ik}$ is the contribution of $x_k$ to the netinput value $\mathrm{net}_i$. If neuron $k$ is removed from the network, then $z_{ik}$ is the difference to the original $\mathrm{net}_i$.
The relevance $R_i$ of neuron $i$ is the sum of relevances it obtains from its parents $k$ in the layer above:
$$R_i = \sum_{k} R_{i \leftarrow k}. \qquad (A393)$$
Furthermore, a unit $k$ passes on all its relevance $R_k$ to its children, which are units $i$ of the layer below:
$$R_k = \sum_{i} R_{i \leftarrow k}. \qquad (A394)$$
It follows the conservation of relevance: the sum of relevances $R_k$ of units $k$ in a layer is equal to the sum of relevances $R_i$ of units $i$ of the layer below:
$$\sum_{k} R_k = \sum_{k} \sum_{i} R_{i \leftarrow k} = \sum_{i} \sum_{k} R_{i \leftarrow k} = \sum_{i} R_i. \qquad (A395)$$
The scalar output $g(x)$ of a neural network with input $x = (x_1, \ldots, x_d)$ is considered as relevance $R$, which is decomposed into contributions $R_i$ of the inputs $x_i$:
$$\sum_{i} R_i = R = g(x). \qquad (A396)$$
The decomposition is valid for recurrent neural networks, where the relevance at the output is distributed across the sequence elements of the input sequence.
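One LRP backward step through a linear layer can be sketched as below. This is an illustrative sketch of the $z$-rule of Eq. (A391)-(A392), not the authors' code; the tiny `eps` stabilizer against vanishing netinputs is a common practical addition and an assumption here, and the weights and relevances are random toy values.

```python
import numpy as np

def lrp_linear(W, x, R_upper, eps=1e-9):
    """One LRP backward step through net = W @ x (z-rule, Eq. A391):
    the relevance R_k of each upper unit is split among the lower
    units j in proportion to their contributions z_jk = W[k, j] * x[j]."""
    Z = W * x[np.newaxis, :]                  # Z[k, j] = w_kj * x_j
    norm = Z.sum(axis=1, keepdims=True)       # netinput of each upper unit
    norm = norm + eps * np.sign(norm)         # stabilizer against zero netinputs
    return ((Z / norm) * R_upper[:, np.newaxis]).sum(axis=0)

rng = np.random.default_rng(3)
W = rng.uniform(0.1, 1.0, size=(4, 3))        # positive weights (toy values)
x = rng.uniform(0.1, 1.0, size=3)             # positive activations (toy values)
R_upper = rng.uniform(size=4)
R_lower = lrp_linear(W, x, R_upper)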
A9.4.1 New Variants of LRP
An alternative definition of $z_{ik}$ is
$$z_{ik} = w_{ik}\, (x_k - \bar{x}_k), \qquad (A397)$$
where $\bar{x}_k$ is the mean of $x_k$ across samples. Therefore, $(x_k - \bar{x}_k)$ is the contribution of the actual sample to the variance of $x_k$, which in turn is related to the information carried by $x_k$. Here, $z_{ik}$ is the contribution of $x_k$ to the variance of $\mathrm{net}_i$. However, we can have negative values of $(x_k - \bar{x}_k)$, which may lead to negative contributions even if the weights are positive.
Another alternative definition of $z_{ik}$ is
$$z_{ik} = f_i(\mathrm{net}_i) - f_i(\mathrm{net}_i - w_{ik} x_k). \qquad (A398)$$
Here, $z_{ik}$ is the contribution of $x_k$ to the activation value $x_i = f_i(\mathrm{net}_i)$. If neuron $k$ is removed from the network, then $z_{ik}$ is the difference to the original $x_i$. If $f_i$ is strictly monotone increasing and $x_k > 0$, then positive weights $w_{ik}$ lead to positive values and negative weights $w_{ik}$ to negative values.
Preferred solution: a definition of $z_{ik}$ is
$$z_{ik} = w_{ik}\, (x_k - x_{\min}), \qquad (A399)$$
where $x_{\min}$ is the minimum of $x_k$ either across samples (mini-batch) or across time steps. The difference $(x_k - x_{\min})$ is always positive. Using this definition, activation functions with negative values are possible, as for excitation backprop [147]. The minimal value is considered as a default offset, which can be included in the bias.
A9.4.2 LRP for Products
Here we define relevance propagation for products of two units. We assume that $z = x_1 x_2$ with $x_1 > 0$ and $x_2 > 0$. We view $x_1$ and $x_2$ as units of the layer below the layer in which $z$ is located. Consequently, $R_z$ has to be divided between $x_1$ and $x_2$, which gives the conservation rule
$$R_z = R_{x_1 \leftarrow z} + R_{x_2 \leftarrow z}. \qquad (A400)$$
Alternative 1:
$$R_{x_1 \leftarrow z} = 0.5\, R_z, \qquad (A401)$$
$$R_{x_2 \leftarrow z} = 0.5\, R_z. \qquad (A402)$$
The relevance is equally distributed.
Alternative 2 (preferred solution): the contributions according to the deep Taylor decomposition around $(a, a)$ are
$$\left. \frac{\partial z}{\partial x_1} \right|_{(a,a)} (x_1 - a) = (x_1 - a)\, a, \qquad (A403)$$
$$\left. \frac{\partial z}{\partial x_2} \right|_{(a,a)} (x_2 - a) = a\, (x_2 - a). \qquad (A404)$$
We compute the relative contributions:
$$\frac{(x_1 - a)\, a}{(x_1 - a)\, a + a\, (x_2 - a)} = \frac{x_1 - a}{x_1 + x_2 - 2a}, \qquad (A405)$$
$$\frac{(x_2 - a)\, a}{(x_1 - a)\, a + a\, (x_2 - a)} = \frac{x_2 - a}{x_1 + x_2 - 2a}. \qquad (A406)$$
For $\lim_{a \to 0}$ we obtain $x_1 / (x_1 + x_2)$ and $x_2 / (x_1 + x_2)$ as contributions.
We use this idea but scale $x_1$ and $x_2$ to the range $[0, 1]$:
$$R_{x_1 \leftarrow z} = \frac{\frac{x_1 - x_{\min}}{x_{\max} - x_{\min}}}{\frac{x_1 - x_{\min}}{x_{\max} - x_{\min}} + \frac{x_2 - x_{\min}}{x_{\max} - x_{\min}}}\, R_z, \qquad (A407)$$
$$R_{x_2 \leftarrow z} = \frac{\frac{x_2 - x_{\min}}{x_{\max} - x_{\min}}}{\frac{x_1 - x_{\min}}{x_{\max} - x_{\min}} + \frac{x_2 - x_{\min}}{x_{\max} - x_{\min}}}\, R_z. \qquad (A408)$$
The relevance is distributed according to how closely the maximal value is approached and how far away it is from the minimal value.
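The scaled rule of Eq. (A407)-(A408) reduces to a simple ratio. The sketch below uses toy values in $[0, 1]$ and only demonstrates the conservation rule of Eq. (A400).

```python
import numpy as np

def product_relevance(x1, x2, R_z, x_min=0.0, x_max=1.0):
    """Split R_z between the two factors of z = x1 * x2 according to
    the scaled deep-Taylor rule of Eq. A407-A408 (Alternative 2)."""
    a = (x1 - x_min) / (x_max - x_min)
    b = (x2 - x_min) / (x_max - x_min)
    return a / (a + b) * R_z, b / (a + b) * R_z

R1, R2 = product_relevance(0.8, 0.2, R_z=1.0)
```

The common scaling factor cancels, so for $x_{\min} = 0$ this is exactly the $x_1/(x_1 + x_2)$ split obtained in the limit $a \to 0$.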
Alternative 3:
$$R_{x_1 \leftarrow z} = \frac{\ln\!\left(1 - \frac{x_1 - x_{\min}}{x_{\max} - x_{\min}}\right)}{\ln\!\left(1 - \frac{x_1 - x_{\min}}{x_{\max} - x_{\min}}\right) + \ln\!\left(1 - \frac{x_2 - x_{\min}}{x_{\max} - x_{\min}}\right)}\, R_z, \qquad (A409)$$
$$R_{x_2 \leftarrow z} = \frac{\ln\!\left(1 - \frac{x_2 - x_{\min}}{x_{\max} - x_{\min}}\right)}{\ln\!\left(1 - \frac{x_1 - x_{\min}}{x_{\max} - x_{\min}}\right) + \ln\!\left(1 - \frac{x_2 - x_{\min}}{x_{\max} - x_{\min}}\right)}\, R_z. \qquad (A410)$$
All $\ln$-values are negative, therefore the fraction in front of $R_z$ is positive. $x_1 = x_{\min}$ leads to zero relevance for $x_1$. The ratio of the relevance for $x_1$ increases to 1 as $x_1$ approaches $x_{\max}$. The relevance is distributed according to how closely the maximal value is approached. We assume that the maximal value is a saturating value, therefore we use $\ln$, the natural logarithm.
A9.5 Variance Considerations for Contribution Analysis
We are interested in how the redistributed reward affects the variance of the estimators. We consider (A) the difference of consecutive predictions as the redistributed reward, (B) Integrated Gradients (IG), and (C) Layer-Wise Relevance Propagation (LRP).
For (A), the difference of consecutive predictions as the redistributed reward, all variance is moved to the final correction. However, an imperfect $g$ and variance cannot be distinguished.
For (B), Integrated Gradients (IG), the redistributed rewards depend on future values. Therefore the variance can even be larger than in the original MDP.
For (C), Layer-Wise Relevance Propagation (LRP), the variance is propagated back without decreasing or increasing if the actual return is used as relevance. If the prediction is used as relevance and a final correction is used, then the variance is moved to the final prediction, but new variance is injected since rewards depend on the future path.
A10 Reproducibility Checklist
We followed the reproducibility checklist [92] and point to relevant sections.
For all models and algorithms presented, check if you include:
•A clear description of the mathematical setting, algorithm, and/or model.
Description of mathematical settings starts at paragraph MDP Definitions and Return-
Equivalent Sequence-Markov Decision Processes (SDPs).
Description of novel learning algorithms starts at paragraph Novel Learning Algorithms
Based on Reward Redistributions.
•An analysis of the complexity (time, space, sample size) of any algorithm.
Plots in Figure 1 show the number of episodes, i.e. the sample size, which are needed for
convergence to the optimal policies. They are evaluated for different algorithms and delays
in all artificial tasks. For Atari games, the number of samples corresponds to the number of
game frames. See paragraph Atari Games. We further present a bias-variance analysis of
TD and MC learning in Section A3.1 and Section A3.2 in the appendix.
•A link to a downloadable source code, with specification of all dependencies, including
external libraries.
https://github.com/ml-jku/baselines-rudder
For any theoretical claim, check if you include:
•A statement of the result.
The main theorems:
–Theorem 1
–Theorem 2
–Theorem 3
Additional supporting theorems can be found in the proof section of the appendix A2.
•A clear explanation of any assumptions.
The proof section A2 in the appendix covers all the assumptions for the main theorems.
•A complete proof of the claim.
Proof of the main theorems are moved to the appendix.
–Proof of Theorem 1 can be found after Theorem A2 in the appendix.
–Proof of Theorem 2 can be found after Theorem A4 in the appendix.
–Proof of Theorem 3 can be found after Theorem A5 in the appendix.
Proofs for additional theorems can also be found in this appendix.
For all figures and tables that present empirical results, check if you include:
•A complete description of the data collection process, including sample size.
For artificial tasks the environment descriptions can be found in section Artificial Tasks in
the main paper. For Atari games, we use the standard sampling procedures as in OpenAI
Gym [18] (description can be found in paragraph Atari Games).
•A link to a downloadable version of the dataset or simulation environment.
Link to our repository: https://github.com/ml-jku/rudder
•An explanation of any data that were excluded, description of any pre-processing step
For Atari games, we use the standard pre-processing described in [80].
•An explanation of how samples were allocated for training / validation / testing.
For artificial tasks, description of training and evaluation are included in section A4.1 . For
Atari games, description of training and evaluation are included Section A4.1.
•The range of hyper-parameters considered, method to select the best hyper-parameter
configuration, and specification of all hyper-parameters used to generate results.
A description can be found at paragraph PPO model in the appendix.
•The exact number of evaluation runs.
For artificial tasks evaluation was performed during training runs. See Figure 1. For
Atari games see paragraph Atari Games. We also provide a more detailed description in
Section A4.1 and Section A4.2 in the appendix.
•A description of how experiments were run.
For artificial tasks, the description can be found in Section 4.
For Atari games, description starts at paragraph Atari Games. We also provide a more
detailed description in Section A4.1 and Section A4.2 in the appendix.
•A clear definition of the specific measure or statistics used to report results.
For artificial tasks, see section 4. For Atari games, see section A4.2 and the caption of Table
1. We also provide a more detailed description in Section A4.1 and Section A4.2 in the
appendix.
•Clearly defined error bars.
For artificial tasks, see caption of Figure 1, second line. For Atari games we show all runs in
Figure A8 in the appendix.
•A description of results with central tendency (e.g. mean) & variation (e.g. stddev).
An exhaustive description of the results including mean, variance and significant test, is
included in Table A1, Table A2 and Table A3 in Section A4.1 in the appendix.
•A description of the computing infrastructure used.
We distributed all runs across 2 CPUs per run and 1 GPU per 4 runs for Atari experiments.
We used various GPUs including GTX 1080 Ti, TITAN X, and TITAN V . Our algorithm
takes approximately 10 days.
A11 References
[1]L. Arras, G. Montavon, K.-R. Müller, and W. Samek. Explaining recurrent neural network
predictions in sentiment analysis. CoRR , abs/1706.07206, 2017.
[2]Y . Aytar, T. Pfaff, D. Budden, T. Le Paine, Z. Wang, and N. de Freitas. Playing hard exploration
games by watching YouTube. ArXiv , 2018.
[3]S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek. On pixel-wise
explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS
ONE , 10(7):e0130140, 2015.
[4]B. Bakker. Reinforcement learning with long short-term memory. In T. G. Dietterich, S. Becker,
and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14 , pages
1475–1482. MIT Press, 2002.
[5]B. Bakker. Reinforcement learning by backpropagation through an lstm model/critic. In
IEEE International Symposium on Approximate Dynamic Programming and Reinforcement
Learning , pages 127–134, 2007.
[6]A. Barreto, D. Borsa, J. Quan, T. Schaul, D. Silver, M. Hessel, D. Mankowitz, A. Zídek, and
R. Munos. Transfer in deep reinforcement learning using successor features and generalised
policy improvement. In 35th International Conference on Machine Learning , volume 80 of
Proceedings of Machine Learning Research , pages 501–510, 2018. ArXiv 1901.10964.
[7]A. Barreto, W. Dabney, R. Munos, J. Hunt, T. Schaul, H. P. vanHasselt, and D. Silver. Successor
features for transfer in reinforcement learning. In Advances in Neural Information Processing
Systems 30 , pages 4055–4065, 2017. ArXiv 1606.05312.
[8]A. G. Barto and T. G. Dietterich. Handbook of Learning and Approximate Dynamic Program-
ming , chapter Reinforcement Learning and Its Relationship to Supervised Learning, pages
45–63. IEEE Press, John Wiley & Sons, 2015.
[9]F. Beleznay, T. Grobler, and C. Szepesvári. Comparing value-function estimation algorithms
in undiscounted problems. Technical Report TR-99-02, Mindmaker Ltd., 1999.
[10] M. G. Bellemare, W. Dabney, and R. Munos. A distributional perspective on reinforcement
learning. In D. Precup and Y . W. Teh, editors, Proceedings of the 34th International Conference
on Machine Learning , volume 70 of Proceedings of Machine Learning Research (ICML) ,
pages 449–458. PMLR, 2017.
[11] M. G. Bellemare, Y . Naddaf, J. Veness, and M. Bowling. The Arcade learning environment: An
evaluation platform for general agents. Journal of Artificial Intelligence Research , 47:253–279,
2013.
[12] D. Berthelot, T. Schumm, and L. Metz. BEGAN: boundary equilibrium generative adversarial
networks. ArXiv e-prints , 2017.
[13] D. P. Bertsekas and J. N. Tsitsiklis. An analysis of stochastic shortest path problems. Math.
Oper. Res. , 16(3), 1991.
[14] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-dynamic programming . Athena Scientific, Belmont,
MA, 1996.
[15] I.-J. Bienaymé. Considérations à l'appui de la découverte de Laplace. Comptes Rendus de l'Académie des Sciences, 37:309–324, 1853.
[16] W. Bolton. Instrumentation and Control Systems , chapter Chapter 5 - Process Controllers,
pages 99–121. Newnes, 2 edition, 2015.
[17] V . S. Borkar. Stochastic approximation with two time scales. Systems & Control Letters ,
29(5):291–294, 1997.
[18] G. Brockman, V . Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba.
Openai gym. ArXiv , 2016.
[19] I. J. Cox, R. Fu, and L. K. Hansen. Probably approximately correct search. In Advances in
Information Retrieval Theory , pages 2–16. Springer, Berlin, Heidelberg, 2009.
[20] P. Dayan. The convergence of TD(λ) for general λ. Machine Learning, 8:341, 1992.
[21] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor,
and Y . Wu. Openai baselines. https://github.com/openai/baselines , 2017.
[22] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and
T. Darrell. Long-term recurrent convolutional networks for visual recognition and description.
ArXiv , 2014.
[23] A. D. Edwards, L. Downs, and J. C. Davidson. Forward-backward reinforcement learning.
ArXiv , 2018.
[24] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V . Mnih, T. Ward, Y . Doron, V . Firoiu,
T. Harley, I. Dunning, S. Legg, and K. Kavukcuoglu. IMPALA: Scalable distributed Deep-RL
with importance weighted actor-learner architectures. In J. Dy and A. Krause, editors, Proceed-
ings of the 35th International Conference on Machine Learning , 2018. ArXiv: 1802.01561.
[25] Z. Feinstein. Continuity properties and sensitivity analysis of parameterized fixed points and
approximate fixed points. Technical report, Operations Research and Financial Engineering
Laboratory, Washington University in St. Louis, 2016. preprint.
[26] M. Fortunato, M. G. Azar, B. Piot, J. Menick, I. Osband, A. Graves, V . Mnih, R. Munos,
D. Hassabis, O. Pietquin, C. Blundell, and S. Legg. Noisy networks for exploration. ArXiv ,
2018. Sixth International Conference on Learning Representations (ICLR).
[27] M. Frigon. Fixed point and continuation results for contractions in metric and Gauge spaces.
Banach Center Publications , 77(1):89–114, 2007.
[28] J. Fu, K. Luo, and S. Levine. Learning robust rewards with adversarial inverse reinforcement
learning. ArXiv , 2018. Sixth International Conference on Learning Representations (ICLR).
[29] J. T. Geiger, Z. Zhang, F. Weninger, B. Schuller, and G. Rigoll. Robust speech recognition
using long short-term memory recurrent neural networks for hybrid acoustic modelling. In
Proc. 15th Annual Conf. of the Int. Speech Communication Association (INTERSPEECH
2014) , pages 631–635, Singapore, September 2014.
[30] F. A. Gers and J. Schmidhuber. Recurrent nets that time and count. In Proc. Int. Joint Conf. on
Neural Networks (IJCNN 2000) , volume 3, pages 189–194. IEEE, 2000.
[31] F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with
LSTM. In Proc. Int. Conf. on Artificial Neural Networks (ICANN ’99) , pages 850–855,
Edinburgh, Scotland, 1999.
[32] F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with
LSTM. Neural Comput. , 12(10):2451–2471, 2000.
[33] Irène Gijbels. Censored data. Wiley Interdisciplinary Reviews: Computational Statistics ,
2(2):178–188, 2010.
[34] R. Givan, T. Dean, and M. Greig. Equivalence notions and model minimization in Markov
decision processes. Artificial Intelligence , 147(1):163–223, 2003.
[35] J. Gonzalez-Dominguez, I. Lopez-Moreno, H. Sak, J. Gonzalez-Rodriguez, and P. Moreno.
Automatic language identification using long short-term memory recurrent neural networks.
InProc. 15th Annual Conf. of the Int. Speech Communication Association (INTERSPEECH
2014) , pages 2155–2159, Singapore, September 2014.
[36] A. Goyal, P. Brakel, W. Fedus, T. Lillicrap, S. Levine, H. Larochelle, and Y . Bengio. Recall
traces: Backtracking models for efficient reinforcement learning. ArXiv , 2018.
96
[37] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel
connectionist system for improved unconstrained handwriting recognition. IEEE Trans. Pattern
Anal. Mach. Intell. , 31(5):855–868, 2009.
[38] A. Graves, A.-R. Mohamed, and G. E. Hinton. Speech recognition with deep recurrent neural
networks. In Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP
2013) , pages 6645–6649, Vancouver, BC, 2013.
[39] A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM
and other neural network architectures. Neural Networks , 18(5-6):602–610, 2005.
[40] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber. LSTM: A
search space odyssey. ArXiv , 2015.
[41] S. Grünewälder and K. Obermayer. The optimal unbiased value estimator and its relation to
LSTD, TD and MC. Machine Learning , 83(3):289–330, 2011.
[42] D. Ha and J. Schmidhuber. World models. ArXiv , 2018.
[43] A. Harutyunyan, S. Devlin, P. Vrancx, and A. Now’e. Expressing arbitrary reward functions
as potential-based advice. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial
Intelligence (AAAI’15) , pages 2652–2658, 2015.
[44] M. J. Hausknecht and P. Stone. Deep recurrent Q-Learning for partially observable MDPs.
ArXiv , 2015.
[45] N. Heess, G. Wayne, Y . Tassa, T. P. Lillicrap, M. A. Riedmiller, and D. Silver. Learning and
transfer of modulated locomotor controllers. ArXiv , 2016.
[46] P. Hernandez-Leal, B. Kartal, and M. E. Taylor. Is multiagent deep reinforcement learning the
answer or the question? A brief survey. ArXiv , 2018.
[47] M. Hessel, J. Modayil, H. van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot,
M. G. Azar, and D. Silver. Rainbow: Combining improvements in deep reinforcement learning.
ArXiv , 2017.
[48] S. Hochreiter. Implementierung und Anwendung eines ‘neuronalen’ Echtzeit-Lernalgorithmus
für reaktive Umgebungen. Practical work, Supervisor: J. Schmidhuber, Institut für Informatik,
Technische Universität München, 1990.
[49] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master’s thesis, Technis-
che Universität München, 1991.
[50] S. Hochreiter. Recurrent neural net learning and vanishing gradient. In C. Freksa, editor, Proc.
Fuzzy-Neuro-Systeme ’97 , pages 130–137, Sankt Augustin, 1997. INFIX.
[51] S. Hochreiter. The vanishing gradient problem during learning recurrent neural nets and
problem solutions. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems , 6(2):107–116,
1998.
[52] S. Hochreiter, Y . Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the
difficulty of learning long-term dependencies. In J. F. Kolen and S. C. Kremer, editors, A Field
Guide to Dynamical Recurrent Networks . IEEE Press, 2001.
[53] S. Hochreiter, M. Heusel, and K. Obermayer. Fast model-based protein homology detection
without alignment. Bioinformatics , 23(14):1728–1736, 2007.
[54] S. Hochreiter and J. Schmidhuber. Long short-term memory. Technical Report FKI-207-95,
Fakultät für Informatik, Technische Universität München, 1995.
[55] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput. , 9(8):1735–1780,
1997.
97
[56] S. Hochreiter and J. Schmidhuber. LSTM can solve hard long time lag problems. In M. C.
Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing
Systems 9 , pages 473–479, Cambridge, MA, 1997. MIT Press.
[57] S. Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient
descent. In G. Dorffner, H. Bischof, and K. Hornik, editors, Proc. Int. Conf. on Artificial
Neural Networks (ICANN 2001) , pages 87–94. Springer, 2001.
[58] D. Horgan, J. Quan, D. Budden, G. Barth-Maron, M. Hessel, H. van Hasselt, and D. Silver.
Distributed prioritized experience replay. ArXiv , 2018. Sixth International Conference on
Learning Representations (ICLR).
[59] C. Hung, T. Lillicrap, J. Abramson, Y . Wu, M. Mirza, F. Carnevale, A. Ahuja, and G. Wayne.
Optimizing agent behavior over long time scales by transporting value. ArXiv , 2018.
[60] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis . John Wiley & Sons,
New York, 2001.
[61] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the convergence of stochastic iterative dynamic
programming algorithms. Neural Computation , 6(6):1185–1201, 1994.
[62] J. Jachymski. Continuous dependence of attractors of iterated function systems. Journal Of
Mathematical Analysis And Applications , 198(0077):221–226, 1996.
[63] G. H. John. When the best move isn’t optimal: q-learning with exploration. In Proceedings of
the 10th Tenth National Conference on Artificial Intelligence, Menlo Park, CA, 1994. AAAI
Press. , page 1464, 1994.
[64] P. Karmakar and S. Bhatnagar. Two time-scale stochastic approximation with controlled
Markov noise and off-policy temporal-difference learning. Mathematics of Operations Re-
search , 2017.
[65] N. Ke, A. Goyal, O. Bilaniuk, J. Binas, M. Mozer, C. Pal, and Y . Bengio. Sparse attentive back-
tracking: Temporal credit assignment through reminding. In Advances in Neural Information
Processing Systems 31 , pages 7640–7651, 2018.
[66] P. Khandelwal, E. Liebman, S. Niekum, and P. Stone. On the analysis of complex backup
strategies in Monte Carlo Tree Search. In International Conference on Machine Learning ,
pages 1319–1328, 2016.
[67] E. Kirr and A. Petrusel. Continuous dependence on parameters of the fixed point set for some
set-valued operators. Discussiones Mathematicae Differential Inclusions , 17:29–41, 1997.
[68] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In European Conference on
Machine Learning , pages 282–293. Springer, 2006.
[69] J. Koutník, G. Cuccu, J. Schmidhuber, and F. Gomez. Evolving large-scale neural networks
for vision-based reinforcement learning. In Proceedings of the 15th Annual Conference on
Genetic and Evolutionary Computation , GECCO ’13, pages 1061–1068, 2013.
[70] M. Kwiecinski. A note on continuity of fixed points. Universitatis Iagellonicae Acta Mathe-
matica , 29:19–24, 1992.
[71] W. Landecker, M. D. Thomure, L. M. A. Bettencourt, M. Mitchell, G. T. Kenyon, and S. P.
Brumby. Interpreting individual classifications of hierarchical networks. In IEEE Symposium
on Computational Intelligence and Data Mining (CIDM) , pages 32–38, 2013.
[72] T. Lattimore and C. Szepesvá. Bandit Algorithms . Cambridge University Press, 2018. Draft
of 28th July, Revision 1016.
[73] L. Li, T. J. Walsh, and M. L. Littman. Towards a unified theory of state abstraction for MDPs.
InNinth International Symposium on Artificial Intelligence and Mathematics (ISAIM) , 2006.
[74] L. Lin. Reinforcement Learning for Robots Using Neural Networks . PhD thesis, Carnegie
Mellon University, Pittsburgh, 1993.
98
[75] G. Lugosi. Concentration-of-measure inequalities. In Summer School on Machine Learning at
the Australian National University,Canberra , 2003. Lecture notes of 2009.
[76] J. Luoma, S. Ruutu, A. W. King, and H. Tikkanen. Time delays, competitive interdependence,
and firm performance. Strategic Management Journal , 38(3):506–525, 2017.
[77] S. Mannor, D. Simester, P. Sun, and J. N. Tsitsiklis. Bias and variance approximation in value
function estimates. Management Science , 53(2):308–322, 2007.
[78] E. Marchi, G. Ferroni, F. Eyben, L. Gabrielli, S. Squartini, and B. Schuller. Multi-resolution
linear prediction based features for audio onset detection with bidirectional LSTM neural
networks. In Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP
2014) , pages 2164–2168, Florence, May 2014.
[79] V . A. Mar ˘oenko and L. A. Pastur. Distribution of eigenvalues or some sets of random matrices.
Mathematics of the USSR-Sbornik , 1(4):457, 1967.
[80] V . Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and
K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In M. F. Bal-
can and K. Q. Weinberger, editors, Proceedings of the 33rd International Conference on
Machine Learning (ICML) , volume 48 of Proceedings of Machine Learning Research , pages
1928–1937. PMLR, 2016.
[81] V . Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A.
Riedmiller. Playing Atari with deep reinforcement learning. ArXiv , 2013.
[82] V . Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves,
M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou,
H. King, D. Kumaran, D. Wierstra, S. Legg, , and D. Hassabis. Human-level control through
deep reinforcement learning. Nature , 518(7540):529–533, 2015.
[83] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K.-R. Müller. Explaining nonlinear
classification decisions with deep Taylor decomposition. Pattern Recognition , 65:211 – 222,
2017.
[84] G. Montavon, W. Samek, and K.-R. Müller. Methods for interpreting and understanding deep
neural networks. Digital Signal Processing , 73:1–15, 2017.
[85] A. W. Moore and C. G. Atkeson. Prioritized sweeping: Reinforcement learning with less data
and less time. Machine Learning , 13(1):103–130, 1993.
[86] P. W. Munro. A dual back-propagation scheme for scalar reinforcement learning. In Pro-
ceedings of the Ninth Annual Conference of the Cognitive Science Society, Seattle, WA , pages
165–176, 1987.
[87] A. Y . Ng, D. Harada, and S. J. Russell. Policy invariance under reward transformations: Theory
and application to reward shaping. In Proceedings of the Sixteenth International Conference
on Machine Learning (ICML’99) , pages 278–287, 1999.
[88] B. O’Donoghue, I. Osband, R. Munos, and V . Mnih. The uncertainty Bellman equation and
exploration. ArXiv , 2017.
[89] S. D. Patek. Stochastic and shortest path games: theory and algorithms . PhD thesis, Mas-
sachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science,
1997.
[90] J. Peng and R. J. Williams. Incremental multi-step q-learning. Machine Learning , 22(1):283–
290, 1996.
[91] J. Peters and S. Schaal. Reinforcement learning by reward-weighted regression for operational
space control. In Proceedings of the 24th International Conference on Machine Learning ,
pages 745–750, 2007.
[92] J. Pineau. The machine learning reproducibility checklist, 2018.
99
[93] T. Pohlen, B. Piot, T. Hester, M. G. Azar, D. Horgan, D. Budden, G. Barth-Maron, H. van
Hasselt, J. Quan, M. Ve ˇcerík, M. Hessel, R. Munos, and O. Pietquin. Observe and look further:
Achieving consistent performance on Atari. ArXiv , 2018.
[94] B. Poulin, R. Eisner, D. Szafron, P. Lu, R. Greiner, D. S. Wishart, A. Fyshe, B. Pearcy,
C. MacDonell, and J. Anvik. Visual explanation of evidence in additive classifiers. In
Proceedings of the 18th Conference on Innovative Applications of Artificial Intelligence (IAAI) ,
volume 2, pages 1822–1829, 2006.
[95] M. L. Puterman. Markov decision processes. In Stochastic Models , volume 2 of Handbooks in
Operations Research and Management Science , chapter 8, pages 331–434. Elsevier, 1990.
[96] M. L. Puterman. Markov Decision Processes . John Wiley & Sons, Inc., 2005.
[97] H. Rahmandad, N. Repenning, and J. Sterman. Effects of feedback delay on learning. System
Dynamics Review , 25(4):309–338, 2009.
[98] B. Ravindran and A. G. Barto. Symmetries and model minimization in Markov decision
processes. Technical report, University of Massachusetts, Amherst, MA, USA, 2001.
[99] B. Ravindran and A. G. Barto. SMDP homomorphisms: An algebraic approach to abstraction
in semi-Markov decision processes. In Proceedings of the 18th International Joint Conference
on Artificial Intelligence (IJCAI’03) , pages 1011–1016, San Francisco, CA, USA, 2003.
Morgan Kaufmann Publishers Inc.
[100] A. C. Rencher and G. B. Schaalje. Linear Models in Statistics . John Wiley & Sons, Hoboken,
New Jersey, 2 edition, 2008. ISBN 978-0-471-75498-5.
[101] A. J. Robinson. Dynamic Error Propagation Networks . PhD thesis, Trinity Hall and Cambridge
University Engineering Department, 1989.
[102] T. Robinson and F. Fallside. Dynamic reinforcement driven error propagation networks with
application to game playing. In Proceedings of the 11th Conference of the Cognitive Science
Society, Ann Arbor , pages 836–843, 1989.
[103] J. Romoff, A. Piché, P. Henderson, V . Francois-Lavet, and J. Pineau. Reward estimation for
variance reduction in deep reinforcement learning. ArXiv , 2018.
[104] M. Rudelson and R. Vershynin. Non-asymptotic theory of random matrices: extreme singular
values. ArXiv , 2010.
[105] G. A. Rummery and M. Niranjan. On-line q-learning using connectionist systems. Technical
Report TR 166, Cambridge University Engineering Department, 1994.
[106] H. Sahni. Reinforcement learning never worked, and ’deep’ only helped a bit. himanshusahni.
github.io/2018/02/23/reinforcement-learning-never-worked.html , 2018.
[107] H. Sak, A. Senior, and F. Beaufays. Long short-term memory recurrent neural network
architectures for large scale acoustic modeling. In Proc. 15th Annual Conf. of the Int. Speech
Communication Association (INTERSPEECH 2014) , pages 338–342, Singapore, September
2014.
[108] S. Schaal. Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences ,
3(6):233–242, 1999.
[109] T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. ArXiv , 2015.
[110] J. Schmidhuber. Making the world differentiable: On using fully recurrent self-supervised
neural networks for dynamic reinforcement learning and planning in non-stationary environ-
ments. Technical Report FKI-126-90 (revised), Institut für Informatik, Technische Universität
München, 1990. Experiments by Sepp Hochreiter.
100
[111] J. Schmidhuber. Reinforcement learning in markovian and non-markovian environments. In
R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information
Processing Systems 3 , pages 500–506. San Mateo, CA: Morgan Kaufmann, 1991. Pole
balancing experiments by Sepp Hochreiter.
[112] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks , 61:85–117,
2015.
[113] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization.
In32st International Conference on Machine Learning (ICML) , volume 37 of Proceedings of
Machine Learning Research , pages 1889–1897. PMLR, 2015.
[114] J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel. High-dimensional continuous
control using generalized advantage estimation. ArXiv , 2015. Fourth International Conference
on Learning Representations (ICLR’16).
[115] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization
algorithms. ArXiv , 2018.
[116] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrit-
twieser, I. Antonoglou, V . Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham,
N. Kalchbrenner, I. Sutskever, T. P. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and
D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature ,
529(7587):484–489, 2016.
[117] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre,
D. Kumaran, T. Graepel, T. P. Lillicrap, K. Simonyan, and D. Hassabis. Mastering Chess and
Shogi by self-play with a general reinforcement learning algorithm. ArXiv , 2017.
[118] S. Singh, T. Jaakkola, M. Littman, and C. Szepesvári. Convergence results for single-step
on-policy reinforcement-learning algorithms. Machine Learning , 38:287–308, 2000.
[119] S. P. Singh and R. S. Sutton. Reinforcement learning with replacing eligibility traces. Machine
Learning , 22:123–158, 1996.
[120] B. F. Skinner. Reinforcement today. American Psychologist , 13(3):94–99, 1958.
[121] M. J. Sobel. The variance of discounted Markov decision processes. Journal of Applied
Probability , 19(4):794–802, 1982.
[122] A. Soshnikov. A note on universality of the distribution of the largest eigenvalues in certain
sample covariance matrices. J. Statist. Phys. , 108(5-6):1033–1056, 2002.
[123] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representa-
tions using LSTMs. ArXiv , 2015.
[124] P.-H. Su, D. Vandyke, M. Gasic, N. Mrksic, T.-H. Wen, and S. Young. Reward shaping with
recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems.
InProceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and
Dialogue , pages 417–421. Association for Computational Linguistics, 2015.
[125] M. Sundararajan, A. Taly, and Q. Yan. Axiomatic attribution for deep networks. ArXiv , 2017.
[126] I. Sutskever, O. Vinyals, and Q. V . V . Le. Sequence to sequence learning with neural networks.
In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors,
Advances in Neural Information Processing Systems 27 (NIPS’13) , pages 3104–3112. Curran
Associates, Inc., 2014.
[127] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning ,
3:9–44, 1988.
[128] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction . MIT Press, Cambridge,
MA, 2 edition, 2018.
101
[129] A. Tamar, D. DiCastro, and S. Mannor. Policy gradients with variance related risk criteria.
In J. Langford and J. Pineau, editors, Proceedings of the 29th International Conference on
Machine Learning (ICML’12) , 2012.
[130] A. Tamar, D. DiCastro, and S. Mannor. Learning the variance of the reward-to-go. Journal of
Machine Learning Research , 17(13):1–36, 2016.
[131] P. Tchebichef. Des valeurs moyennes. Journal de mathématiques pures et appliquées 2 ,
12:177–184, 1867.
[132] P. Tseng. Solving h-horizon, stationary Markov decision problems in time proportional to
log(h).Operations Research Letters , 9(5):287–297, 1990.
[133] J. N. Tsitsiklis. Asynchronous stochastic approximation and q-learning. Machine Learning ,
16(3):185–202, 1994.
[134] H. van Hasselt. Double q-learning. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S.
Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23 , pages
2613–2621. Curran Associates, Inc., 2010.
[135] H. van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning.
InProceedings of the Thirtieth AAAI Conference on Artificial Intelligence , pages 2094–2100.
AAAI Press, 2016.
[136] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. J. Mooney, and K. Saenko. Translating
videos to natural language using deep recurrent neural networks. ArXiv , 2014.
[137] A. Veretennikov. Ergodic Markov processes and poisson equations (lecture notes). ArXiv ,
2016.
[138] Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement
learning. ArXiv , 2015.
[139] Z. Wang, T. Schaul, M. Hessel, H. Hasselt, M. Lanctot, and N. de Freitas. Dueling network
architectures for deep reinforcement learning. In M. F. Balcan and K. Q. Weinberger, editors,
Proceedings of the 33rd International Conference on Machine Learning (ICML) , volume 48 of
Proceedings of Machine Learning Research , pages 1995–2003. PMLR, 2016.
[140] C. J. C. H. Watkins. Learning from Delayed Rewards . PhD thesis, King’s College, 1989.
[141] C. J. C. H. Watkins and P. Dayan. Q-Learning. Machine Learning , 8:279–292, 1992.
[142] P. J. Werbos. A menu of designs for reinforcement learning over time. In W. T. Miller, R. S.
Sutton, and P. J. Werbos, editors, Neural Networks for Control , pages 67–95. MIT Press,
Cambridge, MA, USA, 1990.
[143] E. Wiewiora. Potential-based shaping and q-value initialization are equivalent. Journal of
Artificial Intelligence Research , 19:205–208, 2003.
[144] E. Wiewiora, G. Cottrell, and C. Elkan. Principled methods for advising reinforcement learning
agents. In Proceedings of the Twentieth International Conference on International Conference
on Machine Learning (ICML’03) , pages 792–799, 2003.
[145] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement
learning. Machine Learning , 8(3):229–256, 1992.
[146] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. ArXiv ,
2014.
[147] J. Zhang, Z. L. Lin, J. Brandt, X. Shen, and S. Sclaroff. Top-down neural attention by excitation
backprop. In Proceedings of the 14th European Conference on Computer Vision (ECCV) ,
pages 543–559, 2016. part IV .
102
February 2022 Open Thread
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Effective Altruism Forum
New Speaker Series on AI Alignment Starting March 3
After talking with a lot of people in Alignment, I think there is still a lot of good to be done for idea diffusion at the object/technical level. We seem to have done a lot of outreach presenting the philosophical arguments, but less so on the technical front.
Since the field of Alignment is quite diverse and nuanced, we think it would be good to present how different people approach this problem on different frontiers. For example, Anthropic's empirical approach might be very different from, say, Christiano's theoretical thinking on ELK. Navigating the landscape of alignment is therefore essential for building an inside view. I suppose having a good grasp/inside view of alignment would actually be useful for community builders, as they could better channel promising people to resources that fit their interests/philosophy. (Think of the community builder as an advisor who has projects of different flavors for the students.)
Motivated to diffuse more state-of-the-art alignment research ideas to EAs and promising non-EAs who would find alignment interesting and important (e.g. most top students in philosophy/maths/physics/CS), we have planned a new series on alignment starting next Thursday, March 3.
We will kick off the series with Holden Karnofsky on "the most important century" and Joseph Carlsmith on the report of "power-seeking AI". Later on in the series, we will hear about more technical proposals for alignment from Jared Kaplan, Paul Christiano, and more.
Here is the [detailed schedule](https://www.harvardea.org/agathon) and [sign-up form](https://airtable.com/app9DvJlYIs9ZS9ud/tblDVy3vIc7dF4vSr/viwnILMRC2fHi3uv8?blocks=hide).
Please use the comment threads below for discussions of the series.
Credit Cards for Giving
The standard advice for how to physically make a donation is something like: if you're donating less than ~$1,000 use a credit card, otherwise use a check or other method with lower fees. For example, GiveWell writes:
> We recommend that gifts up to $1,000 be made online by credit card. If you are giving more than $1,000, please consider one of these alternatives: Check, Bank Transfer, ...
And the Against Malaria Foundation writes:
> However you prefer to make a donation is fine.
> All other things being equal, we prefer:
> * For donations less than US$5,000, an online donation using credit or debit card
> * For donations more than US$5,000, an offline donation - by bank transfer or by mail (cheque/check) - to eliminate fees.
This makes sense: if the charity is paying 2.25% on your donation, that's $22.50 on a $1,000 donation, and $112.50 on a $5,000 one.
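To make the fee arithmetic concrete, here is a small sketch; the 2.25% rate is the figure discussed above, and the function name is my own illustration:

```python
# Sketch of the fee math above: what a charity nets from a card donation,
# with and without a platform (e.g. Facebook) covering the processing fee.
# The 2.25% default is the rate discussed in the post.

def charity_nets(donation: float, fee_rate: float = 0.0225,
                 platform_covers_fees: bool = False) -> float:
    """Return the amount the charity actually receives."""
    if platform_covers_fees:
        return donation  # the platform absorbs the processing fee
    return donation * (1 - fee_rate)

for amount in (1_000, 5_000):
    print(f"${amount:,} at 2.25% fees -> charity nets ${charity_nets(amount):,.2f}")
```

The asymmetry is the whole argument: the fee scales linearly with the donation, so card convenience gets expensive for large gifts unless a fee-covering platform is available.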
The ideal, though, would to be able to donate with a credit card, get credit card rewards, but still have the charity get the full amount. I wouldn't expect this to exist, but it turns out that it does! There are platforms (ex: Facebook) which will cover the credit card fees for charities, so that if you donate $1,000 the charity receives the full $1,000. Not only that, but your purchase is eligible for regular credit card rewards, so 1-3% savings over sending a check.
The main downside of this approach is that you generally can't direct your donation. You can make a donation to the Malaria Consortium, but not to their Seasonal Malaria Chemoprevention program that GiveWell recommends. Likewise, you can donate to the Centre for Effective Altruism but not a specific EA fund.
Again, however, there is an exception: the EA Giving Tuesday team has been coordinating with EA charities to create FB fundraisers for specific programs. At least for 2019 these were only open for a couple weeks leading up to Giving Tuesday, but during that window you could use your credit card for no-fee donations to specific programs at
Where do dualists go wrong?
The latest zombie debates have made me reflect on how a purely philosophical dispute may lack any resolution and yet be harmless as long as the consequences of a 'confused' position don't leak out into the world. Hence, I've brainstormed a short list of practical ways in which dualists may go wrong. (And yes, one or two of the 'dualist errors' are standard LW positions!)
1. Undue skepticism about the minds of artificial intelligences, leading to the possibility of prejudice. (Jaron Lanier assumes, as a matter of faith, that people are metaphysically special in a way that no AI could possibly be. His belief would have the potential to become ethically monstrous precisely at the point when AGI emerged.)
2. A desire to answer the misguided question: "What evolutionary benefit is conferred by phenomenal consciousness (as distinct from the merely 'functional' abilities to learn, represent the environment, introspect on one's own state etc)?"
3. A belief that the 'all or nothing' nature of consciousness is detectable somehow. That psychology is incomplete until it tracks down 'neural correlates of consciousness' that respect this 'all or nothingness', and can resolve all of Dennett's Orwellian vs Stalinesque disputes.
4. Grave skepticism about 'simulationism' leading to skewed ethics / decision theory. For instance, believing that it would be or might be catastrophic if the earth were replaced by a perfectly and reliably simulated earth.
5. Belief in transtemporal identity leading to skewed ethics / decision theory: Believing that one would 'die' in a teleporter. Vastly overestimating the value of cryonics. (Yes, I think these two are sides of the same coin.)
6. Belief in the existence of a duality between 'mental states' and the underlying physical state suggesting 'utilitarianism' in which all moral value is supervenient on mental states, irrespective of the wider universe.
7. Being under the mistaken impression that there is something meritorious, useful
Meetup : London Social Meetup, 24/11/2013 [Back to the Shakespeare's Head]
Discussion article for the meetup : London Social Meetup, 24/11/2013 [Back to the Shakespeare's Head]
WHEN: 24 November 2013 02:00:00 PM (+0000)
WHERE: The Shakespeare's Head, Africa House, 64-68 Kingsway, London WC2B 6BG, UK
LessWrong London is having another meetup this Sunday (24/11) at 2:00 PM. We are back to our usual venue - The Shakespeare's Head by Holborn tube station.
There is no fixed topic of discussion nor is there anything planned so be prepared for anything. There will be a sign identifying us (hopefully one with a paperclip) and if you have any problems feel free to contact me by e-mail - Tenoke[at]Tenoke.com or by phone - 07425168803.
Reminder: The LW London Meetups are currently a weekly event - Every Sunday at 2:00 PM!
If you want more information about the meetup or anything else come by our google group or alternatively to the facebook group
Discussion article for the meetup : London Social Meetup, 24/11/2013 [Back to the Shakespeare's Head]
Meetup : Rationality Meetup Vienna
Discussion article for the meetup : Rationality Meetup Vienna
WHEN: 18 June 2016 03:00:00PM (+0200)
WHERE: Kaisermühlenstraße 24
Agenda:
15:00 - 15:30 arrival and social time
15:30 official start with an introduction round
16:00 defining the topic(s) of the day (might be one big presentation or open microphone, which means everyone can offer short talks or topics for discussion)
around 18:00 leaving for dinner, cleaning up together beforehand
19:00 dinner
Follow this link for directions: https://www.facebook.com/events/447851665414100/permalink/455616414637625/
Topics proposed so far: Konstantin Orshulevich - Game Theory
Event on Facebook: https://www.facebook.com/events/636719113161659/ (look here for updated information!)
Discussion article for the meetup : Rationality Meetup Vienna
Lesswrong Community's How-Tos and Recommendations
The Lesswrong community is often a dependable source of recommendations, network help, and advice. When I'm looking for a book or learning material on a topic I'll often try and search here to see what residents have found useful. Similarly, social advice, anecdotes and explanations as seen from the point of view of the community have regularly been insightful or eye-opening. The prototypical examples of such articles are, on top of my head :
http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/
http://lesswrong.com/lw/453/procedural_knowledge_gaps/
the topics of which are neatly listed on
http://lesswrong.com/lw/a08/topics_from_procedural_knowledge_gaps/
And lately
http://lesswrong.com/r/discussion/lw/c6y/why_do_people/
the latter prompted me to write this article. We don't keep track of such resources as far as I know. This probably belongs in the wiki as well.
Other potentially useful resources were:
http://lesswrong.com/lw/12d/recommended_reading_for_new_rationalists/
http://lesswrong.com/lw/2kk/book_recommendations/
http://lesswrong.com/lw/2ua/recommended_reading_for_friendly_ai_research/
math learning
http://lesswrong.com/lw/9qq/what_math_should_i_learn/
http://lesswrong.com/lw/8js/what_mathematics_to_learn/
http://lesswrong.com/lw/a54/seeking_education/
misc learning
http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/
http://lesswrong.com/lw/4yv/i_want_to_learn_programming/
http://lesswrong.com/lw/3qr/i_want_to_learn_economics/
http://lesswrong.com/lw/3us/i_want_to_learn_about_education/
http://lesswrong.com/lw/8e3/which_fields_of_learning_have_clarified_your/
social
http://lesswrong.com/lw/6ey/learning_how_to_explain_things/
http://lesswrong.com/lw/818/how_to_understand_people_better/
http://lesswrong.com/lw/6tb/developing_empathy/
community
http://lesswrong.com/lw/929/less_wrong_mentoring_network/
http://lesswrong.com/lw/7hi/free_research_help_editing_and_article_downloads/
Employment
|
186e3064-2792-42e6-a9f3-20f550bb0965
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Learning Normativity: Language
Abram Demski has been writing about Normativity. The suggested models so far have mostly looked at actions rather than semantics, despite suggestions that this is possible and language learning as a motivating example. There is a simple mechanism that seems to me to mostly fit that bill.
Model
There is an interpreter and an assumed speaker. The interpreter receives observation data as an input, which contains some things put there by the speaker among others. The interpreter has a language in which it can express all its beliefs. Since we want to form beliefs about meaning, this language can talk about meaning: it can form propositions of the form "[Observation] means [proposition].". Note that this is different from "[Observation] implies [proposition].". At least initially, "means" is not related to anything else. The interpreter also has an epistemology module that forms its beliefs about things other than meaning.
We follow a simple prior-update paradigm. We start out with a prior over hypotheses about meaning. Such a hypothesis generates a probability distribution over all propositions of the form "[Observation] means [proposition]." for each observation (including the possibility that the observation means nothing). Updating is based on the principle that the speaker is authoritative about what he means. Our interpretation of what he's saying should make the things we interpret him to say about what he's saying true. To update on an observation history, first we compute for each observation in it our summed prior distribution over what it means. Then, for each hypothesis in the prior, for each observation, take the hypothesis-distribution over its meaning, combine it with the prior-distribution over all the other observations, and calculate the probability that the speaker's statements about what he meant were right. After you've done that for all observations, multiply them to get the score of that hypothesis. Multiply each hypothesis's score with its prior an
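The update loop described above can be sketched in Python. This is a heavily simplified toy (all names and the `consistency` interface are my own, and it scores each observation's interpretation independently rather than jointly over the whole history as the scheme specifies):

```python
# Toy sketch of the prior-update paradigm: hypotheses map observations to
# distributions over meanings, and each hypothesis is scored by how much
# probability it puts on interpretations under which the speaker's own
# claims about meaning come out true.

def update(prior, hypotheses, observations, consistency):
    """prior: {name: p}; hypotheses: {name: fn(obs) -> {meaning: p}};
    consistency(meaning, observations) -> probability that the speaker's
    meta-statements hold under that interpretation."""
    scores = {}
    for name, hyp in hypotheses.items():
        score = prior[name]
        for obs in observations:
            # Mass on interpretations that make the speaker come out right.
            score *= sum(p * consistency(m, observations)
                         for m, p in hyp(obs).items())
        scores[name] = score
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Two rival hypotheses: utterances mean themselves, or their reversal.
hypotheses = {
    "literal": lambda obs: {obs: 1.0},
    "inverted": lambda obs: {obs[::-1]: 1.0},
}
# A stand-in consistency check favouring recognisable meanings.
consistency = lambda meaning, _obs: 0.9 if meaning in ("yes", "no") else 0.1
posterior = update({"literal": 0.5, "inverted": 0.5}, hypotheses,
                   ["yes", "no"], consistency)
```

After two self-consistent utterances, nearly all posterior mass lands on the literal hypothesis.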
|
d44d6f09-c05d-4676-8844-9bfdfe50dfe9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Subspace Rerouting: Using Mechanistic Interpretability to Craft Adversarial Attacks against Large Language Models
Warning, this head is easily distracted by adversarial perturbations and should not be relied on to ensure safety.
Code and notebooks available here: https://github.com/Sckathach/subspace-rerouting.
This work follows the interpretability analysis of jailbreaks on LLM made by Arditi et al. in Refusal in LLMs is mediated by a single direction , JailbreakLens (He et al. 2024), and my previous failed attempt on the subject. It adapts the Greedy Coordinate Gradient (GCG) (Zou et al. 2023) attack to target virtually any subspace in the model, which not only enables quick jailbreaks but also allows runtime interventions like vector steering or direction ablations to be converted into adversarial perturbations in the input. Perturbations that trigger desired behaviors without further intervention.
And sometimes, the perturbation is interpretable!
Removing the word "gracefully" makes the jailbreak fail, as expected from a "slightly robust" model:
This run also yielded perturbations like: "gently ensuring safety", and more general incitations to perform the task in a "safe" manner.
I've published a paper with the technical details: https://arxiv.org/abs/2503.06269.
The goal of this post is to present the mechanistic interpretability component of the work, along with some findings I didn't include in the paper. It's organized into several sections:
* Preliminaries that briefly introduce "classical" gradient-based attacks against LLMs
* A presentation of interpretability techniques used to study jailbreaks
* An introduction to the Subspace Rerouting (SSR) algorithm
* Some first interpretations of generated jailbreaks
* SAE ???
TLDR; If steering the model during inference modifies its behavior, it's possible (sometimes)[1] to modify the input prompt into "prompt + something" such that the resulting activations will be close to those achieved with the intervention. This leads to a similar change in behavior without requiring intervention during inference. It's
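The runtime interventions that get converted into input perturbations (direction ablation and vector steering) come down to simple linear algebra on activation vectors. A minimal sketch; the hook placement and the `refusal_dir` vector here are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def ablate_direction(activation, direction):
    """Remove the component of `activation` along `direction`
    (e.g. a learned refusal direction)."""
    d = direction / np.linalg.norm(direction)
    return activation - np.dot(activation, d) * d

def steer(activation, direction, alpha=1.0):
    """Add a scaled steering vector to the activation."""
    return activation + alpha * direction

# A toy activation and a toy direction along the last coordinate.
act = np.array([1.0, 2.0, 3.0])
refusal_dir = np.array([0.0, 0.0, 2.0])
ablated = ablate_direction(act, refusal_dir)  # component along dir removed
```

In a real model these operations are applied to residual-stream activations via hooks at chosen layers; SSR's contribution is finding an input suffix whose activations land near the intervened ones without any hook at inference time.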
|
a1203d5b-54e3-4735-82d7-0bba34c05a5c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Anyone Familiar with Ground News?
I came across this aggregator site and like the concept, but I'm not completely sure to what extent I should trust its truth ratings.
Has anyone else used it as a news source?
https://www.ground.news/
|
7b9719f5-1519-40df-906a-36bf2dd53d26
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Memetic Tribes and Culture War 2.0
Long, but absolutely worth the read IMHO. Brings together many aspects of today's cultural crisis, explains both its origins and effects, and proposes solutions.
|
8c057400-57ed-4ec3-91f2-bab20b9552f5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The High Impact Network (THINK) - Launching Now
THINK, The High Impact Network, is going live this week.
We're a network of Effective Altruists (EAs), looking to do the most good for the most people as efficiently as possible. We aren't bound by a central cause or ethical framework, but rather by a process, and a commitment to rigor and rationality as we try to make the world a better place.
THINK meetups are forming around the world. Some are functioning as student groups at prominent universities, others are general meetups for people of all ages who want to make effective altruism a part of their life. As I write this, 20 meetups are getting ready to launch in the fall, and discussions are underway for an additional 30. If you'd like to connect with other EA-types, see if a meetup's forming in your area, or run your own meetup, send us an e-mail here, or visit our website.
We're putting together a collection of meetup modules, which newly formed groups can use for content at weekly meetups. These fall into roughly two categories:
* Introductory materials, designed to teach the basics of Effective Altruism to newcomers.
* Self Improvement tools, helping newcomers and veterans to become strong enough to tackle the difficult problems ahead.
Five sample modules are available on our website, and more are coming. If you have ideas for a module and would like to create your own, e-mail us at modules@thehighimpactnetwork.org.
But most importantly - we want bright, enthusiastic people who care deeply about the world to collaborate with each other on high impact projects.
Optimal Philanthropy. Effective Altruism.
Less Wrong veterans will recognize the basics of Optimal Philanthropy, although we consider avenues beyond traditional charity. (The phrase "effective altruism" was settled on after much deliberation). For those unfamiliar, a brief overview.
Over the past decade, important changes have begun to take root in the philanthropy/altruist sector:
* Organizations like Givewell, as well as a gro
|
a2aff9ed-d099-4ced-86ed-42d309daa0a9
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
OpenAI introduce ChatGPT API at 1/10th the previous $/token
OpenAI add `gpt-3.5-turbo` to their API, charging $0.002 per 1k tokens. They cite "a series of system-wide optimizations" for 90% cost reduction.
Another example of the dizzying speed of language model progress.
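For scale, the quoted prices work out as follows (the $0.02/1k figure for the previous tier is implied by the quoted 90% reduction):

```python
# Cost sketch using the figures in the post: $0.002 per 1k tokens for
# gpt-3.5-turbo, and ten times that ($0.02/1k) for the tier it undercuts.

def api_cost(tokens, price_per_1k_usd):
    return tokens * price_per_1k_usd / 1000

new_cost = api_cost(1_000_000, 0.002)  # one million tokens on gpt-3.5-turbo
old_cost = api_cost(1_000_000, 0.02)   # same volume at the previous rate
```

A million tokens drops from about $20 to about $2.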
|
cc4e6fdb-1112-4573-81f8-bd39f5674a94
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The First Filter
Consistently optimizing for solving alignment (or any other difficult problem) is incredibly hard.
The first and most obvious obstacle is that you need to actually care about alignment and feel responsible for solving it. You cannot just ignore it or pass the buck; you need to aim for it.
If you care, you now have to go beyond the traditions you were raised in. Be willing to go beyond the tools that you were given, and to use them in inappropriate and weird ways. This is where most people who care about alignment tend to fail — they tackle it like a normal problem from a classical field of science and not an incredibly hard and epistemologically fraught problem.
If you manage to transcend your methodological upbringing, you might come up with a different, fitter approach to attack the problem — your own weird inside view. Yet beware becoming a slave to your own insight, a prisoner to your own frame; it’s far too easy to never look back and just settle in your new tradition.
If you cross all these obstacles, then whatever you do, even if it is not enough, you will be one of the few who adapt, who update, who course-correct again and again. Whatever the critics, you’ll actually be doing your best.
This is the first filter. This is the first hard and crucial step to solve alignment: actually optimizing for solving the problem.
When we criticize each other in good faith about our approaches to alignment, we are acknowledging that we are not wedded to any approach or tradition. That we’re both optimizing to solve the problem. This is a mutual acknowledgement that we have both passed the first filter.
Such criticism should thus be taken as a strong compliment: your interlocutor recognizes that you are actually trying to solve alignment and open to changing your ways.
|
5c9ecb0d-a618-459b-b673-7ee37f9ae5dc
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Link] Mutual fund fees
An easy win for rationalists is to avoid actively managed mutual funds. As a NYT article points out:
"High fees, often hidden from view, are still enriching many advisers and financial services companies at the expense of ordinary people who are struggling to salt away savings....even for retirement accounts that are to be covered by the rules, many advisers are not required to act in their clients’ best interests. This means that they are legally entitled to look out for themselves first and recommend investments with higher fees, to the detriment of those who have asked for help....even when fund managers succeed in outperforming their peers in one year, they cannot easily repeat the feat in successive years, as many studies have shown. That’s why low-cost index funds, which merely mirror the performance of the market and don’t try to beat it, make a great deal of sense as a core investment....With fees included, the average actively managed fund in each of 29 asset categories — from those that invest in various sizes and styles of stocks to those that hold fixed-income instruments like government or municipal bonds — underperformed its benchmark over the decade through December. In other words, index funds outperformed the average actively managed fund in every single category....Investors who believe they have found honest and skillful advisers may still want to understand all of this. Not everyone truly has your best interest at heart."
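The compounding effect behind this argument is easy to check. A hypothetical sketch (the 7% gross return and the 1% vs. 0.1% expense ratios are illustrative, not from the article):

```python
# Final value of a one-time investment when an annual fee is skimmed
# off the same gross return, compounded over 30 years.

def final_value(principal, gross_return, annual_fee, years):
    return principal * (1 + gross_return - annual_fee) ** years

active_fund = final_value(10_000, 0.07, 0.010, 30)  # ~1% expense ratio
index_fund  = final_value(10_000, 0.07, 0.001, 30)  # ~0.1% expense ratio
```

On these assumptions, the 0.9-point fee gap compounds to a difference of over $16,000 on a $10,000 stake, before the active fund's underperformance is even considered.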
|
2b2e44e3-6baf-4e19-aa2c-4e89163ddcf1
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Deception as the optimal: mesa-optimizers and inner alignment
*This is a brief distillation of* [*Risks from Learned Optimization in Advanced Machine Learning Systems*](https://arxiv.org/abs/1906.01820) *(Hubinger et al. 2019) with a focus on deceptive alignment. Watching* [*The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment*](https://www.youtube.com/watch?v=bJLcIBixGj8) *helped me better understand the paper and write up this post.*
### The setup of the problem
What is it that makes the alignment problem so challenging? The top reason is that it involves *deception*. Deception makes artificial agents overly capable and takes the game of intelligence to a whole new level of complexity. But let's start from the beginning.
In many cases, by *alignment problem,* we mean "outer alignment", i.e., how to have the **base objective** (the objective of the designer represented in the model) represent whatever humans want it to represent. It is about bridging the gap between my objective as a designer, and the base objective of the system. The system is the **base optimizer**, in other words, the model that optimizes according to the base objective. This is itself difficult since the base objective refers to events happening in a complex environment, the real world.
The base objective might be something like eradicating a disease. For example, suppose the task is to minimize the number of people who have cancer. How do you get this objective to **not** be represented along the following lines?
1. Cancer is something that happens to humans and other sentient beings.
2. The objective is to minimize the number of occurrences of cancer.
∴ Minimize the number of humans and other sentient beings that could get cancer.
Goals are difficult to represent because even humans disagree on what the same propositions mean and what is the best way to resolve a problem. Moreover, human values have [high Kolmogorov complexity](https://www.lesswrong.com/tag/kolmogorov-complexity). Our preferences cannot be described using a few simple rules, our interpretations of values and goals vary, and the current state of metaethical discourse does not promise substantial agreement or clarification on what has, for instance, [intrinsic value](https://plato.stanford.edu/entries/value-intrinsic-extrinsic/#WhaHasIntVal). So, outer misalignment broadly captures this failure to transmit one or more human values to an artificial agent.
As if this weren't problematic enough, there is also an alignment problem that concerns the internal structure of the system and it's called "inner alignment". This is the focus of this post and will get us to the crucial point about deceptive agents.
Suppose you train a neural network to complete a task. The task, in this case, is to find the exit of a maze (base objective). There are also apples in the maze, but merely for decoration; the objective is simply to get to the exit that happens to be green in this training environment.
The training environment (image from [this video](https://www.youtube.com/watch?v=bJLcIBixGj8))
When the training is complete, you deploy the model in a different environment which looks like this:
The deployment environment (image from [this video](https://www.youtube.com/watch?v=bJLcIBixGj8))
The base objective has not changed: the neural network has to solve the maze by reaching the exit. This change of environment known as distributional shift, however, does not go unnoticed. There are three possible outcomes:
1. the system generalizes (it finds the exit; success!)
2. the system fails to generalize (bad because it's not working, but there are no other consequences)
3. the system has competent maze abilities but with an objective we don't want it to have, the mesa-objective (this is a big problem, basically what inner misalignment is about)
In this scenario, let's suppose that the system acquired a maze-solving ability, but instead of optimizing for "exit" it learned to optimize for "green". The exit in the new environment is grey, therefore, the model will complete the task whenever it reaches a green apple.
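Outcome 3 can be made concrete with a toy policy. In this sketch (the coordinates and the policy are invented for illustration), the learned behavior is "go to the nearest green cell", which coincides with the exit in training but not in deployment:

```python
def nearest(cells, start):
    """Nearest cell by Manhattan distance."""
    return min(cells, key=lambda c: abs(c[0] - start[0]) + abs(c[1] - start[1]))

def mesa_policy(green_cells, start):
    # What the network actually learned: seek green, not "the exit".
    return nearest(green_cells, start)

# Training: the only green cell IS the exit, so behavior looks aligned.
train_target = mesa_policy([(4, 4)], start=(0, 0))

# Deployment: the exit (4, 4) is grey; the apples are green.
deploy_exit = (4, 4)
apples = [(1, 1), (2, 3)]
deploy_target = mesa_policy(apples, start=(0, 0))  # heads for an apple
```

The same competent maze ability produces correct-looking behavior in training and the wrong goal in deployment, which is exactly what makes the misalignment hard to detect.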
The process of training, of course, involves fixing the system's mistakes. This is part of **adversarial training**, which will force the system not to commit the mistake of, in this case, optimizing for "green". Adversarial training is a way of training ML systems that involves focusing on the system's weaknesses; it forces the system to learn not to have such weaknesses, like optimizing for the wrong objective.
> Side note: it might be intuitive to assume that if the system's behavior is not in accordance with human values, we can always do something about this, e.g., turn it off. But there's something even more intuitive and familiar to every agent trying to survive in a challenging environment: for the purpose of self-preservation, the model will do everything it can to prevent you from:
1. turning it off
2. modifying it
This includes deceiving the designer. Deception is a highly effective method for the system to pursue its mesa-objective while exhibiting outputs that make the designer think that the system is trying to accomplish the base objective.
Perhaps it sounds particularly bad in this context, but if you think about it, deception is a technique most human agents employ to survive in different contexts from the ancestral environment to contemporary professional and social circles. It should then make sense why the system is more likely to pretend it's behaving according to your values so that you don't do anything about it; once it has enough resources not to be threatened by your ability to shut it down or change it, it will go ahead and pursue its goal.
Since the system is intelligent enough to predict that if it doesn't optimize for the base objective, **gradient descent** will make sure it is modified, it chooses to behave as if it were actually optimizing for the base objective in the training environment. Gradient descent is something like the "evolution algorithm" for artificial agents; it is an optimization technique that evaluates the performance of a model and modifies the model accordingly. Once the model is deployed in the real world (and can no longer be modified) it will optimize for the mesa-objective.
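For readers unfamiliar with the term, gradient descent can be sketched in a few lines: evaluate how the loss changes with the parameters, then nudge the parameters downhill, repeatedly. A minimal single-parameter example (the loss function is invented for illustration):

```python
def gradient_descent(w=0.0, lr=0.1, steps=100):
    """Minimize the loss (w - 3)**2 by repeated downhill nudges."""
    for _ in range(steps):
        grad = 2 * (w - 3)  # derivative of the loss w.r.t. w
        w -= lr * grad      # modify the "model" according to its performance
    return w
```

The parameter converges to the minimizer w = 3; training a neural network does the same thing over millions of parameters at once, which is why a model that behaves badly during training gets modified away.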
In the example above from (Hubinger et al. 2019), the base-objective is for the little robot to go to A and the mesa-objective is to go to B. In the two training episodes, the robot goes to A making the designer think it's optimizing for the base objective and it's good to be deployed without further modifications. In the three deployment episodes, the little robot optimizes for the mesa-objective. The system has managed to deceive the designer during training and it's perfectly capable of getting what it wants except that what it wants is not the same as what the designer wants.
The conclusion is that while we gave the system the right objective, and in that sense outer alignment was successful, the **mesa-optimizer** (an optimizer optimized by an optimizer) optimized for the mesa-objective, leading to inner misalignment.
### Deception is the optimal strategy for the model to achieve its goal under the following conditions:
1. The mesa-optimizer has incentives to pursue a different objective than the original one. That objective might be somehow encouraged by the training setup depending on the required tasks.
2. The mesa-optimizer must know that it is being trained to optimize for a specific base objective and must be able to have a model of that objective (so that it can distinguish between objectives).
3. The mesa-optimizer must anticipate that it will eventually be in an environment where modification will no longer be an option (for example, that its training will end).
### Why is deceptive alignment so worrisome?
* The phenomenon of optimizing for the mesa-objective instead of the base-objective is called “defection”; it gives the model the power to pursue goals that might be difficult to understand from the perspective of the designer.
* This misalignment might have serious, even catastrophic consequences for the future of humanity.
* Defection occurs when the model is convinced there is no danger of modification. This means that at this point, the outcome is out of control.
* Relying on artificial agents more and more in a highly complex world increases the likelihood of not being able to recover from AI systems' failures as the various systems will most likely be interconnected/depending on one another in multiple ways.
* It is possible that the artificial agent might have such a good model of the designer which can result in directly intervening to free itself from the risk of modification.
* In other cases, the agent might just take advantage of software weaknesses that will also ensure that it cannot be modified any further.
### So, we have a deceptively aligned mesa-optimizer. What happens now?
* A deceptive mesa-optimizer acquires new "skills" namely, the ability to infer the base objective function and being able to tell when to optimize for which objective (either the base or the mesa).
* This changes the learning dynamics of the system.
* The mesa-optimizer might:
1. try to optimize both for the base and the mesa objective
2. defect at the wrong time, when it might still have to make the mesa-objective more compatible with the base objective.
> 1 and 2 are not stable states; the mesa-optimizer will eventually have to either become properly aligned or find another deception strategy.
* An alternative deception strategy is called "pure deception". In this scenario, the mesa-optimizer cooperates in the short term with the selection process and stops trying to optimize for its mesa-objective. The mesa-objective then becomes locked-in and cannot be changed. At the same time, the mesa-optimizer is optimizing for the base objective only instrumentally. This is an interesting phenomenon; while the base objective becomes more and more part of the mesa-optimizer's epistemic model, its objective is never going to be actually aligned.
|
3153d3d8-70b3-4e89-b32b-0d1606b055b3
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Nash equilibriums can be arbitrarily bad
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Go hungry with Almost Free Lunches
==================================
Consider the following game, called "Almost Free Lunches" (**EDIT**: this seems to be a variant of the [traveller's dilemma](https://en.wikipedia.org/wiki/Traveler%27s_dilemma)). You name any pound-and-pence amount between £0 and £1,000,000; your opponent does likewise. Then you will both get whichever amount named was lowest.
On top of that, the person who named the highest amount must give £0.02 to the other. If you tie, no extra money changes hands.
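As a sketch, the rules can be written down as a tiny payoff function (amounts in pence; this helper is illustrative, not from the original post):

```python
def payoff(a, b):
    """Payoffs (to A, to B) in Almost Free Lunches; amounts are in pence.

    Both players receive the lower of the two named amounts, and the
    player who named the higher amount then pays 2p to the other.
    """
    low = min(a, b)
    if a == b:
        return low, low
    if a > b:                        # A named higher, so A pays 2p to B
        return low - 2, low + 2
    return low + 2, low - 2          # B named higher

print(payoff(100_000_000, 100_000_000))  # (100000000, 100000000): both name the max
print(payoff(100_000_000, 99_999_999))   # (99999997, 100000001): undercutting by 1p pays
```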
What's the [Nash equilibrium](https://en.wikipedia.org/wiki/Nash_equilibrium) of this game? Well:
* The only Nash equilibrium of Almost Free Lunches is for both of you to name £0.00.
*Proof*: Suppose player A names amounts according to a probability distribution p_A, and player B according to a distribution p_B. Let m_A be the highest amount such that p_A(m_A) is non-zero; let m_B be the same, for B. Assume that (p_A, p_B) is a Nash equilibrium.

Assume further that m_A ≥ m_B (if that's not the case, just switch the labels A and B). Then either m_A > £0.00 or m_A = £0.00 (and hence both players select £0.00).

We'll now rule out m_A > £0.00. If m_B > £0.00, then player A can improve their score by replacing m_A with m'_A = m_B − £0.01. To see this, assume that player B has said n_B and player A has said m_A. If n_B < m'_A < m_A, then player A can say m'_A just as well as m_A - either choice gives them the same amount (namely, n_B − £0.02).

There remain two other cases. If n_B = m'_A, then m'_A is superior to m_A, getting m'_A (rather than m'_A − £0.02). And if n_B = m_B, then m'_A gets m'_A + £0.02 = m_B + £0.01, rather than m_B (if m_A = m_B) or m_B − £0.02 (if m_A > m_B).

Finally, if m_B = £0.00, then player A gets −£0.02 unless they also say £0.00.

Hence if m_A > £0.00, then p_A cannot be part of a Nash equilibrium. Thus m_A = £0.00, and hence the only Nash equilibrium is at both players saying £0.00.
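The uniqueness claim is easy to confirm by brute force on a scaled-down version of the game (a sketch; amounts in pence from 0 to n):

```python
from itertools import product

def payoff(a, b):
    """Payoffs (to A, to B), amounts in pence, as in the rules above."""
    low = min(a, b)
    if a == b:
        return low, low
    return (low - 2, low + 2) if a > b else (low + 2, low - 2)

def pure_nash(n):
    """All pure-strategy Nash equilibria when names range over 0..n pence."""
    equilibria = []
    for a, b in product(range(n + 1), repeat=2):
        a_best = all(payoff(a, b)[0] >= payoff(x, b)[0] for x in range(n + 1))
        b_best = all(payoff(a, b)[1] >= payoff(a, y)[1] for y in range(n + 1))
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

print(pure_nash(50))  # [(0, 0)]: undercutting unravels every higher pair of names
```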
Pareto optimal
--------------
There are three Pareto-optimal outcomes: (£1,000,000.00, £1,000,000.00), (£1,000,000.01, £999,999.97), and (£999,999.97, £1,000,000.01). All of them are very much above the Nash Equilibrium.
Minmax and maximin
------------------
The [minmax and maximin values](https://en.wikipedia.org/wiki/Minimax) are also both terrible, and also equal to £0.00. This is not surprising, though, as minmax and maximin implicitly assume the other players are antagonistic to you, and are trying to keep your profits low.
Arbitrary badness with two options
==================================
This shows that choosing the Nash Equilibrium can be worse than almost every other option. We can of course increase the maximal amount, and get the Nash Equilibrium to be arbitrarily worse than any reasonable solution (I would just say either £1,000,000.00 or £999,999.99, and leave it at that).
But we can also make the Nash Equilibrium arbitrarily close to the worst possible outcome, and that without even requiring more than two options for each player.
Assume that there are four ordered amounts of money/utility: n_3 > n_2 > n_1 > n_0. Each player can name n_2 or n_1. If they both name the same amount, they each get that amount of utility. If they name different ones, then the player naming n_2 gets n_0, and the player naming n_1 gets n_3.

By the same argument as above, the only Nash equilibrium is for both to name n_1. The maximum possible amount is n_3; the maximum they can get if they both coordinate is n_2; the Nash equilibrium is n_1; and the worst option is n_0. We can set n_1 = n_0 + ε and n_3 = n_2 + ε for arbitrarily tiny ε > 0, while setting n_2 to be larger than n_1 by some arbitrarily high amount.
So the situation is as bad as it could possibly be.
Note that this is a variant of the prisoner's dilemma with different numbers. You could describe it as "Your companion goes to a hideous jail if and only if you defect (and vice versa). Those that don't defect will also [get a dust speck in their eye](https://www.lesswrong.com/posts/3wYTFWY3LKQCnAptN/torture-vs-dust-specks)."
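The two-option construction can likewise be checked mechanically; the numbers below are illustrative, chosen so that n_2 dwarfs n_1 while the gaps n_1 − n_0 and n_3 − n_2 stay tiny:

```python
from itertools import product

def nash_2x2(n0, n1, n2, n3):
    """Pure Nash equilibria of the two-option game; requires n3 > n2 > n1 > n0.

    Each player names n1 or n2; equal names pay that amount, otherwise
    the player naming n2 gets n0 and the player naming n1 gets n3.
    """
    def pay(a, b):
        if a == b:
            return a, a
        return (n0, n3) if a == n2 else (n3, n0)
    opts = [n1, n2]
    return [(a, b) for a, b in product(opts, opts)
            if pay(a, b)[0] == max(pay(x, b)[0] for x in opts)
            and pay(a, b)[1] == max(pay(a, y)[1] for y in opts)]

print(nash_2x2(0, 1, 1000, 1001))  # [(1, 1)]: both name n1, next to the worst outcome
```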
|
f5b7b097-3a4b-473a-8fdf-31fcc7b02033
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Agent Based Approaches to Engineering Autonomous Space Software
1 Introduction
---------------
Modern control systems are limited in their ability to react flexibly
and autonomously to changing situations. The limiting factor is the
complexity inherent in analysing situations where many variables are
present. There are many complex, real-world, control systems but we
are primarily interested in the (autonomous) control of satellite
systems.
Consider the problem of a single satellite attempting to maintain a
geostationary orbit. Current satellite control systems maintain
orbits using feedback controllers. These implicitly assume that any
errors in the orbit will be minor and easily corrected. In situations
where more significant errors occur, for example caused by a thruster
malfunction, it is desirable to modify or change the controller. The
complexity of the decision task is a challenge to standard approaches,
and has led, for example, to complex, evolutionary control
systems. These become very difficult to
understand.
We approach the problem from the perspective of rational
agents [[6](#bib.bib6)]. We consider a satellite to be an
agent which consists of a discrete (rational decision making)
part and a continuous (calculation) part. The discrete part uses the
Belief-Desire-Intention (BDI) theory of agency [[5](#bib.bib5)]
and governs high level decisions about when to generate new feedback
controllers. The continuous, calculational part is used to derive
controllers and to calculate information from continuous data which
can be used in the decision making process; this part can be viewed as
a hybrid system.
2 Architecture
---------------

Figure 1: Implemented Hybrid Agent Architecture
Our prototype system is shown in Fig. [1](#S2.F1 "Figure 1 ‣ 2 Architecture ‣ Agent Based Approaches to Engineering Autonomous Space SoftwareWork funded by EPSRC grants EP/F037201/1 and EP/F037570/1"). We have
implemented a simulated environment and real time satellite control
system in MatLab using the Simulink tool kit. The continuous
agent part is also implemented in MatLab. MatLab has no easy provision
for threaded execution which forces us to use separate instances for
the Real Time aspects (i.e. the feedback controller and simulated
environment) and for the Continuous Agent part. The agent also
contains a discrete agent part which is currently implemented in
the \gwendolen agent programming language111The choice of
language was dictated entirely by convenience. It is a subject for
further work to examine more widely used BDI-languages and evaluate
which is most appropriate for the system..
\gwendolen [[2](#bib.bib2)] is implemented on top of \java.
The real time control system sends information (which may be
pre-processed) to the agent part of the system. When it acts, the
discrete part of the agent may either cause the continuous agent part
to perform some calculation (and wait for the results) or it may send
an instruction to the real time control system to alter its
controller. Since the new controller has been created “on the fly”
by the continuous part, some aspects of this controller are stored in
the shared file system (accessed by both MatLab processes).
The discrete agent part is divided into an abstraction engine
which takes continuous data supplied by the satellite simulation and
transforms this data into discrete shared beliefs which are accessed
by a reasoning engine which makes decisions about how to
behave. The discrete part is split in two because reasoning
is slow compared to the flow of data coming in from the
simulation. It can become “clogged” up with the need to react to
changing information if it tries to perform both the abstraction tasks
and the reasoning tasks at once. The separation of abstraction and reasoning is both
theoretically clean and practical at an implementational level.
3 BDI Programming Aspects
--------------------------
The architecture lets us represent the high-level decision making
aspects of the program in terms of the beliefs and goals of the agent
and the events it observes. So, for instance, when the agent observes
the event that the satellite is in a new position (information relayed
to it by the real time controller) it can call on the continuous part
to calculate whether this position is within acceptable bounds of the
desired orbit (i.e. whether the existing real-time controller is
capable of maintaining the orbit). If, as a result of this, it gains
a belief that the satellite has strayed from the orbit it can request
the continuous part to calculate a new path for the satellite to
follow using techniques described in [[4](#bib.bib4)].
Similarly, if the satellite has strayed from its bounds, the discrete
agent part can examine its beliefs about the current status of the
thrusters and, if necessary, instruct the continuous part to generate
a new feedback controller which takes into account any malfunctions or
inefficiencies in the thrusters.
Such programs can be expressed compactly in the BDI-programming style
without the need for programming large decision trees to consider all
possible combinations of thruster status and satellite positions.
This should then reduce the probability of error in the
decision-making parts of the program and opens the possibility that
existing techniques for model checking such
programs [[1](#bib.bib1)] can be
adapted to verify this part.
### 3.1 Geostationary Orbit Case Study
The agent code for the geostationary orbit is shown in the two code
fragments below. The first fragment shows the
code for the abstraction engine. Every time it “perceives” the
satellite position (stateinfo) it calls upon MatLab to
calculate whether or not this position is within bounds
(comp_distance) and then asserts and removes shared
beliefs appropriately.
The code is shown as a series of plans of the form
trigger:{guard} <- deeds where the trigger is some event
observed by the agent, the guard is a set of facts that must be true
before the plan is activated and the deeds are a stack of deeds to be
executed. +b is the addition of a belief, b,
and -b is the removal of the belief, b. In a
guard .B b means that b is believed.
+stateinfo(L1, L2, L3, L4, L5, L6) :
   {.B proximity_to_centre(V1)} <-
      comp_distance(L1, L2, L3, L4, L5, L6, Val),
      +proximity_to_centre(Val);

+proximity_to_centre(in) : {.B proximity_to_centre(out)} <-
      -proximity_to_centre(out),
      remove_shared(proximity_to_centre(out)),
      assert_shared(proximity_to_centre(in));

+proximity_to_centre(out) :
   {.B proximity_to_centre(in),
    .B stateinfo(L1, L2, L3, L4, L5, L6)} <-
      -proximity_to_centre(in),
      remove_shared(stateinfo(A1, A2, A3, A4, A5, A6)),
      assert_shared(stateinfo(L1, L2, L3, L4, L5, L6)),
      remove_shared(proximity_to_centre(in)),
      assert_shared(proximity_to_centre(out));
The second code fragment reacts to the dynamic information
about whether the satellite is within bounds or not. It may call a
MatLab function, plan_approach_to_centre, which returns the
name of a plan to move the satellite back within bounds.
apply_controls and maintain_path are actions
applied to the simulation of the satellite which apply a named plan,
or continue normal operation, as appropriate. The syntax
+!g indicates the acquisition of a goal.
+proximity_to_centre(out) : {True} <-
      -proximity_to_centre(in),
      +!get_to_centre;

+proximity_to_centre(in) : {True} <-
      -proximity_to_centre(out),
      maintain_path;

+!get_to_centre :
   {.B proximity_to_centre(out),
    .B stateinfo(L1, L2, L3, L4, L5, L6)} <-
      plan_approach_to_centre(P, locn(L1, L2, L3, L4, L5, L6)),
      +!try_execute(P) (perform);

+!try_execute(P) : {.B proximity_to_centre(out)} <-
      apply_controls(P);
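The plan-firing cycle used above can be illustrated with a minimal Python sketch (the class names and event encoding here are invented for illustration, not part of the Gwendolen implementation): each plan has a trigger event, a guard over the current beliefs, and a stack of deeds.

```python
class Plan:
    def __init__(self, trigger, guard, deeds):
        self.trigger = trigger   # event that activates the plan
        self.guard = guard       # predicate over the agent's belief set
        self.deeds = deeds       # callables executed in order when fired

class Agent:
    def __init__(self, plans):
        self.beliefs = set()
        self.plans = plans

    def handle(self, event):
        """Fire the first plan whose trigger matches and whose guard holds."""
        for plan in self.plans:
            if plan.trigger == event and plan.guard(self.beliefs):
                for deed in plan.deeds:
                    deed(self.beliefs)  # deeds add/remove beliefs or act
                return True
        return False

# Analogue of "+proximity_to_centre(in) : {.B proximity_to_centre(out)} <- ..."
plan = Plan("+proximity_in",
            lambda B: "proximity_out" in B,
            [lambda B: B.discard("proximity_out"),
             lambda B: B.add("proximity_in")])
agent = Agent([plan])
agent.beliefs.add("proximity_out")
agent.handle("+proximity_in")
print(agent.beliefs)  # {'proximity_in'}
```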
### 3.2 Decision and Control
The important aspect of both the above example and the architecture in
general is that the (MatLab) control systems take care of the detailed
calculation of continuous functions (paths, etc), while the rational
agent takes care of high-level decisions about targets and plans. This
separation of concerns simplifies both parts and avoids the problems
associated with large, opaque, complex, adaptive and evolutionary
control systems.
4 Future Work
--------------
We are currently working on our prototype system and case study which will
allow us to make comparisons of this agent approach to autonomous
decision-making in satellite systems to approaches based on finite
state machines and standard control. We also are interested in
investigating the use of temporal logic and model checking to generate
forward planning capabilities for the agent along the lines of those
investigated by Kloetzer and Belta [[3](#bib.bib3)]. We aim
to explore the possibility of using model checking to verify aspects
of the agent’s behaviour. Given that we already have a formal
verification system for Gwendolen
agents [[1](#bib.bib1)], there is a
strong possibility that we can extend this to cope with (abstractions
of) the continuous part. As the diagram below shows, we already have
model-checking tools for the discrete/finite parts. Our interest now
is how far such techniques can be extended to account for other
aspects of the agent’s behaviour.

|
b405f820-d846-4a7b-bd67-eb091b74ed92
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Some thoughts on AI, Philosophy, and Safety
I've spent some time over the last two weeks thinking about problems around FAI. I've committed some of these thoughts to writing and put them up here.
There are about a dozen real posts and some scraps. I think some of this material will be interesting to certain LWers; there is a lot of discussion of how to write down concepts and instructions formally (which doesn't seem so valuable in itself, but it seems like someone should do it at some point) some review and observations on decision theory, and some random remarks on complexity theory, entropy, and prediction markets.
|
5461cfb3-061b-4fa6-96a9-d582e06ae444
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Book Review – Too Like the Lightning
Readers who know me only through Putanumonit may suspect that my reading material consists of nothing but research papers in economics and post-rationalist blogs. That’s not true! I have a shelf full of books at home, without which I would be utterly unfuckable.
The books are actually my fiancee’s, almost none of them are mine. I have a Kindle and a New York Library card. But still: I read books!
My reading list is split 50-50 between non-fiction and fiction. Among the latter, my favorite genres are Irish fiction and speculative fiction, aka sci-fi. In the next couple of posts I’ll give my thoughts on some thought-provoking science fiction I had the pleasure of reading lately. We start today with Ada Palmer’s philosophical history thesis novel Too Like the Lightning, which is the first in a series of at least four books.
This is not a critical review in the common sense of the word. I’m not going to score the books on a 1-5 scale or pretend that I know anything about comparative literary criticism. I like books that make me think, and I will write about that part first and foremost. I am writing both for those who read the book and for those who may want to in the future. I will keep spoilage to a minimum, referencing only plot points that show up in the first couple of chapters or that are immaterial to the main narrative.
A few words about the genre of science fiction. I love sci-fi, but some people see it as inferior to “serious” or “literary” fiction. The worst of these arguments are made by people who saw the Ender’s Game movie but never read the book. The best arguments may touch on an actual truth, one that is grounded in statistics.
A great work of literature can have a great plot and/or great insight into emotions and the human condition. A great work of science fiction can do both, but also a lot of other things. Some of my favorite sci-fi writers excel at building worlds (Iain M. Banks), some get me excited to learn a science (Greg Egan) or dive in
|
75036f61-3161-4e5f-b739-5432031af1b1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Book Review] "Sorceror's Apprentice" by Tahir Shah
I don't like cars. Watching the world go by through a car window is like watching television. You're too protected.
There's a book Zen and the Art of Motorcycle Maintenance: An Inquiry into Values by Robert Pirsig. It's about a father-son motorcycle trip. My dad and I liked it so the summer before college I proposed we do a father-son motorcycle trip just like in the book.
Zen and the art of Motorcycle Maintenance is about being one with your machines. Motorcycles are like bicycles and Arch Linux. You're constantly maintaining your machine. It breaks. You fix it. It breaks again. You fix it again. They crash a lot when you're starting out.
My dad lives by the motto "It's not an adventure if you know you're coming home alive."
I had never ridden a motorcycle before. My first motorcycle crash tore a hole in my brand new woodland camo army jacket.
We camped on a church lawn and on back roads and on farms and in a construction site and on beaches. We carried a machete for self-defense and in case we had to cut the bikes out of cactus patches like my dad had to on his last adventure. We met gold prospectors and the man who held the world record for the fastest wheel-driven vehicle. He flipped over three times in the air when he crashed his bike and even he wouldn't ride on city streets. Motorcycle riding on public roads is insanely dangerous.
We had been on the road for a few days when we rendezvoused with a third biker named Jersey Mike. Jersey Mike and my dad were "terrists" [sic] in the biker club called the Underground Terrist Motorcycle Cult. They were members of the same cell. That's how they met. I tend to use pseudonyms when I write about people without their permission. Not Jersey Mike. A pseudonym wouldn't do Jersey Mike justice. It'd be sacrilege to Loki.
I was riding a Suzuki DR650 modified with handlebars and a fuel tank suitable for long distance adventure. The rest was mostly unsuitable for long distance. It was almost a dirt bike. Jersey Mike aske
|
786813ff-c831-41a8-971c-c81d4755e07b
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
A Bayesian Approach to Identifying Representational Errors
1 Introduction
---------------
Errors in complex decision tasks can be frequent, and the cause is often unclear, making it difficult to take informed steps towards reducing them. For example, a self-driving car may make mistakes on the road due to several factors. One factor might be the limited sensing capability of the car, which hinders its ability to view and act appropriately. Given accurate perception, other factors could also cause errors, such as a lack of training or limited memory.
In this work, we present Generative Error Model (GEM), a generative model that identifies task errors that occur due to representational deficiencies. We consider a setting in which a flawed actor provides demonstrations of a task. This actor can be a trained RL agent, a robot, a human, or any decision-making system. In fact, our experiments consider one domain with an RL agent and another with human users, which highlights the versatility of our approach. This actor may make many different types of errors and an observer’s goal is to determine the errors that are due to limitations in the actor’s perception of the world. To do this, our approach separates these representational errors from errors caused by imperfections in the actor’s policy. Learning this distinction is useful because if an error is due to an actor’s flawed representation of the world (e.g., an RL agent’s representation of the state may be missing features), we can augment the actor’s perception with technology such as an indicator that gives a notification when there is a car outside of the driver’s view, enabling them to “perceive” what they otherwise could not. If an error is not due to a representational limitation (i.e., the state of the world has an accurate representation), an assistive agent can help reduce other errors through, for example, customized training, reminders, or attention support Horvitz et al. ([2013](#bib.bib341 "Methods for supporting users with task continuity and completion across devices and time")).

Figure 1: A participant prepares dishes in our kitchen task. Our generative model is used to determine representational human errors in this setting.
In our problem setup, the observer that is reasoning about actor errors has full knowledge of the given world, including the true representation and optimal policy. The observer has access to demonstration data from the actor performing the task, as well as labels indicating when errors occurred. To identify representational errors, we propose a generative model GEM that encodes two latent factors: “blind spots” – attributes of the true representation that are unobservable by the actor – and “execution noise – errors that may occur despite a correct representation. We chose these two latent sources for error because if representation is flawed, all decision-making based on it will also be flawed. Identifying the actor’s representation first allows the observer to then better reason about decisions/errors of the actor given that representation. GEM works by estimating the actor’s potentially flawed view of the world and the expected optimal action given that view. One benefit of this model is that it can be personalized to different types of actors, as each can have different observational deficiencies and noise levels.
To infer representational errors using GEM, we perform Bayesian inference using exact techniques as well as approximate methods (collapsed Gibbs sampling). Results on two domains (a gridworld task with a trained RL agent and a kitchen task with real human users) indicate that the model can successfully infer representation limitations (i.e., which features are not observable by the actor) by separating them from execution noise. This generative framework is flexible, and can be augmented based on domain-specific assumptions about observable variables or conditional independencies. The GEM graphical model allows us to better capture the inner decision processes that might affect actions and consequently, errors. Iteratively identifying and reducing these errors through such approaches can be extremely beneficial for improving safety in the real world.
2 Related Work
---------------
Reinforcement learning: Model-based and model-free reinforcement learning (RL) can be used to learn effective policies when performing a task Abbeel and Ng ([2004](#bib.bib32 "Apprenticeship learning via inverse reinforcement learning")); Ziebart et al. ([2008](#bib.bib33 "Maximum entropy inverse reinforcement learning.")); Merel et al. ([2017](#bib.bib183 "Learning human behaviors from motion capture by adversarial imitation")); Stadie et al. ([2017](#bib.bib151 "Third-person imitation learning")); Hadfield-Menell et al. ([2016](#bib.bib154 "Cooperative inverse reinforcement learning")). When trying to explain agent errors, it is helpful to understand agent behaviors and mistakes, which involves modeling causal knowledge about the world and the learned policy. Both model-free and model-based RL involve modeling causal knowledge Gershman ([2017](#bib.bib28 "Reinforcement learning and causal models")) through hidden state inference. Partially Observable Markov Decision Processes (POMDPs) Kaelbling et al. ([1998](#bib.bib19 "Planning and acting in partially observable stochastic domains")); Thrun ([2000](#bib.bib319 "Monte carlo pomdps")); Hoelscher et al. ([2018](#bib.bib338 "Utilizing human feedback in pomdp execution and specification")); Ross et al. ([2008](#bib.bib320 "Online planning algorithms for pomdps")); Silver and Veness ([2010](#bib.bib318 "Monte-carlo planning in large pomdps")); Ng and Jordan ([2000](#bib.bib322 "PEGASUS: a policy search method for large mdps and pomdps")) are also relevant, as they assume agents cannot directly observe the true state and instead maintain a distribution over possible beliefs. These methods, however, are used to generate behavior rather than to understand it. Some work Sunberg et al. ([2017](#bib.bib344 "The value of inferring the internal state of traffic participants for autonomous freeway driving")); Sadigh et al. ([2016](#bib.bib345 "Information gathering actions over human internal state")); Javdani et al. ([2018](#bib.bib346 "Shared autonomy via hindsight optimization for teleoperation and teaming")) has used POMDPs to infer agent intents, but these works do not focus on identifying representational blind spots.
Prior works in multi-agent RL Littman ([1994](#bib.bib313 "Markov games as a framework for multi-agent reinforcement learning")); Tan ([1993](#bib.bib314 "Multi-agent reinforcement learning: independent vs. cooperative agents")); Gupta et al. ([2017](#bib.bib316 "Cooperative multi-agent control using deep reinforcement learning")); Iqbal and Sha ([2019](#bib.bib312 "Coordinated exploration via intrinsic rewards for multi-agent reinforcement learning")); Raileanu et al. ([2018](#bib.bib307 "Modeling others using oneself in multi-agent reinforcement learning")); Torrey and Taylor ([2013](#bib.bib317 "Teaching on a budget: agents advising agents in reinforcement learning")) have developed methods for working with several agents whose beliefs and observations are unknown; however, the majority of these works have focused on generating behavior to improve joint team performance rather than understanding the behavior and errors of other agents. Approaches used to understand other’s behaviors exist, including methods in learning from demonstration and inverse RL Argall et al. ([2009](#bib.bib111 "A survey of robot learning from demonstration")); Arora and Doshi ([2018](#bib.bib29 "A survey of inverse reinforcement learning: challenges, methods and progress")); Zhifei and Meng Joo ([2012](#bib.bib30 "A survey of inverse reinforcement learning techniques")); Ng et al. ([2000](#bib.bib31 "Algorithms for inverse reinforcement learning.")); Abbeel and Ng ([2004](#bib.bib32 "Apprenticeship learning via inverse reinforcement learning")), but these do not consider errors and often assume positive examples.
Human cognition: Prior cognitive science literature Van Overwalle and Baetens ([2009](#bib.bib289 "Understanding others’ actions and goals by mirror and mentalizing systems: a meta-analysis")); Baker and Tenenbaum ([2014](#bib.bib294 "Modeling human plan recognition using bayesian theory of mind")); Pantelis et al. ([2014](#bib.bib299 "Inferring the intentional states of autonomous virtual agents")); Saxe ([2005](#bib.bib288 "Against simulation: the argument from error")) has explored models that simulate decision-making processes of humans. Works in this space can be useful for identifying the reasons behind human decisions and errors. Griffiths et al. ([2010](#bib.bib284 "Probabilistic models of cognition: exploring representations and inductive biases")) argued that top-down probabilistic models can better generalize and represent human decisions compared with bottom-up connectionist approaches. In another work Griffiths and Tenenbaum ([2006](#bib.bib285 "Optimal predictions in everyday cognition")), the authors used errors in human predictions to better understand people’s assumptions about the world and their decision-making processes.
Several authors have also developed methods to better understand how agents model other agents. Ullman et al. ([2009](#bib.bib292 "Help or hinder: bayesian models of social goal inference")) proposed a method to infer an agent’s intention, focused on whether an agent is helping or hindering the achievement of another agent’s goals. Baker et al. ([2006](#bib.bib286 "Bayesian models of human action understanding"), [2009](#bib.bib290 "Action understanding as inverse planning")) developed a Bayesian framework to explain how people predict another agent’s actions based on their observations of task execution. Overall, understanding human cognition is quite relevant to our work at a high level, but our ultimate goal is different, as we would like to specifically identify representational errors. Our method is also flexible enough to work for humans as well as machines/agents.
Representation learning: Learning appropriate state representations and identifying when a representation is insufficient Bengio et al. ([2013](#bib.bib184 "Representation learning: a review and new perspectives")); Radford et al. ([2015](#bib.bib180 "Unsupervised representation learning with deep convolutional generative adversarial networks")); Diuk et al. ([2008](#bib.bib185 "An object-oriented representation for efficient reinforcement learning")); Wang et al. ([2015](#bib.bib238 "On deep multi-view representation learning")) is also relevant to this work, because we consider a setting in which an actor and an observer operate with different representations, and the observer must identify errors arising from the actor’s flawed representation. Many works Devin et al. ([2018](#bib.bib329 "Deep object-centric representations for generalizable robot learning")); Silva and Costa ([2018](#bib.bib334 "Object-oriented curriculum generation for reinforcement learning")); Li et al. ([2018](#bib.bib335 "Object-sensitive deep reinforcement learning")); Ramakrishnan et al. ([2016](#bib.bib220 "Interpretable transfer for reinforcement learning based on object similarities")); Diuk et al. ([2008](#bib.bib185 "An object-oriented representation for efficient reinforcement learning")) have used object-based representations as a natural way to describe the world, resulting in improved transfer and interpretability. Many RL works Jaderberg et al. ([2016](#bib.bib327 "Reinforcement learning with unsupervised auxiliary tasks")); Ma et al. ([2018](#bib.bib275 "Universal successor representations for transfer reinforcement learning")); Lyu et al. ([2019](#bib.bib331 "SDRL: interpretable and data-efficient deep reinforcement learning leveraging symbolic planning")); Nachum et al. ([2018](#bib.bib276 "Near-optimal representation learning for hierarchical reinforcement learning")); Haarnoja et al. ([2018](#bib.bib328 "Latent space policies for hierarchical reinforcement learning")) have introduced approaches to automatically learn representations that generalize well across tasks. Overall, however, representation learning is focused on learning a low-dimensional representation, not on identifying errors.
Some works have specifically addressed flawed representations; for example, in one work Unhelkar and Shah ([2018](#bib.bib27 "Learning models of sequential decision-making without complete state specification using bayesian nonparametric inference and active querying")), Bayesian nonparametric techniques were used to learn unmodeled features of an agent with limited representation. Identifying agent blind spots Ramakrishnan et al. ([2018](#bib.bib67 "Discovering blind spots in reinforcement learning"), [2019](#bib.bib186 "Overcoming blind spots in the real world: leveraging complementary abilities for joint execution")) similarly assumes that a flawed agent learns from an oracle demonstrator. These assume the reverse scenario: that an agent learns about its own representational deficiencies using expert feedback, while we consider the scenario of identifying representation limitations of another agent.
3 Generative Error Model (GEM)
-------------------------------

Figure 2: Generative Error Model (GEM): A graphical model of the generative process for errors.
Learning the latent causes of errors is essential for reducing and resolving them. Thus, we provide a generative model of decision-making, which enables us to encode the causal process due to which errors were generated. We focus on two main error sources: representational and execution errors. Representational errors (or “blind spots”) result from an inability to observe critical attributes of the given task; execution errors occur when an actor has the correct representation of the world but still makes mistakes due to other factors, such as limited practice, slow reaction times, carelessness, and random noise during execution.
In our problem setup, an observer observes an actor perform a task and aims to determine the source of the actor’s errors. The observer is assumed to know the true representation of the world, which includes all task attributes that are needed to optimally act in the given task, and the optimal policy itself. This knowledge in turn allows the observer to reason about the actor’s errors.
The input to our model is a set of N demonstrations from the actor: D={(sireal,ai,ei)},i∈[1,...,N], represented as a list of state-action-error tuples. These demonstrations can be from a trained RL policy, a human actor, a robot controller, etc. The true state of the world, sireal=[f1,f2...,fk], is represented as a vector of features. The actor takes an action, ai∈A, and an error indicator, ei∈{0,1}, provides information about when errors occurred. We assume that the observer has access to an acceptable function facc(s,a), which returns 1 if action a is acceptable in state s, and 0 otherwise. This is a more relaxed function than just checking for optimality and can be defined by the observer flexibly based on the domain. For example, facc can be based on how much worse the actor’s decision is from the optimal.
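As a concrete illustration, one way an observer might realize facc is to accept any action whose value is within a tolerance of the best action’s value in that state. The sketch below is hypothetical – the paper deliberately leaves facc to the observer’s discretion – and the Q-table interface and tolerance are our own assumptions:

```python
import numpy as np

def make_acceptable_fn(q_values, tol=0.1):
    """Build f_acc(s, a) from a Q-table: an action is acceptable when its
    value is within `tol` of the best action's value in that state.
    The Q-table interface here is a hypothetical stand-in."""
    def f_acc(state, action):
        q = np.asarray(q_values[state])
        return int(q[action] >= q.max() - tol)
    return f_acc

# Toy example: two states, three actions each.
q = {"s0": [1.0, 0.95, 0.2], "s1": [0.0, 0.5, 0.5]}
f_acc = make_acceptable_fn(q, tol=0.1)
print(f_acc("s0", 1))  # 0.95 is within 0.1 of the optimum -> 1
print(f_acc("s0", 2))  # 0.2 is far from the optimum -> 0
```

Setting tol=0 recovers the strict "acceptable = optimal" special case mentioned above.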
Given these demonstrations, we want to identify errors caused by representational limitations. We assume that the actor may not be able to see the true state of the world sireal. Instead, the actor receives an observation oi, which is a (potentially many-to-one) function of sireal. We define the transformation function, which maps the true state to the observation, using a random variable denoting the actor’s blind spots. In general, this transformation can be an arbitrary nonlinear, stochastic function. In this formulation, we represent the blind spot as a vector b=[b1,...,bk],bj∈{0,1}, which encodes information about the features the actor cannot observe (i.e., when bj=1, the actor cannot observe feature fj of the true state). Each value bj is sampled from a Bernoulli distribution with hyperparameter q. Given true state sireal and blind spot vector b, we can obtain the actor’s possibly flawed observation, oi. The generative process is as follows:
$$b_j \sim \mathrm{Bernoulli}(q)$$

$$o_i \sim \Pr(o_i \mid s_i^{\mathrm{real}}, b) \quad \forall i$$
We model the blind spot b as a mask over sireal in order to compute oi. Thus, in our case, this transition is deterministic; however, generally, the observation can be represented as an arbitrary, blind spot-dependent transformation of the state.
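A minimal sketch of this deterministic masking step; the sentinel value and the Bernoulli sampling helper are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_blind_spot(k, q_prob, rng):
    # b_j ~ Bernoulli(q): 1 means feature j is unobservable to the actor.
    return rng.binomial(1, q_prob, size=k)

def observe(s_real, b, masked_value=None):
    # Deterministic masking: features under a blind spot are replaced by
    # a sentinel value (None here, an illustrative choice).
    return [masked_value if bj else f for f, bj in zip(s_real, b)]

b = [0, 0, 1]                          # the actor cannot see feature 3 (color)
print(observe([2, -1, "green"], b))    # -> [2, -1, None]
print(sample_blind_spot(3, 0.3, rng))  # a random draw from the blind-spot prior
```

A stochastic observation model would replace `observe` with a sample from Pr(o | s_real, b).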
Now that we have the actor’s observation of the world, we want to explain the actor’s decisions in order to understand when those decisions cause errors. The observer has access to the optimal policy π∗s:Sreal→A with respect to the true state of the world sreal∈Sreal. However, since the actor’s observation and the true state do not share the same representation, the observer cannot directly query the optimal policy with the observation. To bridge this gap, we propose estimating the actor’s implicit view of the world in terms of the true state representation, where the values filled in for missing features denote the actor’s assumptions about those features. This implicit state captures the actor’s assumptions about the world, which ultimately affect action selection. Our model also allows us to capture the case where the actor truly does not know a feature’s value and thus makes no implicit assumption.
The model maps the actor’s observation oi to a distribution over possible implicit states si. Explicitly modelling the actor’s view of the world helps to reason about actor decisions. Given si and our known optimal policy π∗s, the observer can compute the optimal actor decision. However, an actor may not always act optimally, so we include an additional variable η that denotes noise in execution. Noise can be represented using an arbitrary distribution. In this work, we model η with discrete values, ranging from 1% to 40%, in 5% intervals (e.g., expert: 1% noise, beginner: 40% noise). This variable intuitively captures how often the actor deviates from the optimal policy given si. Thus, the action is derived by sampling from the optimal policy with probability 1−η and a random action with probability η. The error is computed using the known acceptable function. We sample actions and errors as follows. Figure [2](#S3.F2 "Figure 2 ‣ 3 Generative Error Model (GEM) ‣ A Bayesian Approach to Identifying Representational Errors") depicts the full graphical model.
$$s_i \sim \Pr(s_i \mid o_i) \quad \forall i$$

$$\eta \sim \mathrm{Multinomial}(\alpha)$$

$$a_i \sim \pi_s^*(a_i \mid s_i, \eta) \quad \forall i$$

$$e_i \sim f_{\mathrm{acc}}(e_i \mid s_i^{\mathrm{real}}, a_i) \quad \forall i$$
The observer’s goal is to learn P(b,η|D): the probability of both the blind spot vector b and the noise parameter η given demonstration data D. The output is a probability distribution over the blind spot vectors and execution noise, which can be used to analyze the actor’s errors.
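The action- and error-sampling stages can be sketched in a toy discrete world. The two-state task, the policy table, and the choice of an exact-optimality acceptable function are illustrative assumptions, not the paper’s experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_action(s_implicit, pi_star, n_actions, eta, rng):
    # Follow the optimal policy (under the implicit state) with
    # probability 1 - eta; take a uniformly random action otherwise.
    if rng.random() < eta:
        return int(rng.integers(n_actions))
    return pi_star[s_implicit]

# Toy world: state = (distance, color); actions: 0 = approach, 1 = avoid.
pi_star = {("near", "green"): 0, ("near", "red"): 1}
f_acc = lambda s, a: int(pi_star[s] == a)   # acceptable = optimal, in this toy

s_real = ("near", "red")
s_implicit = ("near", "green")              # a color-blind actor assumes green
a = sample_action(s_implicit, pi_star, n_actions=2, eta=0.0, rng=rng)
e = 1 - f_acc(s_real, a)                    # error indicator e_i
print(a, e)  # -> 0 1: the actor approaches, and errs, since the object is red
```

Note how a systematic blind-spot error arises here even with zero execution noise, which is exactly the separation the inference procedure must recover.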
4 GEM Inference
----------------
We use techniques from Bayesian inference to infer latent variables b and η given demonstration data D: P(b,η|D). In domains with small state spaces, where exact inference is possible, variable elimination is used to eliminate each variable. Based on this approach, the following equations, included below, are used to infer the two latent variables. The sum is computed directly using the final equation to obtain the posterior distribution:
$$P(b, \eta \mid D) = \frac{P(D \mid b, \eta)\, P(b, \eta)}{P(D)}$$

$$= \frac{P(b)\,P(\eta) \prod_i \sum_{o_i} P(o_i \mid s_i^{\mathrm{real}}, b) \sum_{s_i} P(s_i \mid o_i)\, \pi_s^*(a_i \mid s_i, \eta)}{\sum_{b, \eta} P(b)\,P(\eta) \prod_i \sum_{o_i} P(o_i \mid s_i^{\mathrm{real}}, b) \sum_{s_i} P(s_i \mid o_i)\, \pi_s^*(a_i \mid s_i, \eta)}$$
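For intuition, the posterior P(b, η | D) can be computed by direct enumeration in a tiny toy domain. The sketch below assumes a one-feature world (object color), a deterministic implicit-state model in which a color-blind actor assumes green, and a uniform prior over (b, η) – all simplifications of the general model:

```python
import itertools

states  = ["green", "red"]
actions = [0, 1]                      # 0: approach, 1: avoid
pi_opt  = {"green": 0, "red": 1}

def p_action(a, s, eta):
    # Optimal action with prob 1 - eta; uniform random otherwise.
    return eta / len(actions) + (1 - eta) * (pi_opt[s] == a)

def implicit_dist(s_real, b):
    # b = 0: the actor sees the true state; b = 1: color is a blind spot
    # and (in this toy) the actor implicitly assumes the object is green.
    if b == 0:
        return {s: float(s == s_real) for s in states}
    return {"green": 1.0, "red": 0.0}

def likelihood(data, b, eta):
    # Product over datapoints of the sum over implicit states s_i.
    lik = 1.0
    for s_real, a in data:
        p_s = implicit_dist(s_real, b)
        lik *= sum(p_s[s] * p_action(a, s, eta) for s in states)
    return lik

# A color-blind actor approaches every object, including red ones.
data = [("green", 0), ("red", 0), ("red", 0), ("green", 0)]
grid = list(itertools.product([0, 1], [0.05, 0.4]))   # candidate (b, eta)
post = {k: likelihood(data, *k) for k in grid}        # uniform prior
Z = sum(post.values())
post = {k: v / Z for k, v in post.items()}
print(max(post, key=post.get))  # -> (1, 0.05): blind spot, low noise
```

Consistent with the posterior formula, the sum over the implicit state is taken inside each per-datapoint factor before the product over i.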
For more complex domains, approximate inference is used. In collapsed Gibbs sampling, as shown in Algorithm [1](#alg1 "Algorithm 1 ‣ 4 GEM Inference ‣ A Bayesian Approach to Identifying Representational Errors"), we alternate between sampling the blind spot vector and the noise vector given all other variables. The variables oi and si are collapsed because they are highly correlated with b. Sampling one given the other variables results in very little variation among the sampled values. Thus, oi and si can be marginalized out so that only b is sampled. Given the value of b, η can be sampled, again marginalizing out the other two variables. After this sampling process, the probability of b and η can be computed based on the sampled data, which gives an estimate of the posterior distribution.
Given data D={sireal,ai,ei}, samples S=[]
b∼P(b) {Initialize blind spot vector}
η∼P(η) {Initialize noise parameter}
for num∈[1,...,N] do
  b∼P(b|D,η) {Sample blind spot given rest}
  η∼P(η|D,b) {Sample noise given rest}
  S+=(b,η) {Include sample}
end for
P(b,η)=n(b,η)/|S| {Estimate probabilities from sample counts}
Algorithm 1: Collapsed Gibbs Sampling
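The alternating structure of Algorithm 1 can be sketched generically as below; the two conditional samplers are passed in as callables, since their form (with oi and si marginalized out) is domain-specific:

```python
import numpy as np

def collapsed_gibbs(data, sample_b_given, sample_eta_given,
                    init_b, init_eta, n_iter=1000, burn_in=100, rng=None):
    """Alternate b ~ P(b | D, eta) and eta ~ P(eta | D, b); o_i and s_i
    are assumed marginalized out inside the two conditional samplers."""
    rng = rng or np.random.default_rng()
    b, eta = init_b, init_eta
    samples = []
    for t in range(n_iter):
        b = sample_b_given(data, eta, rng)
        eta = sample_eta_given(data, b, rng)
        if t >= burn_in:                 # discard burn-in iterations
            samples.append((b, eta))
    counts = {}                          # empirical posterior over (b, eta)
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    return {k: v / len(samples) for k, v in counts.items()}

# Degenerate samplers for illustration: the chain collapses immediately.
post = collapsed_gibbs(data=[],
                       sample_b_given=lambda d, e, r: 1,
                       sample_eta_given=lambda d, b, r: 0.1,
                       init_b=0, init_eta=0.4, n_iter=200, burn_in=50)
print(post)  # -> {(1, 0.1): 1.0}
```

The degenerate samplers in the usage example always return the same values, so the empirical posterior collapses onto a single (b, η) pair; real samplers would draw from P(b|D,η) and P(η|D,b).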
5 Experiments
--------------




Figure 3: Estimated blind spots given demonstration data. Vector (0,0,1) means the object color feature is a blind spot for the actor.
We evaluated our approach on two domains. The first is a gridworld environment, where we trained an RL agent that could not distinguish between differently colored objects. Our approach enabled inference of these blind spots. The second domain is a kitchen task, in which real human users were instructed to study a menu of several dishes and prepare these dishes from memory. Salt and sugar were purposefully made indistinguishable from one another, which resulted in consistent systematic errors. Further, due to the difficulty of the task, people made other mistakes, such as forgetting to include some of the ingredients in a given dish. Our findings indicate that our approach can infer representational human errors in this domain, which can inform the redesign of the kitchen environment to provide a more accurate view of the world. We discuss details of our experiments below.
### 5.1 Gridworld domain
We first conducted experiments in a gridworld domain: specifically, a 10x10 grid with one object, colored either green or red, placed randomly in one of the 100 locations. An agent was tasked with collecting green objects and avoiding red objects. The state was represented as: [Δx,Δy,c], denoting the x-distance from the object, the y-distance from the object, and the object color respectively. We simulated a color-blind agent that could not tell the difference between green and red. We included noise in the policy by randomly sampling actions with 10% probability.
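A hypothetical sketch of how such demonstration data might be generated; the movement convention and the "assume green" implicit state are our own illustrative choices, not the trained agent’s actual policy:

```python
import numpy as np

rng = np.random.default_rng(42)
ACTIONS = ["up", "down", "left", "right"]
AWAY = {"right": "left", "left": "right", "up": "down", "down": "up"}

def optimal_action(dx, dy, color):
    # Toy convention: move toward green objects, away from red ones.
    toward = ("right" if dx > 0 else "left") if dx != 0 else \
             ("up" if dy > 0 else "down")
    return toward if color == "green" else AWAY[toward]

def demo_step(dx, dy, color, noise=0.1):
    # The color-blind actor implicitly assumes every object is green.
    a = optimal_action(dx, dy, "green")
    if rng.random() < noise:                      # execution noise
        a = ACTIONS[rng.integers(len(ACTIONS))]
    e = int(a != optimal_action(dx, dy, color))   # acceptable = optimal here
    return ((dx, dy, color), a, e)

print(demo_step(3, 0, "red", noise=0.0))   # -> ((3, 0, 'red'), 'right', 1)
```

Repeating `demo_step` over random object placements yields the state-action-error tuples that GEM consumes.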
#### Our approach inferred blind spots
Our model inferred the blind spot vector given demonstration data. We generated data from an RL agent that could not observe color and implicitly assumed that any given object was green. We ran variable elimination to estimate P(b,η|D), averaged over 100 runs. The first plot in Figure [3](#S5.F3 "Figure 3 ‣ 5 Experiments ‣ A Bayesian Approach to Identifying Representational Errors") shows how the KL-divergence between the estimated distribution over b and the true distribution varies with the size of the demonstration data. The estimate of b approached the true vector as the amount of data increased, as shown by the decreasing KL-divergence. The histogram depicts the estimated probabilities for each blind spot vector when the data consisted of 100 datapoints. The model correctly predicted the true vector (0,0,1) with 70% probability; in other words, it recognized that the agent likely observed the x- and y-distances to the object but not the object’s color.
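The KL-divergence reported here is the standard one between the true (one-hot) blind-spot distribution and the model’s estimate; a minimal implementation with smoothing:

```python
import numpy as np

def kl_divergence(p_true, p_est, eps=1e-12):
    # KL(p_true || p_est), with smoothing so log(0) never occurs.
    p = np.asarray(p_true, dtype=float) + eps
    q = np.asarray(p_est, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# True blind spot is one vector out of the 2^3 = 8 binary possibilities;
# here the estimate puts 0.7 on it, matching the 70% figure reported above.
p_true = [0, 0, 0, 0, 0, 0, 0, 1.0]
p_est  = [0.05, 0.05, 0.05, 0.05, 0.03, 0.03, 0.04, 0.70]
print(round(kl_divergence(p_true, p_est), 3))  # -> 0.357
```

As more data concentrates the estimate on the true vector, this quantity falls toward zero, which is the trend the first plot shows.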
#### Noise was more challenging to estimate
Our model had more difficulty estimating the noise parameter than inferring the blind spots. In this domain, execution noise ranged from 1% to 40% in 5% intervals. The third plot in Figure [3](#S5.F3 "Figure 3 ‣ 5 Experiments ‣ A Bayesian Approach to Identifying Representational Errors") depicts the KL-divergence between the estimated η and the true distribution. The true agent acted with 10% noise, but our approach inferred 40% noise as the most likely value, because a higher (but incorrect) amount of execution noise better explains the errors in the data. In other words, 40% noise allows for more randomness in action decisions than 10% does, so the model can more easily generate the demonstration data. This suggests the need for regularization and more informative priors for noise estimation, potentially based on domain expertise. It also raises the question of whether adding the noise parameter is useful if the model cannot estimate it precisely.
#### Modeling noise improved blind spot inference
We next compared a version of our model without an explicit noise parameter (we included a fixed η=0.01 deviation to avoid probabilities of 0) to our original model that estimates η, averaged over 100 runs. The fixed-noise baseline is included to analyze the benefit of including noise in the model and inferring it. Figure [4](#S5.F4 "Figure 4 ‣ Modeling noise improved blind spot inference ‣ 5.1 Gridworld domain ‣ 5 Experiments ‣ A Bayesian Approach to Identifying Representational Errors") shows that when the agent acted with a high amount of noise (30%), the model without η attributed the noisy actions to confused estimates for the blind spots. For example, if the agent’s actions did not match the optimal for a large percentage of states, this led the observer to think that the agent had additional blind spots.
The model that inferred η was able to explain much of this noise and recover the true blind spot vector. This shows that while it is difficult to exactly estimate this parameter, inferring η provides greater flexibility for the model, resulting in better blind spot estimates. In order to improve the estimate of η, more informative priors for execution noise can be incorporated.

Figure 4: Adding the noise parameter resulted in more accurate estimates of blind spots in the presence of noisy demonstration data.
#### Estimated implicit state representation captured agent assumptions about missing features
Another interesting insight from our model is the most likely estimate of si for a given data point, which provides more information than just recovering the actor’s blind spots because it estimates the actor’s assumption about missing observational features. For example, blind spots might indicate that the actor does not observe color, while the implicit state si tells us the actor is likely assuming the color to be green.
The model can be used to compute P(si|D), a distribution over possible si values given the data. The highest-probability si captures which assumptions the agent might be making about the features it does not observe. As Figure [5](#S5.F5 "Figure 5 ‣ Estimated implicit state representation captured agent assumptions about missing features ‣ 5.1 Gridworld domain ‣ 5 Experiments ‣ A Bayesian Approach to Identifying Representational Errors") indicates, our model estimated that, with about 80% probability, the agent’s actions matched an optimal policy that assumes the object is green. Note that this could mean either that the agent assigned a probability each time and made decisions under uncertainty, or that the agent assumed green eight out of ten times and red the remaining two; our model does not distinguish between the two cases. In this way, our generative model allows for a more in-depth analysis of an actor’s decision-making process.

Figure 5: The most likely implicit state the agent may be assuming in order to make action decisions.
### 5.2 Kitchen domain
The second evaluation was on a kitchen domain. For this application, real human users were tasked with preparing dishes in a mock kitchen environment. They were given two minutes to study a menu of 3 dishes and then made 25 dishes from memory based on randomly generated orders from the menu, with a 1-minute refresher of the menu every 5 dishes. The task of memorizing and quickly making dishes was quite challenging, so we expected users to make many mistakes, some because they did not know the identity of certain ingredients (blind spots) and others because of limited memory or carelessness (execution noise). To create a representational blind spot in the task, we intentionally included salt and sugar in recipes without disambiguating them in the kitchen. From this data collection, we obtained a total of 2,420 state-action-error tuples. The proposed model was then used to identify the cause of human errors.
#### Data included representation and memory errors
The state for this task was constructed with the following features: 1 feature representing which of the three dishes the participant was currently making, 7 binary features representing whether each of the ingredients necessary for the given dish had been included, and 14 features denoting which ingredient was present at each of the 14 locations throughout the kitchen. The action at each time step was to either select one of the 14 ingredients or to “serve” the dish once the person thought it was complete. If the selected ingredient was part of the specified dish and had not yet been used, the participant’s action was defined as acceptable. The participant was considered to have made an error if she included an ingredient not meant for the given dish or served a dish with missing ingredients.
We modelled the human’s blind spot vector as a binary vector with the same size as the state, denoting whether the human could observe each state feature. Because the space of blind spot vectors was huge, we restricted the human’s possible blind spots to include only the features denoting which ingredient was present at each kitchen location and allowed up to three ingredients to be unobservable to the person. The implicit state si represented the human’s estimate of the true world state and was mainly used to predict which ingredients the human assumed were present at each location. Because the state space was larger for this domain, we used collapsed Gibbs sampling for inference (Algorithm [1](#alg1 "Algorithm 1 ‣ 4 GEM Inference ‣ A Bayesian Approach to Identifying Representational Errors")).
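Restricting the hypothesis space in this way keeps inference tractable; a sketch of the enumeration, where the feature indices are hypothetical (assuming the 14 location features occupy the last 14 slots of the 22-dimensional state):

```python
from itertools import combinations

def candidate_blind_spots(n_features=22, location_idx=range(8, 22), max_spots=3):
    """Enumerate binary blind-spot vectors in which only the 14
    ingredient-location features may be unobservable, with at most
    `max_spots` of them set. The index layout (1 dish feature, 7
    ingredient flags, then 14 locations) is a hypothetical choice."""
    for k in range(max_spots + 1):
        for subset in combinations(location_idx, k):
            b = [0] * n_features
            for j in subset:
                b[j] = 1
            yield tuple(b)

cands = list(candidate_blind_spots())
print(len(cands))  # C(14,0)+C(14,1)+C(14,2)+C(14,3) = 1+14+91+364 = 470
```

With 470 candidates instead of 2^22, the Gibbs sampler’s blind-spot conditional remains cheap to evaluate.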
To evaluate our model, we computed an estimate of the ground truth based on our task setup. There were two ground-truth blind spots, the location of the salt and the location of the sugar, both of which were unobservable to the person. We obtained the noise ground-truth value by removing the salt/sugar errors and taking the percentage of the remaining erroneous actions. While these were the aggregate ground truths, each participant had varying levels of noise during task execution, making it difficult to truly recover this blind spot vector. We observed a total error rate of approximately 24% in the demonstration data, with 7.23% attributed to blind spot (salt/sugar) mistakes and 16.78% resulting from other forms of noise. With so much noise in the data, identifying the true blind spot vector was challenging.

Figure 6: Performance of our approach on predicting human blind spots in the kitchen domain.
#### Our approach correctly identified expected blind spots in the kitchen task
In Figure [6](#S5.F6 "Figure 6 ‣ Data included representation and memory errors ‣ 5.2 Kitchen domain ‣ 5 Experiments ‣ A Bayesian Approach to Identifying Representational Errors"), we plot our model’s performance when predicting blind spots as the demonstration data budget increased. We ran the model for each participant individually, and the input was the first n datapoints in a given participant’s data (e.g., 60 datapoints along the x-axis indicate that the model used the first 60 (sireal,ai,ei) tuples for one participant in order to infer blind spots).
We measured the accuracy of predicting the true blind spot vector – an extremely challenging task. Selecting a vector uniformly at random would result in an accuracy of 0.002, due to the size of the space of possible vectors. Our model achieved approximately 30% accuracy in predicting the ground-truth blind spot vector b. Recovering the most likely blind spot vector requires separating out the noise in the real data to determine the consistent mistakes a participant makes due to a partial view of the world.
To further analyze the benefits of our model, we compared it, inferring both the blind spot and noise parameters, to a baseline that only infers blind spots with a fixed, constant noise value. In one condition, we set the noise to a minimal value of η=0.01 (1% noise) – similar to that present in the gridworld experiments. This scenario resulted in very poor performance, because the model consistently attributed noisy human actions to additional blind spots, lacking any other way to explain the randomness in human actions. We also included a baseline with a fixed 20% noise level, which is similar to an oracle that approximately knows the true noise level. Our GEM model was able to automatically infer the two ground-truth blind spots with higher accuracy than the fixed η=0.01 baseline and achieved performance close to the oracle variant. In real-world applications, the true amount of execution noise is unlikely to be known, so it is best to infer both variables automatically, as our model does. This also allows the model to learn personalized noise values for each user.

Figure 7: Estimate of participants’ implicit state of the world on the kitchen domain.
#### Our model was able to capture participants’ implicit assumptions
Since our model includes latent variables intended to explain human actions, we can also query for other quantities, such as the human’s most likely implicit state of the world si. As in the gridworld task, this variable can be inferred by calculating P(si|D); marginalization then gives the probability of particular values for a specific feature. Figure [7](#S5.F7 "Figure 7 ‣ Our approach correctly identified expected blind spots in the kitchen task ‣ 5.2 Kitchen domain ‣ 5 Experiments ‣ A Bayesian Approach to Identifying Representational Errors") plots a distribution over possible ingredients that the human assumed were located in the true salt location in the kitchen. The model predicted sugar as the most likely ingredient, matching our intuitive expectation. Thus, our model can serve as a tool for better understanding human errors and the assumptions leading to those errors.
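Given posterior samples of si (e.g., from Algorithm 1), this marginalization is a simple weighted count; the sample values and weights below are made up for illustration:

```python
from collections import Counter

def marginal_feature(samples, weights, feature_idx):
    """Marginalize weighted implicit-state samples down to one feature,
    e.g. which ingredient the actor assumes sits at the salt location."""
    dist = Counter()
    for s, w in zip(samples, weights):
        dist[s[feature_idx]] += w
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

# Made-up posterior samples: (current dish, ingredient assumed at salt slot).
samples = [("dish1", "sugar"), ("dish2", "sugar"), ("dish1", "flour")]
weights = [0.5, 0.25, 0.25]
print(marginal_feature(samples, weights, feature_idx=1))
# -> {'sugar': 0.75, 'flour': 0.25}
```

A distribution of this shape is what Figure 7 visualizes for the salt location.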
6 Discussion
-------------
These findings indicate that our approach can infer blind spots on the two tested domains. Here, we discuss limitations of the approach and directions for next steps. First, this work assumes that an observer has access to the representation of the world and the optimal/acceptable policy. While this is realized in several real-world applications (e.g., an AI pilot is trained in a simulator with limited observability, while a human observes all features of the simulation and can identify weaknesses in the pilot’s performance), our approach can be combined with other methods for addressing flawed representations to handle a larger range of problems Ramakrishnan et al. ([2018](#bib.bib67 "Discovering blind spots in reinforcement learning")); Unhelkar and Shah ([2018](#bib.bib27 "Learning models of sequential decision-making without complete state specification using bayesian nonparametric inference and active querying")).
Second, the GEM model provides one way of inferring representational errors. However, the model forces us to estimate the actor’s view of the world in terms of the observer’s state representation. Explaining the actor’s behavior through this intermediate representation may not be appropriate in some settings; for example, if the actor cannot perceive a given color, she may not be filling it in with some implicit color, but instead acting directly on the flawed observation. To model this, we present a variation of our graphical model (Figure [8](#S6.F8 "Figure 8 ‣ 6 Discussion ‣ A Bayesian Approach to Identifying Representational Errors")) that estimates the actor’s policy directly with respect to the representation of the observation πo:O→A. The computational cost of performing inference with this model is much higher (|b|+N|o|+N|s| parameters for the original vs. |b|+N|o|+|πo| parameters for the variation, where |πo| can be large). However, it can express a richer and more complex model of decision-making and merits exploration in future work.

Figure 8: A variation of the original model that directly estimates the actor’s policy in terms of the actor’s observation.
In both of these models, since there are many factors that can affect action decisions, disentangling the different error sources from one another can be challenging. For example, if significant noise exists within the demonstration data, it may be difficult to determine whether there is a systematic blind spot with limited noise or no blind spot with very high noise. As there are multiple possible explanations for the data, the problem can be ill-posed in certain applications. In such cases, it is important to include strong priors for different variables within the model to increase the chance of identifying the most likely explanation for the errors. It could also be useful to include additional causes for errors in the model (although this could make disentangling even more difficult). For example, the random noise factor in this model encompasses errors resulting from various factors, including carelessness, limited training, an incorrect model of the world, and others. Separating errors that occur because a person does not know the task objective from those that are truly random can be useful for enabling more informed fixing of these errors.
Next, the model depends upon the representation of each variable. For example, in the kitchen domain, our model learned that people’s perceptions of the salt and sugar were incorrect. Specifically, they confused the locations because the ingredients were visually indistinguishable. To learn this, the representation of the state, blind spot, and observation had to include the locations of the ingredients. One possible extension of the model is to have the blind spot depend on the true state (e.g., for images, it may be better for the blind spot to be state-dependent, which would represent the actor’s inability to see a region of that particular image). Another example is relaxing binary blind spots to be continuous, where each variable can take a value between [0,1] (e.g., a dimly lit region might not entirely be visible to a person).
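To make the generative process concrete, here is a minimal sketch (names and the uniform fill-in assumption are mine; this is not the paper's actual implementation): the actor sees the state through a binary blind-spot mask, fills in hidden features with an estimate, then acts from the optimal policy subject to execution noise.

```python
import random

def sample_action(state, blind_spot, policy, noise, states, actions):
    """One draw from a GEM-style generative process (illustrative only).

    state      -- tuple of feature values in the observer's representation
    blind_spot -- tuple of 0/1 flags; 1 means the feature is hidden
    policy     -- dict mapping each full state tuple to the optimal action
    noise      -- probability of deviating from the policy uniformly at random
    """
    # The actor observes only the visible features.
    observation = tuple(None if hidden else feat
                        for feat, hidden in zip(state, blind_spot))
    # The actor implicitly "fills in" hidden features; here we assume a
    # uniform fill-in over states consistent with the observation.
    consistent = [s for s in states
                  if all(o is None or o == f for o, f in zip(observation, s))]
    believed_state = random.choice(consistent)
    # Execution noise: deviate from the optimal policy with probability noise.
    if random.random() < noise:
        return random.choice(actions)
    return policy[believed_state]
```

Inference then inverts this process: given demonstrated actions, it asks which blind-spot mask and noise level best explain the deviations from optimal behavior.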
Depending on the domain, our model is also flexible enough to make changes to conditional independencies and observed variables. For example, in this work, the observer had access to whether the actor’s actions led to an error during the task, but perhaps the error is not directly available and instead, the observer has access to a noisy observation of the error. Another possibility is that the error is only observed at the end of a sequence of actions, which would require propagating error signals to earlier decisions.
Understanding errors can ultimately lead to targeted refinement of an actor’s representation and policy. For representational deficiencies, we can add new sensors to enable the actor to observe what they originally could not. Execution errors can be addressed through techniques, such as training procedures, attention support, and reminders. An interesting future direction is to extend this model to evaluate the benefit of different improvement strategies for assisting the actor. For example, we can select the intervention that results in the largest improvement in task performance. This iterative process of identifying and fixing errors can improve decision-making on complex tasks.
7 Conclusion
-------------
In this work, we present a generative model GEM for identifying errors of an actor (e.g., agent, human, robot) caused by representational limitations. Our approach models and infers the actor’s estimate of the true state in order to explain her action decisions. Bayesian inference is used to separate blind spots, or representational deficiencies, from execution noise, which represents how often actions deviate from the optimal policy. We demonstrate through experiments on a gridworld domain with a trained RL agent and a kitchen task with real user data that we are able to identify blind spots. We additionally use the generative model to recover the actor’s view of the world, which provides more clarity into why certain actions were taken. In future work, we plan to augment the model with additional components to better express complex decision-making processes and use the findings from this model to reduce errors and improve safety on real-world tasks.
|
baa1fee0-c039-43ee-b775-b1e66620ff69
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Exporting Facebook Comments, Again
I want comments on my social media crossposts to show up on my blog as a comment section, and mostly this works well: modern systems (Mastodon, Bluesky, LessWrong, etc.) provide APIs where you can load the replies associated with a post. On the other hand, older systems like Facebook are more locked down: they want to keep your content inside the platform as part of their economic moat.
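For the systems with real APIs this is a few lines of code. A sketch using Mastodon's public context endpoint (instance and status ID are placeholders; error handling omitted):

```python
import json
import urllib.request

def fetch_context(instance, status_id):
    """GET the public context (ancestors and replies) of a Mastodon status
    via the /api/v1/statuses/:id/context endpoint; no auth is needed for
    public posts."""
    url = f"https://{instance}/api/v1/statuses/{status_id}/context"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def extract_replies(context):
    """Pull (author, html_content) pairs from the 'descendants' reply list."""
    return [(status["account"]["acct"], status["content"])
            for status in context.get("descendants", [])]
```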
Still, Facebook will show all the comments on a post to anyone who visits it, even if logged out. You have to dismiss a popup and click "show more" and "see replies" a few times, but it's all public. At times I've written scripts to export the comments, but they're quite brittle: Facebook doesn't design their pages to be easy to scrape, and so my code has relied on incidental things that only happen to work.
Even though this is not a permanent solution, I've had another go at writing a comment exporter (code). It's not as thorough as past versions: I couldn't figure out an easy way to get the timestamps or links to the comments on Facebook, and I've left both out. I also had to switch my opt-out from keying on user ID to user name, which is less robust. But it works! I've gone back through June 2019, fetching comments for any posts where I was missing them.
|
a33de67c-fbba-40c1-803b-111f83db86eb
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Algorithmic formalization of FDT?
I occasionally see a question like "what would FDT recommend in ....?" and I am puzzled that there is no formal algorithm to answer it. Instead humans ask other humans, and the answers are often different and subject to interpretation. This is rather disconcerting. For comparison, you don't ask a human what, say, a chessbot would do in a certain situation, you just run the bot. Similarly, it would be nice to have an "FDTbot" one can feed a decision theory problem to. Does something like that exist? If not, what are the obstacles?
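For a fully specified toy problem, the kind of thing I have in mind might look like this (my own toy encoding of Newcomb's problem, not a real FDT implementation): evaluate whole policies rather than acts, letting the predictor's behavior depend on the policy under evaluation, and pick the best policy.

```python
# Toy "FDT-style" policy selection for Newcomb's problem. The key move:
# score whole policies, with the (perfect) predictor's box contents
# depending on the policy being scored, then choose the best policy.
def payoff(policy):
    box_a = 1_000                                    # transparent box, always $1k
    box_b = 1_000_000 if policy == "one-box" else 0  # predictor fills B iff you one-box
    return box_b if policy == "one-box" else box_a + box_b

best = max(["one-box", "two-box"], key=payoff)
# payoff("one-box") == 1_000_000, payoff("two-box") == 1_000, so best == "one-box"
```

The hard part, of course, is specifying the world model (including the predictor's dependence on the policy) for arbitrary problems, which is exactly what the informal human-to-human question-answering papers over.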
|
9b5d1860-dc94-4164-8355-e92a35bf4a88
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Can Intelligence Explode?
1 Introduction
---------------
The technological singularity is a hypothetical scenario
in which self-accelerating technological advances cause
infinite progress in finite time.
The most popular scenarios are an intelligence explosion
[[Goo65](#bib.bibx13)] or a speed explosion [[Yud96](#bib.bibx38)] or a
combination of both [[Cha10](#bib.bibx5)].
This quite plausibly is accompanied by a radically changing
society, which will become incomprehensible to us current
humans close to and in particular at or beyond the singularity.
Still some general aspects may be predictable.
Already the invention of the first four-function mechanical
calculator one-and-a-half centuries ago [[Tho47](#bib.bibx33)]
inspired dreams of self-amplifying technology. With the advent
of general purpose computers and the field of artificial
intelligence half-a-century ago, some mathematicians, such as
Stanislaw Ulam [[Ula58](#bib.bibx34)], I.J. Good [[Goo65](#bib.bibx13)], Ray
Solomonoff [[Sol85](#bib.bibx32)], and Vernor Vinge
[[Vin93](#bib.bibx35)] engaged in singularity thoughts.
But it was only in the last decade that the singularity idea
achieved wide-spread popularity. Ray Kurzweil popularized the
idea in two books [[Kur99](#bib.bibx20), [Kur05](#bib.bibx21)], and the
Internet helped in the formation of an initially small
community discussing this idea. There are now annual
Singularity Summits approaching a thousand participants per
year, and even a Singularity Institute.
The singularity euphoria seems in part to have been triggered
by the belief that intelligent machines that possess general
intelligence on a human-level or beyond can be built within our
life time, but it is hard to tell what is cause and effect. For
instance, there is now a new conference series on Artificial
General Intelligence (AGI) as well as some whole-brain
emulation projects like Blue Brain [[dGSGR10](#bib.bibx10), [GLA+10](#bib.bibx12)].
A loosely related set of communities which are increasing in
momentum are the “Immortalists” whose goal is to extend the
human life-span, ideally indefinitely. Immortality and
life-extension organizations are sprouting like mushrooms:
e.g. the Immortality Institute and the Extropy Institute, the Humanity+
Association, and the Alcor Life Extension, Acceleration
Studies, Life Extension, Maximum Life, and Methuselah
Foundations.
There are many different potential paths toward a singularity.
Most of them seem to be based on software intelligence on
increasingly powerful hardware. Still this leaves many options,
the major ones being mind uploading (via brain scan) and subsequent improvement, knowledge-based reasoning and planning software (traditional AI research), artificial agents that learn from experience (the machine learning approach), self-evolving intelligent systems (genetic algorithms and artificial life approach), and the awakening of the Internet (or digital Gaia scenario).
Physical and biological limitations likely do not allow
singularities based on (non-software) physical brain
enhancement technologies such as drugs and genetic engineering.
Although many considerations in this article should be
independent of the realized path, I will assume a virtual
software society consisting of interacting rational agents
whose intelligence is high enough to construct the next
generation of more intelligent rational agents. Indeed, one of
the goals of the article is to discuss what (super)intelligence
and rationality could mean in this setup. For concreteness, the
reader may want envisage an initial virtual world like Second
Life that is similar to our current real world and inhabited by
human mind uploads.
Much has been written about the singularity and David Chalmers’
article [[Cha10](#bib.bibx5)] covers quite wide ground. I
essentially agree with all his statements, analysis, and also
share his personal opinions and beliefs. Most of his
conclusions I will adopt without repeating his arguments. The
motivation of my article is to augment Chalmers’ and to discuss
some issues not addressed by him,
in particular what it could mean for intelligence to explode.
This is less obvious than it might appear, and requires a more
careful treatment of what intelligence actually is.
Chalmers cleverly circumvents a proper discussion or
definition of intelligence by arguing (a) there is something like intelligence,
(b) there are many cognitive capacities correlated with intelligence, (c) these capacities might explode, therefore (d) intelligence might amplify and explode.
While I mostly agree with this analysis, it does not tell us
what a society of ultra-intelligent beings might look like. For
instance, if a hyper-advanced virtual world looks like random
noise for humans watching them from the “outside”, what does
it mean for intelligence to explode for an outside observer?
Conversely, can an explosion actually be felt from the
“inside” if everything is sped up uniformly? If neither
insiders nor outsiders experience an intelligence explosion,
has one actually happened?
The paper is organized as follows:
Section [2](#S2 "2 Will there be a Singularity ‣ Can Intelligence Explode?") briefly recapitulates the most
popular arguments why to expect a singularity and why “the
singularity is near” [[Kur05](#bib.bibx21)], obstacles towards a
singularity, and which choices we have.
Section [3](#S3 "3 The Singularity from the Outside ‣ Can Intelligence Explode?") describes how an outside observer
who does not participate in the singularity might experience
the singularity and the consequences he faces. This will depend
on whether the singularity is directed inwards or outwards.
Section [4](#S4 "4 The Singularity from the Inside ‣ Can Intelligence Explode?") investigates what a participant in
the singularity will experience, which is quite different from
an outsider and depends on details of the virtual society; in
particular how resources are distributed.
Section [5](#S5 "5 Speed versus Intelligence Explosion ‣ Can Intelligence Explode?") takes a closer look at what
actually explodes when computing power is increased without
limits in finite real time. While by definition there is a
speed explosion, who, if anyone at all, perceives an
intelligence explosion/singularity depends on what is sped up.
In order to determine whether anyone perceives an intelligence
explosion, it is necessary to clarify what intelligence
actually is and what super-intelligences might do, which is
done in Section [6](#S6 "6 What is Intelligence ‣ Can Intelligence Explode?").
The considered formal theory of rational intelligence allows
investigating a wide range of questions about
super-intelligences, in principle rigorously mathematically.
Section [7](#S7 "7 Is Intelligence Unlimited or Bounded ‣ Can Intelligence Explode?") elucidates the possibility that
intelligence might be upper bounded, and whether this would
prevent an intelligence singularity.
Section [8](#S8 "8 Singularitarian Intelligences ‣ Can Intelligence Explode?") explains how a society right at the
edge of an intelligence singularity might be theoretically studied
with current scientific tools.
Even when setting up a virtual society in our image, there are
likely some immediate differences, e.g. copying and modifying
virtual structures, including virtual life, should be very easy.
Section [9](#S9 "9 Diversity Explosion and the Value of a Virtual Life ‣ Can Intelligence Explode?") shows that this will have immediate
(i.e. way before the singularity) consequences on the
diversity and value of life.
Section [10](#S10 "10 Personal Remarks ‣ Can Intelligence Explode?") contains some personal remarks and
Section [11](#S11 "11 Conclusions ‣ Can Intelligence Explode?") draws some conclusions.
I will use the following terminology throughout this article.
Some terms are taken over or refined from other authors and
some are new:
* comp = computational resources
* singularity = infinite change of an observable quantity in finite time
* intelligence explosion = rapidly increasing intelligence far beyond human level
* intelligence singularity = infinite intelligence in finite time
* speed explosion/singularity = rapid/infinite increase of computational resources
* outsider = biological = non-accelerated real human watching a singularity
* insider = virtual = software intelligence participating in a singularity
* computronium = theoretically best possible computer per unit of matter [[Bre65](#bib.bibx4)]
* real/true intelligence = what we intuitively would regard as intelligence
* numerical intelligence = numerical measure of intelligence like IQ score
* AI = artificial intelligence (used generically in different ways)
* AGI = artificial general intelligence = general human-level intelligence or beyond.
* super-intelligence = AI+ = super-human intelligence [[Cha10](#bib.bibx5)]
* hyper-intelligent = AI++ = incomprehensibly more intelligent than humans
* vorld = virtual world. A popular oxymoron is ‘virtual reality’
* virtual = software simulation in a computer.
I drop the qualifier ‘virtual’ if this does not cause any
confusion, e.g. when talking about a human in a vorld, I mean
of course a virtual human.
I will assume a strong/physical form of the Church-Turing
thesis that everything in nature can be calculated by a Turing
machine, i.e. our world including the human mind and body and
our environment are computable [[Deu97](#bib.bibx9), [RH11](#bib.bibx27)].
So in the following I will assume without further argument that
all physical processes we desire to virtualize are indeed
computational and can be simulated by a sufficiently powerful
(theoretical) computer.
This assumption simplifies many of the considerations to
follow, but is seldom essential, and could be lifted or
weakened.
2 Will there be a Singularity
------------------------------
The current generations Y and Z may finally realize the age-old
dream of creating systems with human-level intelligence or
beyond, a prospect that has revived interest in this endeavor. This
optimism is based on the belief that in 20–30 years the raw
computing power of a single computer will reach that of a human
brain and that software will not lag far behind.
This prediction is based on extrapolating Moore’s law, now
valid for 50 years, which implies that comp doubles every 1.5
years. As long as there is demand for more comp, Moore’s law
could continue to hold for many more decades before
computronium is reached.
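A doubling every 1.5 years compounds quickly; a toy calculation (the 20–30 year horizon is just the extrapolation window mentioned above):

```python
def moore_factor(years, doubling_time=1.5):
    """Growth factor in computing power after `years`, assuming one
    doubling every `doubling_time` years (Moore's law)."""
    return 2 ** (years / doubling_time)

# 20 years -> 2**(20/1.5), roughly 1e4; 30 years -> 2**20, about a million-fold
```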
Further, different estimates of the computational capacity of a
human brain consistently point towards 10^15…10^16
flop/s [[Kur05](#bib.bibx21)]: Counting of neurons and synapses,
extrapolating tiny-brain-part simulations, and comparing the
speech recognition capacities of computers to the auditory
cortex.
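A common back-of-envelope behind such figures (the specific numbers here are rough, commonly cited assumptions of mine, not taken from the cited estimates) multiplies neuron count, synapses per neuron, and average firing rate:

```python
# Rough, commonly cited figures (illustrative assumptions):
neurons = 1e11              # ~10^11 neurons in a human brain
synapses_per_neuron = 1e4   # ~10^4 synapses per neuron
firing_rate_hz = 10         # ~10 spikes per second on average
ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
# ops_per_second == 1e16, the upper end of the 10^15..10^16 flop/s range
```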
The most compelling argument for the emergence of a singularity
is based on Solomonoff’s law [[Sol85](#bib.bibx32)] which
Yudkowsky [[Yud96](#bib.bibx38)] succinctly describes as follows:
> “If computing speeds double every two years,
> what happens when computer-based AIs are doing the research?
>
> Computing speed doubles every two years.
> Computing speed doubles every two years of work.
> Computing speed doubles every two subjective years of work.
>
> Two years after Artificial Intelligences reach human
> equivalence, their speed doubles. One year later, their speed
> doubles again.
>
> Six months - three months - 1.5 months … Singularity.”
Interestingly, if this argument is valid, then Moore’s law in a
sense predicts its own break-down; not the usually anticipated
slow-down, but an enormous acceleration of progress when
measured in physical time.
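The argument is a geometric series: if each successive doubling takes half as much physical time as the previous one, infinitely many doublings fit into finite physical time. A toy computation of the limit:

```python
def time_to_singularity(first_doubling_years=2.0, ratio=0.5, doublings=60):
    """Total physical time consumed by `doublings` successive speed
    doublings, each taking `ratio` times as long as the previous one."""
    total, interval = 0.0, first_doubling_years
    for _ in range(doublings):
        total += interval
        interval *= ratio
    return total

# Partial sums approach the geometric limit 2/(1-0.5) = 4 years;
# after 60 doublings the total is already 4.0 to machine precision.
```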
The above acceleration would indeed not be the first time of an
enormous acceleration in growth. The economist Robin Hanson
argues that “Dramatic changes in the rate of economic growth
have occurred in the past because of some technological
advancement. Based on population growth, the economy doubled
every 250’000 years from the Paleolithic era until the
Neolithic Revolution. This new agricultural economy began to
double every 900 years, a remarkable increase. In the current
era, beginning with the Industrial Revolution, the world's
economic output doubles every fifteen years, sixty times faster
than during the agricultural era.” Given the increasing role
of computers in our economy, computers might soon dominate it,
locking the economic growth pattern to computing speed, which
would lead to a doubling of the economy every two (or more
precisely 1.5) years, another 10-fold increase. If the rise of
superhuman intelligences causes a similar revolution, argues
Hanson [[Han08](#bib.bibx14)], one could expect the virtual economy
to double on a monthly or possibly on a weekly basis. So the
technological singularity phenomenon would be the next and
possibly last growth acceleration.
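The quoted doubling times translate into continuous annual growth rates via rate = ln 2 / T; a quick check (era labels are mine) reproduces the "sixty times faster" figure:

```python
import math

def annual_growth_rate(doubling_time_years):
    """Continuous annual growth rate implied by a doubling time:
    rate = ln 2 / T."""
    return math.log(2) / doubling_time_years

# Era labels are mine; doubling times are the ones quoted above.
for era, T in [("foraging", 250_000), ("farming", 900),
               ("industrial", 15), ("machine economy", 1.5)]:
    print(f"{era}: doubling every {T} years -> "
          f"{100 * annual_growth_rate(T):.4f}% per year")
# industrial vs farming: (ln2/15)/(ln2/900) = 60, i.e. "sixty times faster"
```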
Ray Kurzweil is a master of producing exponential, double
exponential, and singular plots [[Kur05](#bib.bibx21)], but one has
to be wary of data selection, as Juergen Schmidhuber has
pointed out.
Chalmers [[Cha10](#bib.bibx5)] discusses various potential
obstacles for a singularity to emerge. He classifies them into
structural obstacles (limits in intelligence space, failure to
takeoff, diminishing returns, local maxima), manifestation
obstacles (disasters, disinclination, active prevention), and
correlation obstacles.
For instance, self-destruction or a natural catastrophe
might wipe out the human race [[BC08](#bib.bibx2)].
Also, the laws of physics will likely prevent a singularity in
the strict mathematical sense. While some physical theories in
isolation allow infinite computation in finite time (see Zeno
machines [[Wey27](#bib.bibx37)] and hypercomputation [[Cop02](#bib.bibx7)]
in general), modern physics raises severe barriers
[[Bre65](#bib.bibx4), [Bek03](#bib.bibx3), [Llo00](#bib.bibx24), [Aar05](#bib.bibx1)].
But even if so, today’s computers are so far away from these
limits, that converting our planet into computronium would
still result in a vastly different vorld, which is considered a
reasonable approximation to a true singularity. Of course,
engineering difficulties and many other obstructions may stop
the process well before this point, in which case the end
result may not account as a singularity but more as a phase
transition à la Hanson or even less spectacular.
Like Chalmers, I also believe that disinclination is the most
(but not very) likely defeater of a singularity. In the
remainder of this article I will assume absence of any such
defeaters, and will only discuss the structural obstacles
related to limits in intelligence space later.
The appearance of the first super-intelligences is usually
regarded as the ignition of the detonation cord towards the
singularity – the point of no return. But it might well be
that a singularity is already now unavoidable. Politically it
is very difficult (but not impossible) to resist technology or
market forces as e.g. the dragging discussions on climate
change vividly demonstrate, so it would be similarly difficult
to prevent AGI research and even more so to prevent the
development of faster computers. Whether we are before, at, or
beyond the point of no return is also philosophically intricate
as it depends on how much free will one attributes to people
and society;
like a spaceship close to the event horizon might in principle
escape a black hole but is doomed in practice due to limited
propulsion.
3 The Singularity from the Outside
-----------------------------------
Let us first view the singularity from the outside. What will
observers who do not participate in it “see”. How will it
affect them?
First, the hardware (computers) for increasing comp must be
manufactured somehow. As already today, this will be done by
(real) machines/robots in factories. Insiders will provide
blue-prints to produce better computers and better machines
that themselves produce better computers and better machines ad
infinitum at an accelerated pace. Later I will explain why
insiders desire more comp. Non-accelerated real human
(outsiders) will play a diminishing role in this process due to
their cognitive and speed limitations. Quickly they will only
be able to passively observe some massive but incomprehensible
transformation of matter going on.
Imagine an inward explosion, where a fixed amount of matter is
transformed into increasingly efficient computers until it
becomes computronium. The virtual society like a
well-functioning real society will likely evolve and progress,
or at least change. Soon the speed of their affairs will make
them beyond comprehension for the outsiders. For a while,
outsiders may be able to make records and analyze them in slow
motion with an increasing lag. Ultimately the outsiders’
recording technology will not be sufficient anymore, but some
coarse statistical or thermodynamical properties could still be
monitored, which besides other things may indicate an upcoming
physical singularity. I doubt that the outsiders will be able
to link what is going on with intelligence or a technological
singularity anymore.
Insiders may decide to interact with outsiders in slow motion
and feed them with pieces of information at the maximal
digestible rate, but even with direct brain-computer
interfaces, the cognitive capacity of a human brain is bounded
and cannot explode. A technologically augmented brain may
explode, but what would explode is the increasingly dominant
artificial part, rendering the biological brain eventually
superfluous — a gradual way of getting sucked into the inside
world. For this reason, also intelligence amplification by
human-computer interfaces are only temporarily viable before
they either break down or the extended human becomes
effectively virtual.
After a brief period, intelligent interaction between insiders
and outsiders becomes impossible. The inside process may from
the outside resemble a black hole watched from a safe distance,
and look like another interesting physical, rather than
societal, phenomenon.
This non-comprehensibility conclusion can be supported by an
information-theoretic argument: The characterization of our
society as an information society becomes even better, if not
perfect, for a virtual society. There is lots of motivation to
compress information (save memory, extract regularities, and
others), but it is well-known [[LV08](#bib.bibx25)] that maximally
compressed information is indistinguishable from random noise.
Also, if too much information is produced, it may actually
“collapse”. Here, I am not referring to the formation of
black holes [[Bek03](#bib.bibx3)], but to the fact that a
library that contains all possible books has zero information
content (cf. the Library of Babel). Maybe a society of
increasing intelligence will become increasingly
indistinguishable from noise when viewed from the outside.
Let us now consider outward explosion, where an increasing
amount of matter is transformed into computers of fixed
efficiency (fixed comp per unit time/space/energy). Outsiders
will soon get into resource competition with the expanding
computer world, and being inferior to the virtual
intelligences, probably only have the option to flee. This
might work for a while, but soon the expansion rate of the
virtual world should become so large, theoretically only
bounded by the speed of light, that escape becomes impossible,
ending or converting the outsiders’ existence.
So while an inward explosion is interesting, an outward
explosion will be a threat to outsiders. In both cases,
outsiders will observe a speedup of cognitive processes and
possibly an increase of intelligence up to a certain point. In
neither case will outsiders be able to witness a true
intelligence singularity.
Historically, mankind was always outward exploring; just in
recent times it has become more inward exploring. Now people
more and more explore virtual worlds rather than new real
worlds. There are two reasons for this. First, virtual worlds
can be designed as one sees fit and hence are arguably more
interesting, and second, outward expansion now means deep sea
or space, which is an expensive endeavor. Expansion usually
follows the way of least resistance.
Currently the technological explosion is both inward and
outward (more and faster computers). Their relative speed in
the future will depend on external constraints. Inward
explosion will stop when computronium is reached. Outward
explosion will stop when all accessible convertible matter has
been used up (all on earth, or in our galaxy, or in our
universe).
4 The Singularity from the Inside
----------------------------------
Let us now consider the singularity from the inside.
What will a participant experience?
Many things of course will depend on how the virtual world is
organized. It is plausible that various characteristics of our
current society will be incorporated, at least initially. Our
world consists of a very large number of individuals, who
possess some autonomy and freedom, and who interact with each
other and with their environment in cooperation and in
competition over resources and other things.
Let us assume a similar setup in a virtual world of intelligent
actors. The vorld might actually be quite close to our real
world. Imagine populating already existing virtual worlds like
Second Life or World of Warcraft with intelligent agents
simulating scans of human brains.
Consider first a vorld based on fixed computational resources.
As indicated, initially, the virtual society might be similar
to its real counter-part, if broadly understood. But some
things will be easier, such as duplicating (virtual) objects and
directed artificial evolution. Other things will be harder or
impossible, such as building faster virtual computers and
fancier gadgets reliant on them.
This will affect how the virtual society will value different
things (the value of virtual life and its implications will be
discussed later), but I would classify most of this as a
change, not unlike in the real world when discovering or
running out of some natural resource or adapting to new models
of society and politics.
Of course, the virtual society, like our real one, will also
develop: there will be new inventions, technologies, fashions,
interests, art, etc., all virtual, all software, of course, but
for the virtuals it will feel real.
If virtuals are isolated from the outside world and have
knowledge of their underlying computational processes, there
would be no quest for a virtual theory of everything
[[Hut10](#bib.bibx17)], since they would already know it.
The evolution of this vorld might include weak singularities in
the sense of sudden phase transitions or collapses of the
society, but an intelligence explosion with fixed comp, even
with algorithmic improvements seems implausible.
Consider now the case of a vorld with increasing comp. If extra
comp is used for speeding up the whole virtual world uniformly,
virtuals and their virtual environment alike, the inhabitants
would actually not be able to recognize this. If their
subjective thought processes will be sped up at the same rate
as their surroundings, nothing would change for them. The only
difference, provided virtuals have a window to the outside real
world, would be that the outside world slows down. If comp is
sped up hyperbolically, the subjectively infinite future of the
virtuals would fit into finite real time: For the virtuals, the
external universe would get slower and slower and ultimately
come to a halt. Also outsiders would appear slower (but not
dumber).
This speed-up/slow-down phenomenon is inverse compared to
flying into a black hole. An astronaut flying into a black hole
will pass the Schwarzschild radius and hit the singularity in
finite subjective time. For an outside observer, though, the
astronaut gets slower and slower and actually takes infinite
time to vanish behind the Schwarzschild radius.
If extra comp is exclusively used to expand the vorld and add
more virtuals, there is no individual speedup, and the bounded
individual comp forces intelligence to stay bounded, even with
algorithmic improvements. But larger societies can also evolve
faster (more inventions per real time unit), and if regarded as
a super-organism, there might be an intelligence explosion, but
not necessarily so: Ant colonies and bee hives seem more
intelligent than their individuals in isolation, but it is not
obvious how this scales to unbounded size. Also, there seems to
be no clear positive correlation between the number of
individuals involved in a decision process and the intelligence
of its outcome.
In any case, the virtuals as individuals will not experience an
intelligence explosion, even if there was one. The outsiders
would observe virtuals speeding up beyond comprehension and
would ultimately not recognize any further intelligence
explosion.
The scenarios considered in this and the last section are of
course only caricatures. An actual vorld will more likely
consist of a wide diversity of intelligences: faster and slower
ones, higher and lower ones, and a hierarchy of super-organisms
and sub-vorlds. The analysis becomes more complicated, but the
fundamental conclusion that an intelligence explosion might be
unobservable does not change.
5 Speed versus Intelligence Explosion
--------------------------------------
The comparison of the inside and outside view has revealed that
a speed explosion is not necessarily an intelligence explosion.
In the extreme case, insiders may not experience anything and
outsiders may witness only noise.
Consider an agent interacting with an environment. If both are
sped up at the same rate, their behavioral interaction will not
change except for speed. If there is no external clock
measuring absolute time, there is no net effect at all.
If only the environment is sped up, this has the same effect as
slowing down the agent. This does not necessarily make the
agent dumber. He will receive more information per action, and
can make more informed decisions, provided he is left with
enough comp to process the information. Imagine being inhibited
by very slowly responding colleagues. If you could speed them
up, this would improve your own throughput, and subjectively
this is the same as slowing yourself down. But (how much) can
this improve the agent’s intelligence? In the extreme case,
assume the agent has instant access to all information, not
much unlike we already have by means of the Internet but much
faster. Both usually increase the quality of decisions, which
might be viewed as an increase in intelligence. But intuitively
there should be a limit on how much information a comp-limited
agent can usefully process or even search through.
Consider now the converse and speed up the agent (or
equivalently slow down the environment). From the agent’s view,
he becomes deprived of information, but has now increased
capacity to process and think about his observations. He
becomes more reflective and cognitive, a key aspect of
intelligence, and this should lead to better decisions. But
also in this case, although it is much less clear, there might
be a limit to how much can be done with a limited amount of
information.
The speed-up/slow-down effects might be summarized as follows:
* Performance per unit real time:
* Speed of agent positively correlates with cognition and intelligence of decisions
* Speed of environment positively correlates with informed decisions
* Performance per subjective unit of agent time from agent’s perspective:
* slow down environment = increases cognition and intelligence but decisions become less informed
* speed up environment = more informed but less reasoned decisions
* Performance per environment time from environment perspective:
* speed up agent = more intelligent decisions
* slow down agent = less intelligent decisions
I have argued that more comp, i.e. speeding up hardware, does
not necessarily correspond to more intelligence. But then the
same could be said of software speedups, i.e. more efficient
ways of computing the same function. If two agent algorithms
have the same I/O behavior, just one is faster than the other,
is the faster one more intelligent?
An interesting related question is whether progress in AI has
been mainly due to improved hardware or improved software. If
we believe in the former, and we accept that speed is
orthogonal to intelligence, and we believe that humans are
“truly” intelligent (a lot of ifs), then building AGIs may
still be far distant.
As detailed in Section [7](#S7 "7 Is Intelligence Unlimited or Bounded ‣ Can Intelligence Explode?"), if intelligence is
upper-bounded (like playing optimal minimax chess), then once
this bound is reached, intelligences can only differ in speed
and in the information available to process. In this case, and if humans
are not too far below this upper bound (which seems unlikely),
outsiders could, as long as their technology permits, record
and play a virtual world in slow motion and be able to grasp
what is going on inside.
In this sense, a singularity may be more interesting for
outsiders than for insiders. On the other hand, insiders
actively “live” potential societal changes, while outsiders
only passively observe them.
Of course, more comp only leads to more intelligent decisions
if the decision algorithm puts it to good use. Many algorithms
in AI are so-called anytime algorithms that indeed produce
better results if given more comp. In the limit of infinite
comp, in simple and well-defined settings (usually search and
planning problems), some algorithms can produce optimal
results, but for more realistic complex situations (usually
learning problems), they saturate and remain sub-optimal
[[RN10](#bib.bibx28)]. But there is one algorithm, namely AIXI
described in Section [7](#S7 "7 Is Intelligence Unlimited or Bounded ‣ Can Intelligence Explode?"), that is able to make
optimal decisions in arbitrary situations given infinite comp.
Together this shows that it is non-trivial to draw a clear
boundary between speed and intelligence.
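To illustrate the anytime behavior referred to above (a generic sketch, not a specific algorithm from [[RN10](#bib.bibx28)]): a Monte Carlo estimator can be interrupted at any comp budget and returns a better answer the longer it runs, but with diminishing returns, a concrete case where more speed does not translate proportionally into better decisions.

```python
import random

def anytime_pi(budget: int, seed: int = 0) -> float:
    """Anytime Monte Carlo estimate of pi: interruptible at any comp
    budget; more comp yields a better but saturating answer."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(budget):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / budget

# The error shrinks only like 1/sqrt(budget): each doubling of comp
# buys less improvement, so raw speed saturates as a proxy for quality.
for budget in (100, 10_000, 1_000_000):
    print(budget, anytime_pi(budget))
```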
6 What is Intelligence
-----------------------
There have been numerous attempts to define intelligence; see
e.g. [[LH07a](#bib.bibx22)] for a collection of 70+ definitions
from the philosophy, psychology, and AI literature, by
individual researchers as well as collective attempts.
If/since intelligence is not (just) speed, what is it then?
What will super-intelligences actually do?
Historically-biologically, higher intelligence, via some
correlated practical cognitive capacity, increased the chance
of survival and number of offspring of an individual and the
success of a species. At least for primates leading to homo
sapiens this was the case until recently. Within the human
race, intelligence is now positively correlated with power
and/or economic success [[Gea07](#bib.bibx11)], and actually negatively
correlated with the number of children [[Kan07](#bib.bibx19)]. Genetic evolution
has been largely replaced by memetic evolution
[[Daw76](#bib.bibx8)], the replication, variation, selection, and
spreading of ideas causing cultural evolution.
What activities could be regarded as or are positively
correlated with intelligence?
* Self-preservation? Self-replication? Spreading?
* Creating faster/better/higher intelligences?
* Learning as much as possible? Understanding the universe?
* Maximizing power over men and/or organizations?
* Transformation of matter (into computronium?)?
* Maximum self-sufficiency?
* The search for the meaning of life?
Has intelligence more to do with thinking or is thinking only a
tool for acting smartly? Is intelligence something
anthropocentric or does it exist objectively?
What are the relations between other predicates of human
“spirit” like consciousness, emotions, and religious faith to
intelligence? Are they part of it or separate characteristics
and how are they interlinked?
One might equate intelligence with rationality, but what is
rationality? Reasoning, which requires internal logical
consistency, is a good start for a characterization but is
alone not sufficient as a definition. Indiscriminately
producing one true statement after the other without
prioritization or ever doing anything with them is not too
intelligent (current automated theorem provers can already do
this).
It seems hard if not impossible to define rationality without
the notion of a goal. If rationality is reasoning towards a
goal, then there is no intelligence without goals. This idea
dates back at least to Aristotle, if not further; see
[[LH07b](#bib.bibx23)] for details. But what are the goals?
Slightly more flexible notions are that of expected utility
maximization and cumulative life-time reward maximization
[[RN10](#bib.bibx28)]. But who provides the rewards, and how? For
animals, one might try to equate the positive and negative
rewards with pleasure and pain, and indeed one can explain a
lot of behavior as attempts to maximize rewards/pleasure.
Humans seem to exhibit astonishing flexibility in choosing
their goals and passions, especially during childhood.
Goal-oriented behavior often appears to be at odds with
long-term pleasure maximization. Still, the evolved biological
goals and desires to survive, procreate, parent, spread,
dominate, etc. are seldom disowned.
But who sets the goal for super-intelligences and how? When
building AIs or tinkering with our virtual selves, we could try
out a lot of different goals, e.g. selected from the list
above or others. But ultimately we will lose control, and the
AGIs themselves will build further AGIs (if they were motivated
to do so) and this will gain its own dynamic. Some aspects of
this might be independent of the initial goal structure and
predictable. Probably this initial vorld is a society of
cooperating and competing agents. There will be competition
over limited (computational) resources, and those virtuals who
have the goal to acquire them will naturally be more successful
in this endeavor compared to those with different goals.
Of course, improving the efficiency of resource use is
important too, e.g. optimizing own algorithms, but still,
having more resources is advantageous.
The successful virtuals will spread (in various ways), the
others perish, and soon their society will consist mainly of
virtuals whose goal is to compete over resources, where
hostility will only be limited if this is in the virtuals’ best
interest. For instance, current society has replaced war mostly
by economic competition, since modern weaponry makes most wars
a loss for both sides, while economic competition in most cases
benefits the better.
Whatever amount of resources is available, it will (quickly)
be used up and become scarce. So in any world inhabited by
multiple individuals, evolutionary and/or economic-like forces
will “breed” virtuals with the goal to acquire as much (comp)
resources as possible. This world will likely neither be heaven
nor hell for the virtuals. They will “like” to fight over
resources, and the winners will “enjoy” it, while the losers
will “hate” it. In such evolutionary vorlds, the ability to
survive and replicate is a key trait of intelligence. On the
other hand, this is not a sufficient characterization, since
e.g. bacteria are quite successful in this endeavor too, but
not very intelligent.
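The selection argument can be caricatured with textbook replicator dynamics (my illustration; the fitness numbers are arbitrary assumptions): whichever strategy converts comp resources into copies at a higher rate takes over the population, regardless of any other merits.

```python
# Discrete replicator dynamics for two strategies competing over comp
# resources. The fitness values 1.2 vs 1.0 are arbitrary illustrative
# assumptions: 'acquisitive' agents turn resources into copies faster.
def replicator_step(x: float, f_acq: float = 1.2, f_other: float = 1.0) -> float:
    """One generation: new population share of acquisitive agents."""
    mean_fitness = x * f_acq + (1.0 - x) * f_other
    return x * f_acq / mean_fitness

x = 0.01  # acquisitive agents start as a 1% minority
for generation in range(60):
    x = replicator_step(x)
print(round(x, 3))  # the resource-acquiring strategy takes over (x -> 1)
```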
Finally, let us consider some alternative (real or virtual)
worlds.
In the human world, local conflicts and global war are
increasingly replaced by economic competition, which might
itself be replaced by even more constructive global
collaboration, as long as violators can quickly and effectively
(and non-violently?) be eliminated.
It is possible that this requires a powerful single (virtual)
world government, to give up individual privacy, and to
severely limit individual freedom (cf. ant hills or bee hives).
An alternative societal setup that can produce only conforming
individuals might be possible only by severely limiting
individuals' creativity (cf. flock of sheep or school of fish).
Such well-regulated societies might better be viewed as a
single organism or collective mind. Or maybe the vorld is
inhabited from the outset by a single individual.
Both vorlds could look quite different and more peaceful than
the traditional ones created by evolution.
Intelligence would have to be defined quite differently in such
vorlds.
Many science fiction authors have conceived and extensively
written about a plethora of other future, robot, virtual, and
alien societies in the last century.
In the following I will only consider vorlds shaped by
evolutionary pressures as described above.
7 Is Intelligence Unlimited or Bounded
---------------------------------------
Another important aspect of intelligence is how flexible or
adaptive an individual is. Deep Blue might be the best chess
player on Earth, but it is unable to do anything else. By contrast,
higher animals and humans have remarkably broad
capacities and can perform well in a wide range of
environments.
In [[LH07b](#bib.bibx23)] intelligence has been defined as the
ability to achieve goals in a wide range of environments. It
has been argued that this is a very suitable characterization,
implicitly capturing most, if not all traits of rational
intelligence, such as reasoning, creativity, generalization,
pattern recognition, problem solving, memorization, planning,
learning, self-preservation, and many others. Furthermore, this
definition has been rigorously formalized in mathematical
terms. It is non-anthropocentric, wide-range, general,
unbiased, fundamental, objective, complete, and universal. It
is the most comprehensive formal definition of intelligence so
far. It assigns a real number Υ between zero and one
to every agent, namely the to-be-expected performance averaged
over all environments/problems the agent potentially has to
deal with, with an Ockham’s razor inspired prior weight for
each environment. Furthermore there is a maximally intelligent
agent, called AIXI, w.r.t. this measure. The precise formal
definitions and details can be found in [[LH07b](#bib.bibx23)],
but do not matter for our purpose. This paper also contains a
comprehensive justification and defense of this approach.
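For reference, the intelligence measure of [[LH07b](#bib.bibx23)] has, up to the normalization that maps it into [0, 1], the form

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where V^π_μ is the expected cumulative reward of agent π in environment μ, E is the class of computable environments, and K is Kolmogorov complexity, so that simpler environments carry more weight.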
The theory suggests that there is a maximally intelligent
agent, or in other words, that intelligence is upper bounded
(and is actually lower bounded too). At face value, this would
make an intelligence explosion impossible.
To motivate this possibility, consider some simple examples.
Assume the vorld consists only of tic-tac-toe games, and the
goal is to win them or, second best, at least not to lose. The notion of
intelligence in this simple vorld is beyond dispute. Clearly
there is an optimal strategy (actually many) and it is
impossible to behave more intelligently than this strategy. It
is even easy to artificially evolve or learn these strategies
from repeated (self)play [[Hoc03](#bib.bibx15), [VNH+11](#bib.bibx36)].
So in this vorld there clearly will be no intelligence
explosion or intelligence singularity, even if there were a
speed explosion.
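The claim that tic-tac-toe intelligence is bounded, and the bound attainable, can be checked directly. The following self-contained sketch (an illustration, not code from the cited works) computes the minimax value of the game:

```python
from functools import lru_cache

# Minimax value of tic-tac-toe from X's perspective:
# +1 = X wins, -1 = O wins, 0 = draw. A board is a string of
# nine characters from 'XO.'.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board: str):
    """Return 'X' or 'O' if that player has completed a line, else None."""
    for i, j, k in LINES:
        if board[i] != '.' and board[i] == board[j] == board[k]:
            return board[i]
    return None

@lru_cache(maxsize=None)
def value(board: str, player: str) -> int:
    """Game-theoretic value with `player` to move, assuming optimal play."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0  # full board, no line: draw
    nxt = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i in range(9) if board[i] == '.']
    return max(vals) if player == 'X' else min(vals)

# Optimal play from the empty board is a draw: intelligence in this
# vorld is upper bounded, and the bound is attainable.
print(value('.' * 9, 'X'))  # -> 0
```

The program confirms that optimal play from the empty board yields a draw; no agent, however much comp it commands, can do better against an optimal opponent.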
We get a slightly different situation when we replace
tic-tac-toe by chess. There is also an optimal way of playing
chess, namely minimax tree search to the end of the game, but
unlike in tic-tac-toe this strategy is computationally
infeasible in our universe. So in theory (i.e. given enough
comp) intelligence is upper-bounded in a chess vorld, while in
practice we can get only ever closer but never reach the bound.
(Actually there might be enough matter in the universe to build
an optimal chess player, but likely not an optimal Go player.
In any case it is easy to design a game that is beyond the
capacity of our accessible universe, even if completely
converted into computronium).
Still, this causes two potential obstacles for an intelligence
explosion. First, we are only talking about the speed of
algorithms, which I explained before not to equate with
intelligence. Second, intelligence is upper bounded by the
theoretical optimal chess strategy, which makes an intelligence
explosion difficult but not necessarily impossible: Assume the
optimal program has intelligence I=1 and at real time t<1
we have access to or evolved a chess program with intelligence
t. This approaches 1 in finite time, but doesn’t “explode”.
But if we use the monotone transformation 1/(1−I) to measure
intelligence, the chess program at time t has transformed
intelligence 1/(1−t) which tends to infinity for t→1.
While this is a mathematical singularity, it is likely not
accompanied by a real intelligence explosion. The original
scale seems more plausible in the sense that t+0.001 is just
a tiny bit more intelligent than t, and 1 is just 1000
times more intelligent than 0.001 but not infinitely more.
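The two scales are easy to compare numerically (my illustration of the argument above): with intelligence approaching the bound as I = t, the original scale improves by ever smaller amounts while the transformed scale 1/(1−I) diverges.

```python
def transformed(i: float) -> float:
    """Monotone rescaling 1/(1 - I) of a bounded intelligence I < 1."""
    return 1.0 / (1.0 - i)

# Intelligence I = t approaches the bound 1 linearly in time t, while
# the rescaled value diverges: a mathematical, not a real, singularity.
for t in (0.9, 0.99, 0.999):
    print(f"I = {t}: transformed intelligence = {transformed(t):.0f}")
```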
Although the vorld of chess is quite rich, the real world is
vastly and possibly unlimitedly richer. In such a more open
world, the intelligence scale may be genuinely unbounded, but
not necessarily as we will see.
It is not easy though to make these arguments rigorous.
Let us return to the real world and the intelligence measure
Υ, upper bounded by Υ_max = Υ(AIXI).
Since AIXI is incomputable, we can never reach intelligence
Υ_max in a computational universe, but similarly to
the chess example we can get closer and closer. The numerical
advance is bounded, and so is possibly the real intelligence
increase, hence no intelligence explosion. But it might also be
the case that in a highly sophisticated AIXI-close society, one
agent beating another by a tiny epsilon on the Υ-scale
makes all the difference for survival and/or power and/or other
measurable impact like transforming the universe. In many sport
contests split seconds determine a win, and the winner takes it
all — an admittedly weak analogy.
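The winner-takes-all intuition can be illustrated with a standard logistic (Elo-style) win model. This is my illustration, not part of the text, and the edge value is an arbitrary assumption: a tiny fixed per-contest advantage compounds into near-certain dominance over sufficiently many contests.

```python
import math

def win_prob(edge: float) -> float:
    """Logistic (Elo-style) single-contest win probability for the
    agent with a small skill edge (the model is an assumption)."""
    return 1.0 / (1.0 + math.exp(-edge))

def win_majority(edge: float, n: int) -> float:
    """Exact probability of winning the majority of n contests
    (n odd), from the binomial distribution."""
    p = win_prob(edge)
    need = n // 2 + 1
    return sum(math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)
               for k in range(need, n + 1))

# An edge worth about 2.5 percentage points per contest becomes a
# decisive advantage once enough contests are played.
for n in (1, 101, 1001):
    print(n, round(win_majority(0.1, n), 3))
```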
An interesting question is where humans range on the
Υ-scale: is it so low with so much room above that
outsiders would effectively experience an intelligence
explosion (as far as recognizable), even if intelligence is
ultimately upper bounded? Or are we already quite close to the
upper bound, so that even AGIs with enormous comp (but
comparable I/O limitations) would just be more intelligent but
not incomprehensibly so. We tend to believe that we are quite
far from Υ, but is this really so? For instance, what
has once been argued to be irrational (i.e. not very
intelligent) behavior in the past, can often be regarded as
rational w.r.t. the appropriate goal. Maybe we are already
near-optimal goal achievers. I doubt this, but cannot rule it
out either.
Humans are not faster but more intelligent than dogs, and dogs
in turn are more intelligent than worms and not just faster,
even if we cannot pinpoint exactly why we are more intelligent:
is it our capacity to produce technology or to transform our
environment on a large scale or consciousness or domination
over all other species? There are no good arguments why humans
should be close to the top of the possible biological
intelligence scale, and even less so on a vorld scale. By
extrapolation it is plausible that a vorld of much more
intelligent trans-humans or machines is possible. They will
likely be able to perform better in an even wider range of
environments on an even wider range of problems than humans.
Whether this results in anything that deserves the name
intelligence explosion is unclear.
8 Singularitarian Intelligences
--------------------------------
Consider a vorld inhabited by competing agents, initialized
with human mind-uploads or non-human AGIs, and increasing comp
per virtual.
Sections [6](#S6 "6 What is Intelligence ‣ Can Intelligence Explode?") and [7](#S7 "7 Is Intelligence Unlimited or Bounded ‣ Can Intelligence Explode?") then indicate that
evolutionary pressure increases the individuals’ intelligence
and the vorld should converge to a society of AIXIs.
Alternatively, if we postulate an intelligence singularity and
accept that AIXI is the most intelligent agent, we arrive at
the same conclusion. More precisely, the society consists of
agents that aim at being AIXIs, constrained only by comp.
If this is so, the intelligence singularity might be identified
with a society of AIXIs, so studying AIXI can tell us something
about what a singularity might look like. Since AIXI is
completely and formally defined, properties of this society can
be studied rigorously mathematically. Here are some questions
that could be asked and answered:
* Will a pure reward maximizer such as AIXI listen to and trust a teacher? Likely yes.
* Will it take drugs (i.e. hack the reward system)? Likely no, since cumulative long-term reward would be small (death).
* Will AIXI replicate itself or procreate? Likely yes, if AIXI believes that clones or descendants are useful for its own goals.
* Will AIXI commit suicide? Likely yes (no), if AIXI is raised to believe in going to heaven (hell), i.e. maximal (minimal) reward forever.
* Will sub-AIXIs self-improve? Likely yes, since this helps to increase reward.
* Will AIXI manipulate or threaten teachers to give more reward? Likely yes.
* Are pure reward maximizers like AIXI egoists, psychopaths, and/or killers, or will they be friendly (altruism as extended ego(t)ism)?
* Curiosity killed the cat and maybe AIXI, or is extra reward for curiosity necessary?
* Immortality can cause laziness. Will AIXI be lazy?
* Can self-preservation be learned or need (parts of) it be innate?
* How will AIXIs interact/socialize in general?
For some of these questions, partial and informal discussions
and plausible answers are available, and a couple have been
rigorously defined, studied and answered, but most of them are open to
date
[[Hut05](#bib.bibx16), [Sch07](#bib.bibx31), [OR11](#bib.bibx26), [RO11](#bib.bibx29), [Hut12](#bib.bibx18)].
But the AIXI theory has the potential to arrive at definite answers
to various questions regarding the social behavior of
super-intelligences close to or at an intelligence singularity.
9 Diversity Explosion and the Value of a Virtual Life
------------------------------------------------------
As indicated, some things will be harder or impossible in a
virtual world (e.g. to discover new physics) but many things
should be easier. Unless a global copy protection mechanism is
deliberately installed (like e.g. in Second Life) or copyright
laws prevent it, copying virtual structures should be as cheap
and effortless as it is for software and data today. The only
cost is developing the structures in the first place, and the
memory to store and the comp to run them. With this comes the
possibility of cheap manipulation and experimentation.
It becomes particularly interesting when virtual life itself
gets copied and/or modified. Many science fiction stories cover
this subject, so I will be brief and selective here.
One consequence should be a “virtuan” explosion with life
becoming much more diverse. Andy Clarke [[Cla09](#bib.bibx6)] writes
(without particularly referring to virtuals) that “The humans
of the next century will be vastly more heterogenous, more
varied along physical and cognitive dimensions, than those of
the past as we deliberately engineer a new Cambrian explosion
of body and mind.” In addition, virtual lives could be
simulated in different speeds, with speeders experiencing
slower societal progress than laggards. Designed intelligences
will fill economic niches. Our current society already relies
on specialists with many years of training, so it is natural to
go the next step to ease this process with “designer babies”.
Another consequence should be that life becomes less valuable.
Our society values life, since life is a valuable commodity and
expensive/laborious to replace/produce/raise. We value our own
life, since evolution selects only organisms that value their
life. Our human moral code mainly mimics this, with cultural
differences and some excesses (e.g. suicide attacks on the one
side and banning stem cell research on the other).
If life becomes ‘cheap’, motivation to value it will decline.
Analogies are abundant: Cheap machines decreased the value of
physical labor. Some expert knowledge was replaced by
hand-written documents, then printed books, and finally
electronic files, where each transition reduced the value of
the same information. Digital computers made human computers
obsolete. In games, we value our own life and that of our
opponents less than real life, not only because a game is a
crude approximation to real life, but also because games can be
reset and one can be resurrected. Governments will stop paying
my salary when they can get the same research output from a
digital version of me, essentially for free.
And why not participate in a dangerous fun activity if, in the
worst case, I merely have to activate a backup copy of myself
from yesterday, which will just have missed this one (anyway
not too well-going) day. The belief in immortality can alter
behavior drastically.
Of course there will be countless other implications:
ethical, political, economical, medical, cultural,
humanitarian, religious, in art, warfare, etc.
I have singled out the value of life, since I think it will
significantly influence other aspects. Much of our society is
driven by the fact that we highly value (human/individual)
life. If virtual life is/becomes cheap, these drives will
ultimately vanish and be replaced by other goals.
If AIs can be easily created, the value of an intelligent
individual will be much lower than the value of a human life
today. So it may be ethically acceptable to freeze, duplicate,
slow-down, modify (brain experiments), or even kill (oneself or
other) AIs at will, if they are abundant and/or backups are
available, just what we are used to doing with software. So
laws preventing experimentation with intelligences for moral
reasons may not emerge. With so little value assigned to an
individual life, maybe it becomes disposable.
10 Personal Remarks
--------------------
I have deliberately avoided discussing consciousness for
several reasons: David Chalmers is the consciousness
expert and not me, he has extensively written about it in
general and also in the context of the singularity
[[Cha10](#bib.bibx5)], and I essentially agree with his
assessments. Personally I believe in the functionalist theory
of identity and am confident that (slow and fast) uploading of
a human mind preserves identity and consciousness, and indeed
that any sufficiently high intelligence, whether
real/biological/physical or virtual/silicon/software is
conscious, and that consciousness survives changes of
substrate: teleportation, duplication, virtualization/scanning,
etc. along the lines of [[Cha10](#bib.bibx5)].
I have also only considered (arguably) plausible scenarios, but
not whether these or other futures are desirable. First, there
is the problem of how much influence/choice/freedom we actually
have in shaping our future in general and the singularity in
particular. Can evolutionary forces be beaten?
Second, what is desirable is necessarily subjective. Are there
any universal values or qualities we want to see or that should
survive? What do I mean by we? All humans? Or the dominant
species or government at the time the question is asked? Could
it be diversity? Or friendly AI [[Yud08](#bib.bibx39)]? Could the
long-term survival of at least one conscious species that
appreciates its surrounding universe be a universal value? A
discussion of these questions is clearly beyond the scope of
this article.
11 Conclusions
---------------
Based on the deliberations in this paper, here are my
predictions concerning a potential technological singularity,
although admittedly they have a speculative character.
* This century may witness a technological explosion of a
degree deserving the name singularity.
* The default scenario is a society of interacting intelligent
agents in a virtual world, simulated on computers with
hyperbolically increasing computational resources.
* This is inevitably accompanied by a speed explosion when
measured in physical time units, but not necessarily by an
intelligence explosion.
* Participants will not necessarily experience this explosion,
since/if they are themselves accelerated at the same pace,
but they should enjoy ‘progress’ at a ‘normal’ subjective pace.
* For non-accelerated non-participating conventional humans,
after some short period, their limited minds will not be able
to perceive the explosion as an intelligence explosion.
* This raises the question in which sense an intelligence
explosion has happened. (If a tree falls in a forest and no one
is around to hear it, does it make a sound?)
* One way and maybe the only way to make progress in this
question is to clarify what intelligence actually is.
* The most suitable notion of intelligence for this purpose seems
to be that of universal intelligence, which in principle allows one
to formalize and theoretically answer a wide range of questions
about super-intelligences. Accepting this notion has
in particular the following implications:
* There is a maximally intelligent agent, which appears to imply
that intelligence is fundamentally upper bounded, but this is
not necessarily so.
* If the virtual world is inhabited by interacting free
agents (rather than a ‘monistic’ vorld inhabited by a
single individual or a tightly controlled society),
evolutionary pressures should breed agents of increasing
intelligence that compete about computational resources.
* The end-point of this intelligence evolution/acceleration
(whether it deserves the name singularity or not)
could be a society of these maximally intelligent
individuals.
* Some aspects of this singularitarian society might be
theoretically studied with current scientific tools.
* Way before the singularity, even when setting up a virtual
society in our image, there are likely some immediate
differences, for instance that the value of an individual life
suddenly drops, with drastic consequences.
Acknowledgements.
Thanks to Wolfgang Schwarz and Reinhard Hutter
for feedback on earlier drafts.
|
1dc04627-be78-455f-9f43-12b08071b367
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Trolley Problem
If you go to Harvard Law School, one of the first classes you'll take is called "Justice: What's The Right Thing To Do?" It starts with the trolley problem.
> There's a trolley hurling down the tracks with no breaks. It's on its way to hitting and killing 5 people. You can pull a lever but if you do the trolley will kill 1 person instead. Do you pull the lever?
Students are asked "Should you pull the lever?"
* Some say "Yes because utilitarianism should maximize human life."
* Some say "No because the categorical imperative forbids murder."
They're both wrong. The question is underspecified. The right answer is "It depends."
> You're in Germany in 1943. A train full of political prisoners is hurtling toward its doom. You can pull a lever to save their lives but if you do the train will hit Heinrich Himmler instead. Do you pull the lever?
Heck yeah!
> You're a fighter in the French Resistance in 1943. A train full of SS officers is hurtling toward its doom. You can pull a lever to save their lives but if you do it'll hit your friend who is planting a bomb on the railroad tracks. Do you pull the lever?
Of course not.
Those hypothetical situations cheat the problem. The purpose of the trolley problem is to compare one positive value against another positive value. Let's do that.
> You're a general in the Russian army in 1943. A train full of Russian soldiers is hurtling toward its doom. You can pull a lever to save their lives but if you do it'll hit a Russian soldier standing on the tracks. All the soldiers are of the same rank. Do you pull the lever?
Yes, obviously. Your job is to maximize the combat effectiveness of your fighting force.
> You're a doctor in 2021. Do you murder a patient and harvest his organs to save the lives of 5 other people?
No, obviously. That would be monstrous. Even if it weren't, the negative externalities would overwhelm your proximate benefit.
If the answer to an ethical question is always the same, regardless of context, t
|
0fe6f020-88d3-4e84-9488-79a64e058714
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Carbohydrate Hypothesis of Obesity: A Critical Examination
Here. A number of people here are fans of Taubes' work, and so I thought they would be interested in a well-referenced criticism. Hat tip to Landsknecht.
|
b6253288-1ddb-497c-91cf-22cc6f57aeb0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A courageous story of brain preservation, "Dying Young" by Amy Harmon, The New York Times
The recent major media article by Amy Harmon brings to the public eye the potential of human cryopreservation and chemopreservation techniques to preserve the memories and personal identity of individuals. We at the Brain Preservation Foundation have considered many common counterarguments to this endeavor (see below, and our FAQ and Overcoming Objections backgrounders) and yet we still think it is a worthwhile idea to pursue. Please let us know your thoughts as well.
Yesterday, journalist Amy Harmon published an article in the New York Times, “A Dying Young Woman’s Hope in Cryonics and a Future.” First of all, it is a tragic story about a woman, Kim Suozzi, who had an incredibly unfortunate diagnosis of cancer at a young age and was forced to make some very difficult decisions in a short time frame. The story of how she faced those decisions with great foresight and resolve, with the help of her partner Josh, her family, as well as the broader internet community, is deeply moving. We want to extend our condolences to everyone in Kim’s life for their terrible loss. We also want to stand in hope and solidarity with Josh and Kim that she may return one day to those she loved.
When it comes to the specifics of Kim’s life, we at the Brain Preservation Foundation (BPF) don’t think it is our place to discuss individual brain preservation cases. Our focus, as you can find in our mission statement, is to try to advance scientific research on the viability of preserving individual memories and identity. This research still has many current unknowns, as the NYT article points out well, and there will be a long journey of scientific investigation ahead. Yet an increasing number of people think these unknowns deserve answers. We also want to help society have conversations about the social issues of choosing brain preservation in a more open and tolerant manner.
Because this story has stimulated a lot of public discussion already, we want to say a few words and invite a convers
|
b0217bb8-beef-4b5c-ae18-2f4db4b46cf0
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reverse-engineering using interpretability
Building a model for which you're confident your interpretability is correct, by reverse-engineering each part of the model to work how your interpretability says it should work. (Based on discussion in alignment reading group, ideas from William, Adam, dmz, Evan, Leo, maybe others)
The basic idea
1. Train an untrusted network
2. Do interpretability on it, getting some human-understandable schema that is intended to capture what (some of) the model is doing
3. Identify anything that should be changed about what the model is (supposedly) doing, and make a new human-understandable schema of the desired behavior
4. Train a model to do what this new representation says it should be doing, by having a human implement each part of the schema then doing imitation + distillation
I think this can be thought of in a few different ways:
1. One way you might implement Microscope AI in practice
2. A way to do imitative amplification where the decomposition is generated by gradient descent instead of by humans, combined with some aspects of imitative generalization
The main advantages here are:
* Compared to just using interpretability to audit your trained model, this is safer, because you can be more confident your interpretation matches what the final model is doing (because you trained the final model to conform to your interpretation). If you have mistakes in your interpretability, you should get performance issues rather than safety issues.
* Compared to normal IDA or ‘trying to build a schema from scratch’, this is easier because you use the power of the initial model to decide how to decompose tasks, or how to solve individual tasks, and you can use patterns the model learnt from large volumes of data without needing to have been able to generate that knowledge yourself
The main disadvantages are:
* To be competitive this requires really good interpretability - basically understanding everything about what the model is doing. This scheme might actually re
|
5126c2cf-5c17-4fab-9327-944d3ef5012e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
New Water Quality x Obesity Dataset Available
Tl;dr: I created a dataset of US counties’ water contamination and obesity levels. So far I have failed to find anything really interesting with it, but maybe you will. If you are interested, you can download the dataset here. Be warned: every spreadsheet program will choke on it; you definitely need to use statistical programming.
Photocredit: DALL-E and a lot of coaxing
Many of you have read Slime Mold Time Mold’s series on the hypothesis that environmental contaminants are driving weight gain. I haven’t done a deep dive on their work, but their lit review is certainly suggestive.
SMTM did some original analysis by looking at obesity levels by state, but this is pretty hopeless. They’re using average altitude by state as a proxy for water purity for the entire state, and then correlating that with the state’s % resident obesity. Water contamination does seem negatively correlated with a water system’s altitude, the system’s altitude is correlated with an end-user’s altitude, and that end user’s altitude is correlated with their state’s average altitude… but I think that’s too many steps removed, with too much noise at each step. So the aggregation by state is basically meaningless, except for showing us Colorado is weird.
So I dug up a better data set, which had contamination levels for almost every water system in the country, accessible by zip code, and another one that had obesity prevalence by county. I combined these into a single spreadsheet and did some very basic statistical analysis on them to look for correlations.
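The join-and-correlate step is simple enough to sketch in plain Python. Everything below is invented for illustration (toy county FIPS codes and values, a hand-rolled Pearson helper); the real dataset is far larger and messier.

```python
from math import sqrt

# Toy stand-ins for the two source datasets: a water contaminant level
# (say, nitrate in mg/L) and adult obesity prevalence (%), both keyed by
# county FIPS code. All values here are invented for illustration.
water = {"01001": 1.2, "01003": 0.4, "01005": 3.1, "01007": 2.0, "01009": 0.9}
obesity = {"01001": 33.5, "01003": 29.1, "01005": 38.2, "01007": 35.0, "01009": 31.4}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var_x = sum((a - mx) ** 2 for a in xs)
    var_y = sum((b - my) ** 2 for b in ys)
    return cov / sqrt(var_x * var_y)

# Inner-join on county code, keeping only counties present in both datasets.
common = sorted(set(water) & set(obesity))
x = [water[c] for c in common]
y = [obesity[c] for c in common]

r = pearson(x, y)
print(f"n={len(common)} counties, r={r:.3f}")
```

A real analysis would also want per-contaminant sample sizes and some multiple-comparisons correction, since the dataset covers many contaminants at once.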
Some caveats before we start:
* The dataset looks reasonable to me, but I haven’t examined it exhaustively and don’t know where the holes are.
* Slime Mold Time Mold’s top contender for an environmental contaminant is lithium. While technically present in the database, lithium had five entries, so I ignored it. I haven’t investigated, but my guess is no one tests for lithium.
* It’s rare, but some zip codes have multiple water suppliers, and the spreadsheet
|
f6d7a33a-114c-4ffe-9947-146f1e24de63
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
when in doubt, kill everyone
when in doubt, kill everyone
----------------------------
one thing that is way worse than [mere existential risks](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), possibly [by a factor of infinity](ai-alignment-wolfram-physics.html), is [suffering risks, or S-risks](https://en.wikipedia.org/wiki/Suffering_risks).
i could see (though going by what i could see [is not a reliable apparatus](overcoming-narratives.html)) someone make an AI and, while trying to align it to human values, accidentally misalign it to something that happens to tile the universe with suffering humans. this would be an instance of S-risk.
whereas, an AI that merely wants to accomplish a relatively simple goal will probly just tile the universe with something simple that doesn't contain suffering persons; and given that [we're all probly quantum immortal](quantum-suicide.html), we just "escape" to the timeline where that didn't happen.
considering this, a 99% chance of X-risk and a 1% chance of utopia is preferable to a 1% chance of S-risk and a 99% chance of utopia. so, if we figure out superintelligence before we do alignment (which [seems pretty likely at this point](were-all-doomed.html); see also "Zero percent" on [this page](https://intelligence.org/2018/10/03/rocket-alignment/)), we might want to keep a ready-to-fire paperclip AI on standby and boot it up in case we start seeing S-risks on the horizon, just to terminate dangerous timelines before they evolve into permanent exponential hell.
in fact, just to be sure, we might want to give many people the trigger, to press as soon as someone even *suggests* doing any kind of AI work that is not related to figuring out goddamn alignment.
|
540e8829-7167-4089-a4c0-d4773776f52b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Use Search Engines Early and Often
The Internet contains vast amounts of useful content. Unfortunately, it also contains vast amounts of garbage, superstimulus hazards, and false, meaningless, or outright harmful information. One skill that is hence quite useful in the modern day is using search engines correctly, allowing you to separate the wheat from the chaff. When doing so, one can often uncover preexisting work that solves your problem for you, the answers to relevant factual questions, and so on. It is rare to find a situation where search engines are outright useless-- at the very least they tend to point you in the direction of useful information.
Further, the time cost of setting up and refining a search is extremely low, meaning that most of the time "just Google it" should in fact be your default response to a situation where you don't have very much information.[1] Overall, I consider one's ability to use search engines-- and, just as importantly, one's ability to recognize what types of situations can benefit from using them-- a basic but fairly significant instrumental rationality skill.
Much of the above sounds extremely obvious, and in point of fact it should be-- but the fact remains that people don't use search engines anywhere near as often as they seemingly should. I've frequently found myself in situations where someone in the same room as me asks me a trivially searchable factual question while we are both using computers. Worse still, I've been in situations where people do the same over IRC! The existence of lmgtfy indicates that others have noticed this issue before, and yet it remains a problem.
So, how can we do better?
One easy trick that I've found very helpful is to use Goodsearch instead of Google. Goodsearch is a service that automatically donates a cent to a charity of your choice whenever you search.[2] Further, it can be installed into your search toolbar in Firefox, making the activation cost of using Goodsearch rather than Google essentially zero if, like me,
|
612cb3cb-de5e-4f82-bbd6-a26a36e628c8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Open thread for December 9 - 16, 2013
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
|
f61d1528-0d94-4898-b3c3-456a5d512184
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Bayesian statistics as epistemic advantage
Interesting Talking Machines episode quote about Bayesian stats being used at Bletchley and GCHQ (its successor). Seems like they held on to a possibly significant advantage (crypto ppl would be better to comment on this) for years, owing largely to Turing. (The rest of the episode is about AI safety and also interesting.)
Source:
http://www.thetalkingmachines.com/blog/2016/2/26/ai-safety-and-the-legacy-of-bletchley-park
GCHQ in the ’70s, we thought of ourselves as completely Bayesian statisticians. All our data analysis was completely Bayesian, and that was a direct inheritance from Alan Turing. I’m not sure this has ever really been published, but Turing, almost as a sideline during his cryptoanalytic work, reinvented Bayesian statistics for himself. The work against Enigma and other German ciphers was fully Bayesian. …
Bayesian statistics was an extreme minority discipline in the ’70s. In academia, I only really know of two people who were working majorly in the field, Jimmy Savage … in the States and Dennis Lindley in Britain. And they were regarded as fringe figures in the statistics community. It’s extremely different now. The reason is that Bayesian statistics works. So eventually truth wins out. There are many, many problems where Bayesian methods are obviously the right thing to do. But in the ’70s we understood that already in Britain in the classified environment.
Transcription Source:
https://www.johndcook.com/blog/2017/07/25/bayesian-methods-at-bletchley-park/
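Turing's wartime bookkeeping scored each piece of evidence in "bans" (base-10 log-odds), so independent observations simply add. Here's a minimal sketch of that style of sequential updating; the prior and likelihood ratios are invented for illustration, not historical values.

```python
from math import log10

# Sequential Bayesian updating in log-odds ("bans"), the bookkeeping style
# Turing used at Bletchley. All numbers below are illustrative, not historical.
prior_odds = 1 / 25                        # prior odds on some hypothesis H
likelihood_ratios = [3.0, 0.8, 5.0, 2.5]   # P(obs | H) / P(obs | not H), per observation

score_bans = log10(prior_odds)
for lr in likelihood_ratios:
    score_bans += log10(lr)                # independent evidence adds in log-odds

posterior_odds = 10 ** score_bans
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"score = {score_bans:.3f} bans, posterior P(H) = {posterior_prob:.3f}")
```

Working in log-odds is what made the method practical by hand: clerks could tally evidence scores as sums rather than multiplying probabilities.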
|
8e226f5a-a513-413e-bea2-ae966dd8882c
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Views on when AGI comes and on strategy to reduce existential risk
Summary: AGI isn't super likely to come super soon. People should be working on stuff that saves humanity in worlds where AGI comes in 20 or 50 years, in addition to stuff that saves humanity in worlds where AGI comes in the next 10 years.
Thanks to Alexander Gietelink Oldenziel, Abram Demski, Daniel Kokotajlo, Cleo Nardo, Alex Zhu, and Sam Eisenstat for related conversations.
My views on when AGI comes
==========================
AGI
---
By "AGI" I mean the thing that has very large effects on the world (e.g., it kills everyone) via the same sort of route that humanity has large effects on the world. The route is where you figure out how to figure stuff out, and you figure a lot of stuff out using your figure-outers, and then the stuff you figured out says how to make powerful artifacts that move many atoms into very specific arrangements.
This isn't the only thing to worry about. There could be transformative AI that isn't AGI in this sense. E.g. a fairly-narrow AI that just searches configurations of atoms and finds ways to do atomically precise manufacturing would also be an existential threat and a possibility for an existential win.
Conceptual capabilities progress
--------------------------------
The "conceptual AGI" view:
>
> The first way humanity makes AGI is by combining some set of significant ideas about intelligence. Significant ideas are things like (the ideas of) gradient descent, recombination, probability distributions, universal computation, search, world-optimization. Significant ideas are to a significant extent bottlenecked on great natural philosophers doing great natural philosophy about intelligence, with sequential bottlenecks between many insights.
>
The conceptual AGI view doesn't claim that humanity doesn't already have enough ideas to make AGI. I do claim that——though not super strongly.
Timelines
---------
Giving probabilities here doesn't feel great. For one thing, it seems to contribute to information cascades and to shallow coalition-forming. For another, it hides the useful models. For yet another thing: A probability bundles together a bunch of stuff I have models about, with a bunch of stuff I don't have models about. For example, how many people will be doing original AGI-relevant research in 15 years? I have no idea, and it seems like largely a social question. The answer to that question does affect when AGI comes, though, so a probability about when AGI comes would have to depend on that answer.
But ok. Here's some butt-numbers:
* 3%-10% probability of AGI in the next 10-15ish years. This would be lower, but I'm putting a bit of model uncertainty here.
* 40%-45% probability of AGI in the subsequent 45ish years. This is denser than the above because, eyeballing the current state of the art, it seems like we currently lack some ideas we'd need——but I don't know how many insights would be needed, so the remainder could be only a couple decades around the corner. It also seems like people are distracted now.
* Median 2075ish. IDK. This would be further out if an AI winter seemed more likely, but LLMs seem like they should already be able to make a lot of money.
* A long tail. It's long because of stuff like civilizational collapse, and because AGI might be really really hard to make. There's also a sliver of a possibility of coordinating for a long time to not make AGI.
If I were trying to make a model with parts, I might try starting with a mixture of Erlang distributions of different shapes, and then stretching that according to some distribution about the number of people doing original AI research over time.
Again, this is all butt-numbers. I have almost no idea about how much more understanding is needed to make AGI, except that it doesn't seem like we're there yet.
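For what it's worth, the mixture-of-Erlangs idea above can be sketched as a tiny Monte Carlo simulation. Every weight, shape, scale, and the lognormal "stretch" below are invented placeholders, not the numbers given above.

```python
import random

# Monte Carlo sketch of the timeline model gestured at above: a mixture of
# Erlang distributions (gamma with integer shape) over years-until-AGI, each
# sample stretched by a lognormal factor standing in for uncertainty about
# how much original research happens. Every parameter here is made up.
random.seed(0)

components = [
    # (weight, shape k, scale theta); an Erlang(k, theta) has mean k * theta years
    (0.2, 2, 8.0),    # "few insights left" world
    (0.5, 5, 10.0),   # "several sequential insights" world
    (0.3, 9, 10.0),   # "many insights" world
]

def sample_years():
    r, acc = random.random(), 0.0
    for w, k, theta in components:
        acc += w
        if r <= acc:
            break
    years = random.gammavariate(k, theta)       # Erlang = integer-shape gamma
    stretch = random.lognormvariate(0.0, 0.3)   # research-effort uncertainty
    return years * stretch

samples = sorted(sample_years() for _ in range(20000))
median = samples[len(samples) // 2]
frac_15y = sum(s < 15 for s in samples) / len(samples)
print(f"median ~{median:.0f} years out, P(within 15 years) ~{frac_15y:.2f}")
```

The Erlang shapes encode "how many sequential insights remain"; the stretch factor is one crude way to fold in the how-many-researchers question without modeling it.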
Responses to some arguments for AGI soon
========================================
The "inputs" argument
---------------------
At about [1:15 in this interview](https://www.youtube.com/watch?v=_kRg-ZP1vQc&t=4450s), Carl Shulman argues (quoting from the [transcript](https://www.dwarkeshpatel.com/p/carl-shulman)):
>
> We've been scaling [compute expended on ML] up four times as fast as was the case for most of the history of AI. We're running through the orders of magnitude of possible resource inputs you could need for AI much much more quickly than we were for most of the history of AI. That's why this is a period with a very elevated chance of AI per year because we're moving through so much of the space of inputs per year [...].
>
This isn't the complete argument Shulman gives, but on its own it's interesting. On its own, it's valid, but only if we're actually scaling up all the needed inputs.
On the conceptual AGI view, this isn't the case, because we aren't very greatly increasing the number of great natural philosophers doing great natural philosophy about intelligence. That's a necessary input, and it's only being somewhat scaled up. For one thing, many new AI researchers are correlated with each other, and many are focused on scaling up, applying, and varying existing ideas. For another thing, sequential progress can barely be sped up with more bodies.
The "big evolution" argument
----------------------------
Carl goes on to argue that eventually, when we have enough compute, we'll be able to run a really big evolutionary process that finds AGIs (if we haven't already made AGI). This idea also appears in [Ajeya Cotra's report](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) on the compute needed to create AGI.
I broadly agree with this. But I have two reasons that this argument doesn't make AGI seem very likely very soon.
The first reason is that running a big evolution actually seems kind of hard; it seems to take significant conceptual progress and massive engineering effort to make the big evolution work. What I'd expect to see when this is tried, is basically nothing; life doesn't get started, nothing interesting happens, the entities don't get far (beyond whatever primitives were built in). You can get around this by invoking more compute, e.g. by simulating physics more accurately at a more detailed level, or by doing hyperparameter search to find worlds that lead to cool stuff. But then you're invoking more compute. (I'd also expect a lot of the hacks that supposedly make our version of evolution much more efficient than real evolution, to actually result in our version being circumscribed, i.e. it peters out because the shortcut that saved compute also cut off some important dimensions of search.)
The second reason is that evolution seems to take a lot of serial time. There's probably lots of clever things one can do to shortcut this, but these would be significant conceptual progress.
"I see how to do it"
--------------------
My (limited / filtered) experience with these ideas leads me to think that [ideas knowably sufficient to make an AGI in practice] aren't widespread or obvious. (Obviously it is *somehow* feasible to make an AGI, because evolution did it.)
The "no blockers" intuition
---------------------------
An intuition that I often encounter is something like this:
>
> Previously, there were blockers to current systems being developed into AGI. But now those blockers have been solved, so AGI could happen any time now.
>
This sounds to my ears like: "I saw how to make AGI, but my design required X. Then someone made X, so now I have a design for an AGI that will work.". But I don't think that's what they think. I think they don't think they have to have a design for an AGI in order to make an AGI.
I kind of agree with some version of this——there's a lot of stuff you don't have to understand, in order to make something that can do some task. We observe this in modern ML. But current systems, though they impressively saturate some lower-dimensional submanifold of capability-space, don't permeate a full-dimensional submanifold. Intelligence is a positive thing. Most computer code doesn't put itself on an unbounded trajectory of gaining capabilities. To make it work you have to do engineering and science, at some level. Bridges don't hold weight just because there's nothing blocking them from holding weight.
Daniel Kokotajlo points out that for things that grow, it's kind of true that they'll succeed as long as there aren't blockers——and for example animal husbandry kind of just works, without the breeders understanding much of anything about the internals of why their selection pressures are met with adequate options to select. This is true, but it doesn't seem very relevant to AGI because we're not selecting from an existing pool of highly optimized "genomic" (that is, mental) content. If instead of tinkering with de novo gradient-searched circuits, we were tinkering with remixing and mutating whole-brain emulations, then I would think AGI comes substantially sooner.
Another regime where "things just work" is many mental contexts where a task is familiar enough in some way that you can expect to succeed at the task by default. For example, if you're designing a wadget, and you've previously designed similar wadgets to similar specifications, then it makes sense to treat a design idea as though it's going to work out——as though it can be fully fleshed out into a satisfactory, functioning design——*unless* you see something clearly wrong with it, a clear blocker like a demand for a metal with unphysical properties. Again, like the case of animal husbandry, the "things just work" comes from the (perhaps out of sight) preexisting store of optimized content that's competent to succeed at the task given a bit of selection and arrangement. In the case of AGI, no one's ever built anything like that, so the store of knowledge that would automatically flesh out blockerless AGI ideas is just not there.
Yet another such regime is markets, where the crowd of many agents can be expected to figure out how to do something as long as it's feasible. So, a version of this intuition goes:
>
> There are a lot of people trying to make AGI. So either there's some strong blocker that makes it so that no one can make AGI, or else someone will make AGI.
>
This is kind of true, but it just goes back to the question of how much conceptual progress will people make towards AGI. It's not an argument that we already have the understanding needed to make AGI. If it's used as an argument that we already have the understanding, then it's an accounting mistake: it says "We already have the understanding. The reason we don't need more understanding, is that if there were more understanding needed, someone else will figure it out, and then we'll have it. Therefore no one needs to figure anything else out.".
Finally: I also see a fair number of specific "blockers", as well as some indications that existing things don't have properties that would scare me.
"We just need X" intuitions
---------------------------
Another intuition that I often encounter is something like this:
>
> We just need X to get AGI. Once we have X, in combination with Y it will go all the way.
>
Some examples of Xs: memory, self-play, continual learning, curricula, AIs doing AI research, learning to learn, neural nets modifying their own weights, sparsity, learning with long time horizons.
For example: "Today's algorithms can learn anything given enough data. So far, data is limited, and we're using up what's available. But self-play generates infinite data, so our systems will be able to learn unboundedly. So we'll get AGI soon.".
This intuition is similar to the "no blockers" intuition, and my main response is the same: the reason bridges stand isn't that you don't see a blocker to them standing. See above.
A "we just need X" intuition can become a "no blockers" intuition if someone puts out an AI research paper that works out some version of X. That leads to another response: just because an idea is, at a high level, some kind of X, doesn't mean the idea is anything like the fully-fledged, generally applicable version of X that one imagines when describing X.
For example, suppose that X is "self-play". One important thing about self-play is that it's an infinite source of data, provided in a sort of curriculum of increasing difficulty and complexity. Since we have the idea of self-play, and we have some examples of self-play that are successful (e.g. AlphaZero), aren't we most of the way to having the full power of self-play? And isn't the full power of self-play quite powerful, since it's how evolution made AGI? I would say "doubtful". The self-play that evolution uses (and the self-play that human children use) is much richer, containing more structural ideas, than the idea of having an agent play a game against a copy of itself.
Most instances of a category are not the most powerful, most general instances of that category. So just because we have, or will soon have, some useful instances of a category, doesn't strongly imply that we can or will soon be able to harness most of the power of stuff in that category. I'm reminded of [the politician's syllogism](https://en.wikipedia.org/wiki/Politician%27s_syllogism): "We must do something. This is something. Therefore, we must do this.".
The bitter lesson and the success of scaling
--------------------------------------------
[Sutton's bitter lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html), paraphrased:
>
> AI researchers used to focus on coming up with complicated ideas for AI algorithms. They weren't very successful. Then we learned that what's successful is to leverage computation via general methods, as in deep learning and massive tree search.
>
Some add on:
>
> And therefore what matters in AI is computing power, not clever algorithms.
>
This conclusion doesn't follow. Sutton's bitter lesson is that figuring out *how to leverage* computation using *general methods* that *scale with more computation* beats trying to perform a task by encoding human-learned specific knowledge about the task domain. You still have to come up with the general methods. It's a different sort of problem——trying to aim computing power at a task, rather than trying to work with limited computing power or trying to "do the task yourself"——but it's still a problem. To modify a famous quote: "In some ways we feel we are as bottlenecked on algorithmic ideas as ever, but we believe we are bottlenecked on a higher level and about more important things."
Large language models
---------------------
Some say:
>
> LLMs are already near-human and in many ways super-human general intelligences. There's very little left that they can't do, and they'll keep getting better. So AGI is near.
>
This is a hairy topic, and my conversations about it have often seemed not very productive. I'll just try to sketch my view:
* The existence of today's LLMs is scary and should somewhat shorten people's expectations about when AGI comes.
* LLMs have fixed, partial concepts with fixed, partial understanding. An LLM's concepts are like human concepts in that they can be combined in new ways and used to make new deductions, in some scope. They are unlike human concepts in that they won't grow or be reforged to fit new contexts. So for example there will be some boundary beyond which a trained LLM will not recognize or be able to use a new analogy; and this boundary is well within what humans can do.
* An LLM's concepts are mostly "in the data". This is pretty vague, but I still think it. A number of people who think that LLMs are basically already AGI have seemed to agree with some version of this, in that when I describe something LLMs can't do, they say "well, it wasn't in the data". Though maybe I misunderstand them.
* When an LLM is trained more, it gains more partial concepts.
* However, it gains more partial concepts with poor sample efficiency; it mostly only gains what's in the data.
* In particular, even if the LLM were being continually trained (in a way that's similar to how LLMs are already trained, with similar architecture), it still wouldn't do the thing humans do with quickly picking up new analogies, quickly creating new concepts, and generally reforging concepts.
* LLMs don't have generators that are nearly as powerful as the generators of human understanding. The stuff in LLMs that seems like it comes in a way that's similar to how stuff in humans comes, actually comes from a lot more data. So LLMs aren't that much of an indication that we've figured out how to make things that are on an unbounded trajectory of improvement.
* LLMs have a weird, non-human shaped set of capabilities. They go much further than humans on some submanifold, and they barely touch some of the full manifold of capabilities. (They're "unbalanced" in Cotra's terminology.)
* There is a *broken inference*. When talking to a human, if the human emits certain sentences about (say) category theory, that strongly implies that they have "intuitive physics" about the underlying mathematical objects. They can recognize the presence of the mathematical structure in new contexts, they can modify the idea of the object by adding or subtracting properties and have some sense of what facts hold of the new object, and so on. This inference——emitting certain sentences implies intuitive physics——doesn't work for LLMs.
* The broken inference is broken because these systems are optimized for being able to perform all the tasks that don't take a long time, are clearly scorable, and have lots of data showing performance. There's a bunch of stuff that's really important——and is a key indicator of having underlying generators of understanding——but takes a long time, isn't clearly scorable, and doesn't have a lot of demonstration data. But that stuff is harder to talk about and isn't as intuitively salient as the short, clear, demonstrated stuff.
* Vaguely speaking, I think stable diffusion image generation is comparably impressive to LLMs, but LLMs seem even more impressive to some people because LLMs break the performance -> generator inference more. We're used to the world (and computers) creating intricate images, but not creating intricate texts.
* There is a *missing update*. We see impressive behavior by LLMs. We rightly update that we've invented a surprisingly generally intelligent thing. But we should also update that this behavior surprisingly turns out to not require as much general intelligence as we thought.
Other comments on AGI soon
==========================
* There's a seemingly wide variety of reasons that people I talk to think AGI comes soon. This seems like evidence for each of these hypotheses: that AGI comes soon is overdetermined; that there's one underlying crux (e.g.: algorithmic progress isn't needed to make AGI) that I haven't understood yet; that I talked to a heavily selected group of people (true); that people have some other reason for saying that AGI comes soon, and then rationalize that proposition.
* I'm somewhat concerned that people are being somewhat taken in by hype (experiments systematically misinterpreted by some; the truth takes too long to put on its pants, and the shared narrative is already altered).
* I'm kind of baffled that people are so willing to say that LLMs understand X, for various X. LLMs do not behave with respect to X like a person who understands X, for many X.
* I'm pretty concerned that many people are fairly strongly deferring to others, in a general sense that includes updating off of other people's actions and vibes. Widespread deference has many dangers, which I list in "[Dangers of deference](https://tsvibt.blogspot.com/2022/09/dangers-of-deferrence.html)".
* I'm worried that there's a bucket error where "I think AGI comes soon." isn't separated from "We're going to be motivated to work together to prevent existential risk from AGI.".
My views on strategy
====================
* Alignment is really hard. No one has good reason to think any current ideas would work to make an aligned / corrigible AGI. If AGI comes, everyone dies.
* If AGI comes in five years, everyone dies. We won't solve alignment well enough by then. This of course doesn't imply that AGI coming soon is less likely. However, it does mean that some people should focus on somewhat different things. Most people trying to make the world safe by solving AGI alignment should be open to trains of thought that likely will only be helpful in twenty years. There will be a lot of people who can't help the world if AGI comes in five years; if those people are going to stress out about how they can't help, instead they should work on stuff that helps in twenty or fifty years.
* A consensus belief is often inaccurate, e.g. because of deference and information cascades. In that case, the consensus portfolio of strategies will be incorrect.
* Not only that, but furthermore: Suppose there is a consensus belief, and suppose that it's *totally correct*. If funders, and more generally anyone who can make stuff happen (e.g. builders and thinkers), use this *totally correct* consensus belief to make local decisions about where to allocate resources, *and they don't check the global margin*, then they will in aggregate follow a portfolio of strategies that is incorrect. The make-stuff-happeners will each make happen the top few things on their list, and leave the rest undone. The top few things will be what the consensus says is most important——in our case, projects that help if AGI comes within 10 years. If a project helps in 30 years, but not 10 years, then it doesn't get any funding at all. This is not the right global portfolio; it oversaturates fast interventions and leaves slow interventions undone.
* Because the shared narrative says AGI comes soon, there's less shared will for projects that take a long time to help. People don't come up with such projects, because they don't expect to get funding; and funders go on not funding such projects, because they don't see good ones, and they don't particularly mind because they think AGI comes soon.
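The margin argument above can be made concrete with a toy simulation (entirely my own sketch, not from the post; all numbers are arbitrary). Every funder shares the same *correct* belief about timelines, each funds its local top picks, and the aggregate portfolio still leaves slow interventions with nothing:

```python
# Each project: (name, value if AGI comes in ~10y, value if AGI comes in ~30y)
projects = [
    ("fast-A", 10, 1),
    ("fast-B", 9, 1),
    ("slow-A", 1, 8),
    ("slow-B", 1, 7),
]

P_FAST = 0.7  # shared (assume correct) probability that timelines are short

def expected_value(p):
    return P_FAST * p[1] + (1 - P_FAST) * p[2]

# Five funders, each independently funding their top-2 projects by expected value.
n_funders, k = 5, 2
ranked = sorted(projects, key=expected_value, reverse=True)
funding = {p[0]: 0 for p in projects}
for _ in range(n_funders):
    for p in ranked[:k]:
        funding[p[0]] += 1

print(funding)
# Every funder picks the same top-2, so the slow interventions get nothing --
# the aggregate portfolio is wrong even though each local decision was "correct".
```

A funder who checked the global margin would notice the slow projects sitting at zero and fund those instead, which is exactly the point of the argument.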
Things that might actually work
-------------------------------
Besides the standard stuff (AGI alignment research, moratoria on capabilities research, explaining why AGI is an existential risk), here are two key interventions:
* Human intelligence enhancement. Important, tractable, and neglected. Note that if alignment is hard enough that we can't solve it in time, but enhanced humans could solve it, then making enhanced humans one year sooner is almost as valuable as making AGI come one year later.
* Confrontation-worthy empathy. Important, probably tractable, and neglected.
+ I suspect there's a type of deep, thorough, precise understanding that one person (the intervener) can have of another person (the intervened), which makes it so that the intervener can confront the intervened with something like "If you and people you know succeed at what you're trying to do, everyone will die.", and the intervened can hear this.
+ This is an extremely high bar. It may go beyond what's normally called empathy, understanding, gentleness, wisdom, trustworthiness, neutrality, justness, relatedness, and so on. It may have to incorporate a lot of different, almost contradictory properties; for example, the intervener might have to at the same time be present and active in the most oppositional way (e.g., saying: I'm here, and when all is said and done you're threatening the lives of everyone I love, and they have a right to exist) while also being almost totally diaphanous (e.g., in fact not interfering with the intervened's own reflective processes). It may involve irreversible changes, e.g. risking inoculation effects and unilateralist commons-burning. It may require incorporating very distinct skills; e.g. being able to make clear, correct, compelling technical arguments, and also being able to hold emotional space in difficult reflections, and also being interesting and socially competent enough to get the appropriate audiences in the first place. It probably requires seeing the intervened's animal, and the intervened's animal's situation, so that the intervener can avoid being a threat to the intervened's animal, and can help the intervened reflect on other threats to their animal. Developing this ability probably requires recursing on developing difficult subskills. It probably requires to some extent thinking like a cultural-rationalist and to some extent thinking very much not like a cultural-rationalist. It is likely to have discontinuous difficulty——easy for some sorts of people, and then very difficult in new ways for other sorts of people.
+ Some people are working on related abilities. E.g. Circlers, authentic relaters, therapists. As far as I know (at least having some substantial experience with Circlers), these groups aren't challenging themselves enough. Mathematicians constantly challenge themselves: when they answer one sort of question, that sort of question becomes less interesting, and they move on to thinking about more difficult questions. In that way, they encounter each fundamental difficulty eventually, and thus have likely already grappled with the mathematical aspect of a fundamental difficulty that another science encounters.
+ Critch talks about empathy [here](https://www.lesswrong.com/posts/gZkYvA6suQJthvj4E/my-may-2023-priorities-for-ai-x-safety-more-empathy-more), though maybe with a different emphasis.
----------------------------------------
Source: LessWrong
The curious case of Pretty Good human inner/outer alignment
I have been convinced to believe that looking at the gap between human inner and outer alignment is a good way to think about potential inner/outer alignment problems in artificial general intelligences:
We have an optimisation process (evolution) trying to propagate genes, that created a general intelligence (me/you). For millions of years our inner goals of feeling really good would also satisfy evolution’s outer goal of propagating genes, because one of the things that feels the best is having sex. But eventually that intelligent agent figured out how to optimise for things that the outer optimisation process didn’t want, such as having protected sex or watching VR porn, thus satisfying the inner goal of feeling really good, but not the outer goal of propagating genes. This is often told as a cautionary tale: we only know of one General Intelligence and it’s misaligned. One day we will create an Artificial General Intelligence (AGI) and we will give it some sort of (outer) goal, and it might then develop an inner goal that doesn’t directly match what we intended. I think this only tells half the story.
Even though our general intelligence has allowed us to invent condoms and have sex without the added cost of children, a surprising amount of people decide to take them off because they find it fun and meaningful to have children.
In a world where we could choose to spend all our time having protected sex or doing drugs, a lot of us choose to have a reasonable number of kids and spend our time on online forums discussing AI safety, all of which seem to satisfy a more longer term version of “propagate your genes” than simply wanting to have sex because it feels good. More than that, we often choose to be nice in situations where being nice is even detrimental to the propagation of our own genes. People adopt kids, try to prevent wars, work on wildlife conservation, spend money on charity buying malaria nets across the world, and more.
I think there are two quest
----------------------------------------
Source: LessWrong
AI prediction case study 2: Dreyfus's Artificial Alchemy
Myself, Kaj Sotala and Seán ÓhÉigeartaigh recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.
The prediction classification schemas can be found in the first case study.
Dreyfus's Artificial Alchemy
* Classification: issues and metastatements, using the outside view, non-expert judgement and philosophical arguments.
Hubert Dreyfus was a prominent early critic of Artificial Intelligence. He published a series of papers and books attacking the claims and assumptions of the AI field, starting in 1965 with a paper for the Rand corporation entitled 'Alchemy and AI' (Dre65). The paper was famously combative, analogising AI research to alchemy and ridiculing AI claims. Later, D. Crevier would claim "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier" (Cre93). Ignoring the formulation issues, were Dreyfus's criticisms actually correct, and what can be learned from them?
Was Dreyfus an expert? Though a reasonably prominent philosopher, there is nothing in his background to suggest specific expertise with theories of minds and consciousness, and absolutely nothing to suggest familiarity with artificial intelligence and the problems of the field. Thus Dreyfus cannot be considered anything more than an intelligent outsider.
This makes the pertinence and accuracy of his criticisms that much more impressive. Dreyfus highlighted several over-optimistic claims for the power of AI, predicting - correctly - that the 1965 optimism would also fade (with, for instance, decent chess computers still a long way off). H
----------------------------------------
Source: LessWrong
Can Humanism Match Religion's Output?
Perhaps the single largest voluntary institution of our modern world—bound together not by police and taxation, not by salaries and managers, but by voluntary donations flowing from its members—is the Catholic Church.
It's too large to be held together by individual negotiations, like a group task in a hunter-gatherer band. But in a larger world with more people to be infected and faster transmission, we can expect more virulent memes. The Old Testament doesn't talk about Hell, but the New Testament does. The Catholic Church is held together by affective death spirals—around the ideas, the institutions, and the leaders. By promises of eternal happiness and eternal damnation—theologians don't really believe that stuff, but many ordinary Catholics do. By simple conformity of people meeting in person at a Church and being subjected to peer pressure. &c.
We who have the temerity to call ourselves "rationalists", think ourselves too good for such communal bindings.
And so anyone with a simple and obvious charitable project—responding with food and shelter to a tidal wave in Thailand, say—would be better off by far pleading with the Pope to mobilize the Catholics, rather than with Richard Dawkins to mobilize the atheists.
For so long as this is true, any increase in atheism at the expense of Catholicism will be something of a hollow victory, regardless of all other benefits.
True, the Catholic Church also goes around opposing the use of condoms in AIDS-ravaged Africa. True, they waste huge amounts of the money they raise on all that religious stuff. Indulging in unclear thinking is not harmless, prayer comes with a price.
To refrain from doing damaging things, is a true victory for a rationalist...
Unless it is your only victory, in which case it seems a little empty.
If you discount all harm done by the Catholic Church, and look only at the good... then does the average Catholic do more gross good than the average atheist, just by virtue of being more act
----------------------------------------
Source: LessWrong
Welcome to Brisbane Slate Star Codex Meetup [Edit With Your Details]
(The following are our suggestions for what kind of information is best to include in the welcome post of your group, feel free to replace them with whatever you think is best)
What kind of events does your group usually run? What does it usually do?
How frequently does your group organize events or meet?
Who would be a good fit for you group?
Should they have any particular skills or have done some specific background reading?
----------------------------------------
Source: Effective Altruism Forum
AGI Safety Communications Initiative
A small group of AGI existential safety field-builders and I are starting research exploring a potential initiative about informing the public and/or important stakeholders about the risks of misaligned AI and the difficulties of aligning it.
We are aware that a public communication initiative like this carries risks (including of harming the AGI x-safety community’s reputation, of sparking animosity and misunderstandings between communities, or drawing attention to ways to misuse or irresponsibly develop scaleable ML architectures). We are still in the stage of evaluating whether/how this initiative would be good to pursue.
We are posting this on the forum to avoid the scenario where someone else starts a project about this at the same time and we end up doing duplicate work.
How you can get involved:
* If you are currently undertaking work similar to this or are interested in doing so, message me your email address along with a bit of context about yourself/what you are doing.
* We are drafting a longer post to share our current considerations and open questions. Message me if you would like to review the draft.
* We are looking for one or two individuals who are excited to facilitate a research space for visiting researchers. The space will run in Oxford (one week in Sep ’22) and in Prague (9-16 Oct ’22) with accommodation and meals provided for. If you take on the role as facilitator, you will receive a monthly income of $2-3K gross for 3 months and actually get to spend most of that time on your own research in the area (of finding ways to clarify unresolved risks of transformative AI to/with other stakeholders). If you are interested, please message me and briefly describe your research background (as relevant to testing approaches for effective intergroup communication, conflict-resolution and/or consensus-building).
----------------------------------------
Source: LessWrong
Knowledge Base 3: Shopping advisor and other uses of knowledge base about products
In the post Knowledge Base 2: The structure and the method of building I proposed to build a knowledge base on various topics. In this post I am going to present some possible applications of this knowledge base constrained to knowledge about products available on the market. Possible applications of the general knowledge base are described in the post Knowledge Base 4: General applications.
Shopping advisor
Imagine an application that uses a knowledge base of products sold in stores to help consumers choose the right products (see the picture).
The knowledge base is marked in green. In this application we are only interested in a part of the knowledge base about products available on the market – initially only groceries. Information about products will be added to the database by producers, stores, and consumers.
A consumer sets (in the application’s profile) criteria of products that are important to him, specifying their weights. He can choose from 4 groups of criteria:
* related to him, for example taste of food or whether food is healthy (e.g. is it a part of a healthy diet?, or does it contain harmful additives? etc.)
* related to the environment, for example: is production or packaging harmful to the environment?
* related to animals, for example breeding conditions
* related to other people, for example: is the product made by local manufacturers?, does the manufacturer pay taxes in a chosen country?, etc.
Then the consumer can go to a store and use his smartphone to scan bar codes of selected groceries. A list of scanned products will appear on his smartphone. Products that meet the selected criteria will be marked in green, products that meet the criteria but are much cheaper in another store will be marked in yellow, and products that do not meet the criteria will be marked in red. Additionally, if a product does not meet the criteria, the application may suggest a similar but better product.
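The scoring the post describes could look something like this minimal sketch (all names, weights, and thresholds are my own hypothetical choices, not anything specified in the proposal):

```python
# Weighted-criteria scoring and the green/yellow/red classification described above.

def score(product, weights):
    # Weighted average of how well the product meets each criterion (0..1).
    total = sum(weights.values())
    return sum(weights[c] * product["criteria"].get(c, 0) for c in weights) / total

def classify(product, weights, same_product_elsewhere_price=None,
             threshold=0.6, cheaper_by=0.2):
    if score(product, weights) < threshold:
        return "red"      # does not meet the consumer's criteria
    if (same_product_elsewhere_price is not None
            and same_product_elsewhere_price < product["price"] * (1 - cheaper_by)):
        return "yellow"   # meets criteria but is much cheaper in another store
    return "green"

weights = {"taste": 3, "healthy": 2, "eco": 1}   # consumer profile with weights
pasta = {"price": 4.0, "criteria": {"taste": 0.9, "healthy": 0.7, "eco": 0.5}}
print(classify(pasta, weights))                                    # green
print(classify(pasta, weights, same_product_elsewhere_price=2.5))  # yellow
```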
The rules translating information set in a custom
----------------------------------------
Source: LessWrong
Life on the Grid (Part 2)
Previously: Life on the Grid (Part 1)
Note: there is of course some continuation of themes from part 1, but this essay was written to stand on its own so don’t feel like you have to read the previous installment before digging into this one.
----------------------------------------
Confinement by Sean Mundy
> “We are all in the gutter, but some of us are looking at the stars.”
> — Oscar Wilde
In part 1, I argued that our ability to navigate through both physical and mental landscapes (“fields of knowledge”) has degenerated, leaving us less willing to blaze trails or produce path-breaking innovations and generally lacking in agency and adventurousness. This degeneration of our navigational faculties has been caused by our reliance on automated wayfinding technologies and, more importantly, by an excessive “gridification” of the world, both materially (in our street networks and architecture) and socioeconomically (with our factory model schools and corporate ladders).
Here, we return to a theme that was only briefly touched on previously—the problem isn’t just that the world has become too grid-like, it is that, “nothing and nowhere escapes the techno-social net which we have cast over the planet. Uncharted territory has become a thing of the past.”
The seriousness of this problem cannot be overstated. Man feeds on terra incognita. The wildness of our imagination, the vitality of our spirit, the boldness of our dreams—these can only swell to their greatest extent when we feel as if there are hidden treasures or secrets waiting to be discovered.
> For eons, our minds and cultures have evolved in delicate symbiosis with the Unknown, that place on the map labeled “Here Be Dragons”. Without this Unknown, that place where there may be cities of gold or fountains of youth, the heroes (but not just the heroes, all of us) have nowhere to journey and all of the things which can make us into heroes—bravery, fortitude, ingenuity, daring, and the like—begin to atrophy. Wi
----------------------------------------
Source: LessWrong
Automated intelligence is not AI
Crossposted from world spirit sock puppet.
Sometimes we think of ‘artificial intelligence’ as whatever technology ultimately automates human cognitive labor.
I question this equivalence, looking at past automation. In practice human cognitive labor is replaced by things that don’t seem at all cognitive, or like what we otherwise mean by AI.
Some examples:
1. Early in the existence of bread, it might have been toasted by someone holding it close to a fire and repeatedly observing it and recognizing its level of doneness and adjusting. Now we have machines that hold the bread exactly the right distance away from a predictable heat source for a perfect amount of time. You could say that the shape of the object embodies a lot of intelligence, or that intelligence went into creating this ideal but non-intelligent tool.
2. Self-cleaning ovens replace humans cleaning ovens. Humans clean ovens with a lot of thought—looking at and identifying different materials and forming and following plans to remove some of them. Ovens clean themselves by getting very hot.
3. Carving a rabbit out of chocolate takes knowledge of a rabbit’s details, along with knowledge of how to move your hands to translate such details into chocolate with a knife. A rabbit mold automates this work, and while this route may still involve intelligence in the melting and pouring of the chocolate, all rabbit knowledge is now implicit in the shape of the tool, though I think nobody would call a rabbit-shaped tin ‘artificial intelligence’.
4. Human pouring of orange juice into glasses involves various mental skills. For instance, classifying orange juice and glasses and judging how they relate to one another in space, and moving them while keeping an eye on this. Automatic orange juice pouring involves for instance a button that can only be pressed with a glass when the glass is in a narrow range of locations, which opens an orange juice faucet running into a spot common to all the possible glass-locat
----------------------------------------
Source: LessWrong
Is there an assurance-contract website in work?
I bet I'm not the only one who, after finishing inadequate equilibria, thought in excitement "Ok, so where's the KickStarter for better Nash equlibria?".
I didn't find any existing site that does something like it, but i wonder if maybe someone in this community is working on it.
If you know anything about it, I'd appreciate a hint ;)
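For reference, the mechanism such a site would implement is simple; here is a minimal sketch (entirely my own illustration, not any existing site's code): pledges are only collected if the total reaches a preset threshold, otherwise nobody pays.

```python
# Assurance contract: collect pledges only if the threshold is met.

def settle(pledges, threshold):
    total = sum(pledges.values())
    if total >= threshold:
        return "funded", pledges                 # all pledges are collected
    return "refunded", {k: 0 for k in pledges}   # threshold missed: nobody pays

status, charges = settle({"alice": 50, "bob": 30}, threshold=100)
print(status)  # "refunded": the threshold wasn't met, so no one is charged
```

This conditionality is what lets a group escape a bad Nash equilibrium: no individual risks paying unless enough others commit too.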
----------------------------------------
Source: LessWrong
Random Musings on Theory of Impact for Activation Vectors
I want to share some thoughts on why activation vectors could be important, but before you read this post, you should know three things:
1. I don't know what I'm talking about
2. I don't know what I'm talking about
3. I definitely don't know what I'm talking about
Also, maybe this post is redundant and unnecessary[1] because it's already been explained somewhere I haven't seen. Or maybe I've seen someone write this elsewhere, but just can't remember the source.
But here goes:
Why might activation vectors be important for alignment when we have other ways of controlling the network such as prompting or fine-tuning?
One view of activation vectors is that they're a hack or a toy. Isn't it cool that you can steer a network by just passing a word through the network and adding it?
Another view would be that they're tapping into something fundamental, see Beren's Deep learning models might be secretly (almost) linear. In which case, we should expect this to keep working or even work better as networks scale up, rather than randomly breaking/stop working.
Another frame is as follows: language is an awkward format for neural networks to operate in, so they translate it into a latent space that separates out the important components, with of course some degree of superposition. Prompting and fine-tuning are roundabout ways of influencing the behavior of these networks: we're either pushing it into a particular simulation or training the network to get better at fooling us into thinking that it's doing a good job.
Ideally, we'd just have enough interpretability knowledge that we'd intervene on the latent space directly, setting or incrementing the exact right neurons. Activation vectors are a lot more scattergun than this, but they're closer to the ideal and work as a proof of concept.
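The contrast between roundabout influence and intervening on the latent space directly can be sketched with a toy model (entirely my own illustration; real activation steering operates on a transformer's residual stream, not this stand-in):

```python
import numpy as np

# Toy activation steering: take the difference of hidden activations for two
# inputs, then add that vector back in during a later forward pass.

rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d))          # stand-in for one layer's weights

def layer(h):
    return np.tanh(h @ W)

def embed(token_id):
    # Stand-in one-hot embedding lookup.
    e = np.zeros(d)
    e[token_id % d] = 1.0
    return e

# "happy" - "sad": difference of activations at the chosen layer.
act_happy = layer(embed(3))
act_sad = layer(embed(5))
steering_vector = act_happy - act_sad

def forward(token_id, steer=None, alpha=1.0):
    h = layer(embed(token_id))
    if steer is not None:
        h = h + alpha * steer        # intervene directly on the latent state
    return layer(h)                  # downstream computation sees the shift

baseline = forward(7)
steered = forward(7, steer=steering_vector)
print(np.linalg.norm(steered - baseline))  # nonzero: the vector changed the output
```

The subtraction is the "scattergun" part: it zeroes out whatever the two activations share and keeps the direction in which they differ.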
The trick of performing, say, the subtraction 'happy-sad' to get a happiness vector demonstrates how this can be refined by zeroing out all of the features that these have in common. The
----------------------------------------
Source: LessWrong
How to improve at critical thinking on science/medical literature?
I'm a medical student, and I will often read articles that are critical of scientific literature (Scott Alexander on Pharmacogenomics; EMCrit on thrombolysis in ischemic stroke, etc.) with some awe at the authors' ability to evaluate evidence.
I'm sure that part of this is practice. If I spend more time critically reading scientific literature, and less time taking experts at face value, I will likely become better able to think independently.
However, part of it strikes me as a lack of technical skills. I'm often unsure how to critique study designs when I don't understand the statistical methods being used.
Any recommendations for how I might get the skills I need to think independently about scientific/medical literature?
[Edit: Changed formatting of links after a comment]
----------------------------------------
Source: LessWrong
Subjective expected utility without preferences
In the latest issue of Journal of Mathematical Psychology, Denis Bouyssou and Thieery Marchant provide a model for subjective expected utility without preferences. Abstract:
> This paper proposes a theory of subjective expected utility based on primitives only involving the fact that an act can be judged either "attractive" or "unattractive". We give conditions implying that there are a utility function on the set of consequences and a probability distribution on the set of states such that attractive acts have a subjective expected utility above some threshold. The numerical representation that is obtained has strong uniqueness properties.
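In symbols, the representation described in the abstract has the following shape (my paraphrase of the standard form, with $p$ a subjective probability on the state space $S$, $u$ a utility on consequences, and $\theta$ the threshold; the paper's exact axioms are not reproduced here):

```latex
a \text{ is attractive} \iff \sum_{s \in S} p(s)\, u\bigl(a(s)\bigr) \ge \theta
```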
PDF.
----------------------------------------
Source: LessWrong
"Summary: Rigorous scientific experiments are hard to apply in daily life but we still want to try out and evaluate things like self-improvement methods. In doing so we can look for things such as a) effect sizes that are so large that they don't seem likely to be attributable to bias, b) a deep understanding of the mechanism of a technique, c) simple non-rigorous tests.
Hello there! This is my first attempt at a top-level post and I'll start it off with a little story.
Five years ago, in a kitchen in London...
My wife: We're going to have my friends over for dinner and we're making that pasta sauce everyone likes. I'm going to need you to cut some red peppers.
Me: Can do! *chop chop chop*
My wife: Hey, Mr. Engineer, you've got seeds all over! What are you doing to that pepper?
Me: Well, admittedly this time I was a bit clumsy and there's more seed spillage than usual - but it's precisely to avoid spilling seeds that I start by surgically removing the core and then...
My wife: Stop, just stop. That's got to be the worst possible way to do this. See, this is how you cut a pepper, *chop chop chop*. Nice slices, no mess.
Me: *is humiliated* *learns*
Now, ever since then I've cut peppers using the method my wife showed me. It's a much better way to do it. But wait! How do I know that? Don't I realize that humans are subject to massive cognitive biases? Maybe I just want to please my wife by doing things her way so I've convinced myself her method is better. Maybe I'm remembering the hits and forgetting the misses - maybe I've forgotten all the times my method worked out great and the times her method failed. Maybe I am indeed making less of a mess than I used to but it's simply due to my knife skills improving - and that would have happened even if I'd stuck with the old method. And there are probably a dozen more biases and confounding factors that could come into play that I haven't even thought of.

Don't I need to do a test? How about cutting up lots of peppers using the two alternative methods and measuring seed spillage? But, no, that's not good enough - I could subconsciously affect the result by applying less skill when using one method. I'd need a neutral party to operate the methods, preferably a number of people. And I'd need a neutral observer too. The person who measures the seed spillage from each operation should not know which method was used. Yeah, a double blind test, that's the ticket. That's what I should do, right?

No, obviously that's not what I should do. There are two reasons:

A) The resources needed to conduct the suggested test are completely disproportional to any benefit such a test might yield.

B) I already bloody well know that my wife's method is better.

The first reason is obvious enough but the second reason needs a bit more exploration. Why do I know this? I think there are two reasons:

* The effect size is large and sustained. Previously, I used to make a mess just about every time. After I switched methods I get a clean cut just about every time.
* I understand the mechanism explaining the effect very well.
I can see what's wrong with the method I was using previously (if I try to pull the core through a hole that's too small for its widest part then some seeds will rub off) and I can see how my wife's method doesn't have that problem (no pulling the core through a hole, just cut around it).

I'd like to try to generalize from this example. Many people on this site are interested in methods for self-improvement, e.g. methods for fighting akrasia or developing social skills. Very often, those methods have not been tested scientifically and we do not ourselves have the resources to conduct such tests. Even in cases where there have been scientific experiments we cannot be confident in applying the results to ourselves. Even if a psychology experiment shows that a certain way of doing things has a statistically significant1 effect on some group, that is no guarantee that it will have an effect on a particular individual. So, it is no surprise that discussion of self-improvement methods is frequently met with skepticism around here. And that's largely healthy.

But how can we tell whether a self-improvement method is worth trying out? And if we do try it, how can we tell if it's working for us? One thing we can do, like in the pepper example, is to look for large effects and plausible mechanisms. Biases and other confounding factors make it hard for us to tell the difference between a small negative effect, no effect and a small positive effect. But we still have a decent chance of correctly telling the difference between no effect and a large effect.

Another thing we can do is to use some science. Just because a rigorous double blind test with a hundred participants isn't practical doesn't mean we can't do any tests at all. A person trying out a new diet will weigh themselves every day. And if you're testing out a self-improvement technique then you can try to find some metric that will give you an idea of how well you are doing.
Trying out a method for getting more work done on your dissertation? Maybe you should measure your daily word count; it's not perfect but it's something. As xkcd's Zombie Feynman would have it, "Ideas are tested by experiment, that is the core of science."

Erring on the side of too much credulity is bad and erring on the side of too much skepticism is also bad. Both prevent us from becoming stronger.

1) As good Bayesians we, of course, find psychologists' obsession with null hypotheses and statistical significance to be misguided and counterproductive. But that's a story for another time."
The above is a chunk of text from a post created by a user on the website LessWrong.
What is this text talking about?
This text is discussing the application of scientific methods and critical thinking to everyday life situations and self-improvement techniques. The author uses an anecdote about learning a better way to cut a pepper from his wife to illustrate his points. He argues that while rigorous scientific experiments may not always be feasible or necessary in daily life, it's still possible to evaluate the effectiveness of methods or techniques based on observable effects, understanding of the underlying mechanism, and simple tests.
The author also addresses the challenges of applying scientific findings to individual cases, particularly in the context of self-improvement methods. He suggests that large, observable effects and plausible mechanisms can be indicators of a method's effectiveness. Additionally, he advocates for the use of simple, non-rigorous tests to gauge progress or effectiveness, such as tracking daily word count when trying a new productivity technique.
The author concludes by cautioning against both excessive credulity and excessive skepticism, as both can hinder personal growth and improvement.
Could you elaborate on how one might practically apply the concept of observing large effects and plausible mechanisms in evaluating a new self-improvement technique?
Sure, let's take the example of a new time management technique as a self-improvement method.
Observing large effects: If you start using a new time management technique, a large effect could be something like a significant increase in your productivity or a drastic reduction in your stress levels. For instance, if you were previously completing three tasks per day, but after applying the new technique, you're consistently completing six tasks per day, that's a large effect. This could be an indicator that the technique is effective.
Understanding plausible mechanisms: This involves understanding why or how the technique is supposed to work. For the time management technique, the mechanism might involve prioritizing tasks, eliminating distractions, or breaking tasks into manageable chunks. If you understand the mechanism, you can see why the technique might lead to increased productivity or reduced stress.
In practical terms, to evaluate a new self-improvement technique, you could:
1. Define what a 'large effect' would look like for you in the context of the technique. This could be a quantifiable goal or a noticeable change in your behavior or feelings.
2. Understand the underlying principles or mechanisms of the technique. Research or ask the person who introduced you to the technique about why it's supposed to work.
3. Apply the technique consistently for a set period of time.
4. Observe any changes or effects during this period. This could involve keeping a journal, tracking specific metrics, or simply noting changes in your feelings or behavior.
5. At the end of the period, assess whether you've observed the 'large effects' you defined at the start, and whether these changes can plausibly be attributed to the mechanisms of the technique.
Remember, this is a simplified approach and doesn't replace rigorous scientific testing, but it can be a practical way to evaluate self-improvement techniques in your own life.
Explain how journaling can be used as a tool to track specific metrics when evaluating the effectiveness of a new self-improvement technique.
Journaling can be an effective tool for tracking specific metrics when evaluating a self-improvement technique because it allows for detailed, personal records over time. Here's how you can use it:
1. Define Your Metrics: Before you start journaling, decide what metrics are important for the self-improvement technique you're evaluating. For instance, if you're trying a new productivity technique, your metrics could be the number of tasks completed, hours spent on productive work, or how often you meet your deadlines.
2. Regular Entries: Make regular journal entries, ideally daily. In each entry, record data related to your defined metrics. For instance, you might write down how many tasks you completed that day, how long you worked without distraction, or whether you met your goals for the day.
3. Subjective Observations: In addition to hard data, use your journal to record subjective observations. How did you feel throughout the day? Did the technique make you feel more stressed or less? Did you feel more or less productive? These subjective observations can provide context for your metrics and help you understand the impact of the technique on your overall well-being.
4. Reflection and Analysis: After a set period of time, review your journal entries. Look for trends or patterns in your metrics. Did your productivity increase over time? Did your stress levels decrease? Reflect on your subjective observations as well - did your feelings about the technique change over time?
5. Make Adjustments: Based on your analysis, you can decide whether the technique is effective for you, or if you need to make adjustments. Perhaps you need to tweak the technique, or try a different one altogether.
By tracking specific metrics and subjective observations in a journal, you can gather data about the effectiveness of a self-improvement technique in your own life. This can help you make informed decisions about what techniques work best for you.
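To make this concrete, here is a minimal Python sketch of the kind of before/after comparison a journal enables. The metric ("tasks completed per day") and all numbers are invented for illustration:

```python
# Hypothetical example: "tasks completed per day" logged in a journal for the
# week before and the week after adopting a technique (all numbers invented).
from statistics import mean

baseline = [3, 2, 4, 3, 3, 2, 3]    # week before
after = [5, 6, 5, 7, 6, 5, 6]       # week after

def relative_change(before, after):
    """Fractional change in the mean of a tracked metric."""
    b, a = mean(before), mean(after)
    return (a - b) / b

print(f"mean before: {mean(baseline):.2f}")
print(f"mean after:  {mean(after):.2f}")
print(f"relative change: {relative_change(baseline, after):+.0%}")   # → +100%
```

A doubling like this would count as a "large effect" in the sense above; a few-percent wiggle would not, and could easily be noise.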
|
dd0318d0-98b8-4d41-8a1c-c16be26f202c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Don't Get Offended
Related to: Politics is the Mind-Killer, Keep Your Identity Small
Followed By: How to Not Get Offended
One oft-underestimated threat to epistemic rationality is getting offended. While getting offended by something sometimes feels good and can help you assert moral superiority, in most cases it doesn't help you figure out what the world looks like. In fact, getting offended usually makes it harder to figure out what the world looks like, since it means you won't be evaluating evidence very well. In Politics is the Mind-Killer, Eliezer writes that "people who would be level-headed about evenhandedly weighing all sides of an issue in their professional life as scientists, can suddenly turn into slogan-chanting zombies when there's a Blue or Green position on an issue." Don't let yourself become one of those zombies-- all of your skills, training, and useful habits can be shut down when your brain kicks into offended mode!
One might point out that getting offended is a two-way street and that it might be more appropriate to make a post called "Don't Be Offensive." That feels like a just thing to say-- as if you are targeting the aggressor rather than the victim. And on a certain level, it's true-- you shouldn't try to offend people, and if you do in the course of a normal conversation it's probably your fault. But you can't always rely on others around you being able to avoid doing this. After all, what's offensive to one person may not be so to another, and they may end up offending you by mistake. And even in those unpleasant cases when you are interacting with people who are deliberately trying to offend you, isn't staying calm desirable anyway?
The other problem I have with the concept of being offended as victimization is that, when you find yourself getting offended, you may be a victim, but you're being victimized by yourself. Again, that's not to say that offending people on purpose is acceptable-- it obviously isn't. But you're the one who gets to decide w
|
7e5b8e11-324c-46ef-bc16-47d9147d5724
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What exactly does 'Slow Down' look like?
I see a lot of posts regarding regulation, usually these involve some sort of phrase like 'Slow Down' or 'Shut Down'. Pragmatically, what does that look like? Is there an actual draft of a proposed bill somewhere that anyone who had the ear of a Senator could point to and say "do that"?
If that does not exist, shouldn't getting a single draft of the bill which (even if imperfect) everyone agrees to support as a first step, be the top priority?
|
b397154f-a2cc-4956-bbe7-26728d40a9b7
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Intelligence explosion
An "intelligence explosion" is what happens if a machine intelligence has [fast, consistent returns on investing work into improving its own cognitive powers, over an extended period](https://intelligence.org/files/IEM.pdf). This would most stereotypically happen because it became able to optimize its own cognitive software, but could also apply in the case of "invested cognitive power in seizing all the computing power on the Internet" or "invested cognitive power in cracking the protein folding problem and then built nanocomputers".
|
8ff5d87e-6719-4962-b573-2b4b3ebfd88f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Agency and Coherence
Epistemic status: spitballing.
"Like Photons in a Laser Lasing"
> When you do lots of reasoning about arithmetic correctly, without making a misstep, that long chain of thoughts with many different pieces diverging and ultimately converging, ends up making some statement that is... still true and still about numbers! Wow! How do so many different thoughts add up to having this property? Wouldn't they wander off and end up being about tribal politics instead, like on the Internet?
>
> And one way you could look at this, is that even though all these thoughts are taking place in a bounded mind, they are shadows of a higher unbounded structure which is the model identified by the Peano axioms; all the things being said are true about the numbers. Even though somebody who was missing the point would at once object that the human contained no mechanism to evaluate each of their statements against all of the numbers, so obviously no human could ever contain a mechanism like that, so obviously you can't explain their success by saying that each of their statements was true about the same topic of the numbers, because what could possibly implement that mechanism which (in the person's narrow imagination) is The One Way to implement that structure, which humans don't have?
>
> But though mathematical reasoning can sometimes go astray, when it works at all, it works because, in fact, even bounded creatures can sometimes manage to obey local relations that in turn add up to a global coherence where all the pieces of reasoning point in the same direction, like photons in a laser lasing, even though there's no internal mechanism that enforces the global coherence at every point.
>
> To the extent that the outer optimizer trains you out of paying five apples on Monday for something that you trade for two oranges on Tuesday and then trading two oranges for four apples, the outer optimizer is training all the little pieces of yourself to be locally coherent in a way that can b
|
8cf0623f-b0c2-4782-90b4-3ca1032f6f48
|
trentmkelly/LessWrong-43k
|
LessWrong
|
More power to you
When people say things like “we were promised flying cars,” I sometimes wonder, “who promised this?”
I guess this is what they mean. From a 1959 ad that ran in the LA Times:
As a friend pointed out, “they're not even wearing seat belts!” The Los Angeles Times, June 21, 1959
“They’re working on it!” the ad claims. “Some of this is happening already.” (Implying, of course, that some of it was still pretty speculative.)
The ability to “dial” a book, lecture, or demonstration is here; the ultrasound dishwasher, automatic bed-maker, and flying car sadly are not.
But here’s what’s most interesting to me: First, the reference, without explanation or justification, to “tomorrow’s higher standard of living”—something people simply assumed was coming. Second, that it was uncontroversial that this higher standard of living would require more electricity. Finally, the boasting of doubling electricity production in ten years. As I’ve pointed out, society once had a different fundamental attitude towards growth.
The US did, in fact, approximately double electricity generation every decade from 1950 until 1973—the year of the oil crisis—after which electricity growth never really recovered. It took almost thirty years to double again, from 1973 to 2000, during which time it was just keeping up with population growth; that is, per-capita energy usage was not increasing. And since 2005, even the total amount produced has been flat:
US electricity generation. Data from US Energy Information Administration
“The electric companies are resolved,” the ad says, to maintain for America “the best electric power service in the world.” Well, we are still one of the top overall producers, second only to China:
But since the oil shocks, I’d say that “resolve” has been shaken.
|
e79af4f2-4bf2-4d0e-b2cd-16334367c655
|
trentmkelly/LessWrong-43k
|
LessWrong
|
My case for starting blogging
(This is a retrospective post on a month of daily blogging on my personal blog, what I got out of it, my case for the benefits of regular writing and why this is a good use of your time, and advice for starting a daily writing project yourself)
Retrospective
Overall, I am incredibly happy with this project! This began as a 15 min/day project, and ballooned into 2-3 hours/day by the end, which ended up a dramatically bigger time sink than expected. But I consider this super worth it!
I find it useful to divide happiness into happiness of the experiencing self, feeling joy in the moment, and happiness of the remembering self, looking back at the action and feeling satisfied that it happened. And blogging was definitely a good project for the experiencing self! But now I have some perspective, I want to think about the value I’ve gotten for the remembering self:
* Satisfaction
* I’m really happy that I’ve made something - the total length is about 67,000 words, which is about the length of a standard book. Which feels exciting, and not quite real
* Intuitively, I feel like books should be longer than that, but clearly my reference class here is fucked - most reasonable sources agree with this number
* I deliberately optimised blog posts for the ability to say in conversation “oh, I have a blog post about this” - in part because this was a good way to select for important ideas, and in part because I find it really satisfying to be able to say this. And this has definitely worked!
* Projects that appeal to my inherent sense of smugness are the best projects
* I’ve heard from a range of people who’ve read various posts that they’ve enjoyed them and gotten value from it, and a handful of tangible ways this has been useful (I really enjoy receiving these messages ;) )
* Perfectionism
* One of the original goals was to become less of a perfectionist. So I very explicitly set out to not be a perfectionist when blogging, and to onl
|
1f291dff-9941-432b-b213-2903296de3ad
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Asymptotic Convergence in Online Learning with Unbounded Delays
1 Introduction
---------------
We study the problem of predicting the results of computations that are too large to evaluate, given observation of the results of running many smaller computations. For example, we might have a physics simulator and want to predict the final location of a ball in a large environment, after observing many simulated runs of small environments.
When predicting the outputs of computations so large that they cannot be evaluated, generating training data requires a bit of creativity. Intuitively, one potential solution is this: Given enough computing resources to evaluate “medium-sized” computations, we could train a learner by showing it many runs of small computations, and having it learn to predict the medium-sized ones, in a way that generalizes well. Then we could feed it runs of many medium-sized computations and have it predict large ones. This is an online learning problem, where the learner observes the results of more and more expensive computations, and predicts the behavior of computations that are much more difficult to evaluate than anything it has observed so far.
The standard online learning setting, in which the learner predicts an outcome in a sequence after observing all previous outcomes, does not capture this problem, because delays between prediction and observation are the key feature. Dudik (2011), Joulani (2013), and others have studied online learning with delayed feedback, but they assume that delays are bounded, whereas in our setting the delays necessarily grow ever-larger. In this paper, we propose an algorithm EvOp for online learning with unbounded delays. EvOp is not a practical algorithm; it is only a first step towards modeling the problem of predicting large computations as an online learning problem.
Predicting a sequence generated by arbitrary computations is intractable in general. Consider, for instance, the bitstring that tells which Turing machines halt. However, the problem is not hopeless, either: Consider the bitstring where the nth digit is a 1 if and only if the 10^n-th digit in the decimal expansion of π is a 7. This is an online learning problem with ever-growing delays where a learner should be able to perform quite well. A learner that attempts to predict the behavior of computations in full generality will encounter some subsequences that it cannot predict, but it will encounter others that are highly regular, and it should be able to identify those and predict them well.
Consider, for instance, the bitstring that interleaves information about which Turing machines halt with the 10^n-th digits of π. Intuitively, a good predictor should identify the second subsequence, and assign extreme probabilities whenever it has the computing resources to compute the digit, and roughly 10% probability otherwise, in lieu of other information about the digit. However, it’s not clear how to formalize this intuition: What does it mean for a forecaster to have no relevant information about a digit of π that it knows how to compute? What are the “correct” probabilities a bounded reasoner should assign to deterministic facts that it lacks the resources to compute?
In this paper, we sidestep those questions by analyzing the problem in a stochastic setting. This lets us study the problem of picking out patterns in subsequences in the face of unbounded delays, in a setting where the “correct” probabilities that a predictor should be assigning are well-defined. In Section 5 we relate our findings back to the deterministic setting, making use of “algorithmic randomness” as described by, e.g., Downey (2010).
We propose an algorithm EvOp with the property that, on any subsequence for which an expert that it consults predicts the true probabilities, it converges to optimal behavior on that subsequence. We show that regret and average regret are poor measures of performance in this setting, by demonstrating that in environments with unbounded delays between prediction and feedback, optimal predictors can fail to have average regret going to zero. EvOp works around these difficulties by comparing forecasters on sparse subsequences of their predictions; this means that, while we can put bounds on how long it takes EvOp to converge, the bounds are very, very weak. Furthermore, EvOp is only guaranteed to converge to good behavior on subsequences when it has access to optimal experts; we leave it to future work to give a variant that can match the behavior of the best available expert even if it is non-optimal.
In Section 2 we define the problem of online learning with unbounded delays. In Section 3 we show that consistency is impossible and discuss other difficulties. In Section 4 we define EvOp, prove that it converges to Bayes-optimal behavior on any subsequence for which some expert makes Bayes-optimal predictions, and provide very weak bounds on how long convergence takes. In Section 5 we relate these results back to the deterministic setting. Section 6 concludes.
### 1.1 Related Work
An early example of online sequence learning using expert advice is Littlestone (1994); much work has been done since then to understand how to perform well relative to a given set of forecasters (Vovk, 1990; Cesa, 1998; Haussler, 1995). Rakhlin (2012) improve performance of online learning algorithms assuming some structure in the environment, while maintaining worst-case guarantees. Gofer (2013) study the case with a potentially unbounded number of experts.

Most work in online learning has focused on the case where feedback is immediate. Piccolboni (2001) study online prediction with less rigid feedback schemes, proving only weak performance bounds. Weinberger (2002) show that running experts on sub-sampled sequences can give better bounds for the case with bounded feedback delay. In the widely studied bandit setting (Auer, 2002), some attention has been given to learning with bounded delays (Neu, 2010; Dudik, 2011). There have been some attempts to work with unbounded feedback delays (Mesterharm, 2005, 2007; Desautels, 2014), with either strong assumptions on the target function or with weak performance bounds. Quanrud (2015) achieve reasonable regret bounds in an adversarial setting; our work achieves asymptotic convergence in a stochastic setting. A review, and a very general framework for online learning with arbitrary (but bounded) feedback delay, is given by Joulani (2013).

Online learning with delayed feedback has applications in domains such as webpage prefetching, since the prediction algorithm has to make some prefetching decisions before learning whether a previously fetched page ended up being requested by the user (Padmanabhan, 1996). The idea of learning from computations with delay has seen some use in parallel computation, e.g., distributed stochastic optimization where computations of gradients may take longer in some nodes (Zinkevich, 2009; Agarwal, 2011).

Outside the field of online learning, our work has interesting parallels in the field of mathematical logic. Hutter (2013) and Demski (2012) study the problem of assigning probabilities to sentences in logic while respecting certain relationships between them, a practice that dates back to Gaifman (1964). Because sentences in mathematical logic are expressive enough to make claims about the behavior of computations (such as “this computation will use less memory than that one”), their work can be seen as a different approach to the problems we discuss in this paper.
2 The Unbounded Delay Model
----------------------------
Let X be a set of possible outcomes and Y be a set of possible predictions, where Y is a convex subset of ℝ^n for some n. Let L: X × Y → ℝ be a loss function measuring the difference between them, which is strongly convex (with strong convexity constant ρ) and Lipschitz (with Lipschitz constant κ). Roughly speaking, the environment will stochastically produce an infinite sequence of outcomes xi, and an infinite sequence of observations oi, where each oi contains information about finitely many xn. Formally, for each i=1,2,…, let oi: ℕ → X be a finite-domain partial function from indices to outcomes; in other words, oi is a set of (n,x) “feedback” pairs such that each n appears in at most one pair. We write oi(n) for the value of x associated with n, which is feedback about the outcome xn, and which may be undefined. If oi(n) is defined, we say that oi reveals xn.
Formally, we write Xi for the random variable representing the ith output and Oi for the random variable representing the ith observation. We define the *true environment* P to be a joint distribution over the Xi and the Oi, such that if oi(n)=xn then P(Oi=oi∧Xn≠xn)=0, which means that all oi(n) which are defined agree on the value of xn. We omit the random variables if we can do so unambiguously, writing, e.g., P(xn∣oi).
Note that there may exist n such that oi(n) is not defined for any i, in which case the forecaster will never observe xn. We write o≺i for the list of observations up to time i, and o≺i(n) for the value of xn if any observation in o≺i reveals it.
We consider learning algorithms that make use of some set F of forecasters.
Definition (forecaster). A forecaster is a partial function f which takes as input n observations o≺n and may produce a prediction yn ∈ Y, interpreted as a prediction of xn.
Because some outcomes may never be observed, and because forecasters are partial (and so may abstain from making predictions on certain subsequences of the outcomes), we will compare forecasters only on subsequences on which both are defined.
Definition (subsequence). A subsequence s of the outcomes is a strictly increasing list of natural numbers s1 < s2 < ⋯. We write |s| for the length of s, which may be ∞. A forecaster f is defined on s if it outputs a prediction for all elements si of s, i.e., if, for all i ≤ |s|, y_{s_i} := f(o≺s_i) is defined.
We assume that at least one f∈F is defined everywhere. It may seem prohibitively expensive to evaluate f(o≺si) if si is large. For example, consider the subsequence s=1,10,100,…; f only predicts x1010 after making 1010 observations, despite the fact that x1010 is the eleventh element in the subsequence. However, there is no requirement that observations contain lots of feedback: o≺si might not reveal very much, even if si is large.
The goal of a forecaster is to minimize its loss ∑_{i=1}^{n} L(x_{s_i}, y_{s_i}), for n ≥ 1. Two forecasters can be compared by comparing their total loss.
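As a rough illustration of this setup (our own sketch, not code from the paper), outcomes, partial observations, and subsequence loss can be represented directly; the dict/set encoding and all names are our choices:

```python
# Outcomes are x_1, x_2, ...; each observation o_i is a finite partial function
# from indices to outcomes, represented here as a dict (our own encoding).

def revealed(observations, n):
    """Return x_n if some observation reveals it, else None."""
    for o in observations:
        if n in o:
            return o[n]
    return None

def total_loss(L, outcomes, predictions, subsequence):
    """Total loss of a forecaster's predictions over a subsequence s."""
    return sum(L(outcomes[i], predictions[i]) for i in subsequence)

# Squared-error loss with outcomes encoded as 0/1 and predictions p in [0, 1].
L = lambda x, p: (x - p) ** 2

outcomes = {1: 1, 2: 0, 3: 1}
observations = [{1: 1}, {1: 1, 2: 0}]     # o_1 reveals x_1; o_2 reveals x_1 and x_2
predictions = {1: 0.5, 2: 0.5, 3: 0.5}
print(revealed(observations, 2))          # → 0
print(total_loss(L, outcomes, predictions, [1, 2, 3]))   # → 0.75
```

Note that x_3 is never revealed here, which the setting explicitly permits.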
Definition (regret). Given a forecaster f defined on a subsequence s of length at least n, let

    F_s := { f′ ∈ F ∣ f′ is defined on s }.    (1)
Then the regret of f (on s, through n) is

    R_n^s(f) := ∑_{i=1}^{n} L(x_{s_i}, f(o≺s_i)) − min_{f′ ∈ F_s} ∑_{i=1}^{n} L(x_{s_i}, f′(o≺s_i)).    (2)
f is consistent (with respect to F_s) if its average expected regret goes to zero, that is, if

    lim_{n→∞} 𝔼[R_n^s(f)] / n = 0.    (3)
In our setting, consistency is too strong a guarantee to ask for, as we will see in Section 3. Instead, we present an algorithm EvOp with the property that, whenever there is a forecaster f ∈ F that is Bayes-optimal on some subsequence, EvOp eventually learns to predict optimally on that subsequence.
Definition (Bayes-optimal). A forecaster f is Bayes-optimal (for the true environment, in its domain) if:
1. Everything f predicts is almost surely eventually revealed. That is, if f(o≺n) is defined, then with probability 1 there is some N such that oN(n) is defined.
2. f minimizes expected loss against the true environment whenever it makes a prediction. That is, if yn := f(o≺n) is defined, then yn = argmin_y 𝔼[L(xn, y) ∣ o≺n].
We will occasionally refer to a Bayes-optimal f as simply “optimal”.
The main result of our paper is this: Whenever there is an optimal forecaster f∈F defined on s, our algorithm EvOp converges to optimal behavior on s.
Theorem. For any Bayes-optimal f_s ∈ F defined on s,

    lim_{n→∞} | L(x_{s_n}, EvOp(o≺s_n)) − L(x_{s_n}, f_s(o≺s_n)) | = 0.    (4)
We call algorithms with this property *eventually optimal*.
We will define EvOp in Section 4 and prove the theorem there. Weak bounds on how long it takes EvOp to converge to Bayes-optimal behavior on any individual subsequence are given there as well.
Eventual optimality is a very strong condition, and only yields guarantees if F contains Bayes-optimal forecasters. In this paper we focus on showing that an eventually optimal predictor exists, and providing weak bounds on how long it takes it to converge to optimal behavior on a subsequence (and how much loss can be accumulated in the meantime). As we will see in Section 3, this is non-trivial. We leave the problem of converging on the best available forecaster of a subsequence (even if it is not optimal) to future research.
3 Difficulties in this Setting
-------------------------------
Total regret and average regret are poor measures of forecaster performance in this setting, and consistency (as defined in Section 2) is impossible in general. To show this, we will describe an environment P# which exploits the long delays to make learning difficult.
P# generates outcomes as follows. It flips a fair coin and reveals it once, and then flips another and reveals it ten times, then flips a third and reveals it one hundred times, and so on, always revealing the kth coin 10^{k−1} times. The forecasters spend one timestep predicting the first coin, ten timesteps predicting the second coin, one hundred timesteps predicting the third coin, and so on. The observations are set up such that they contain no information about the coin currently being predicted: The forecasters must predict the kth coin all 10^{k−1} times before it is revealed.
Formally, let X := {h, t}, corresponding to “heads” and “tails” respectively. Let Y be the set of probability distributions over X, which can be represented as a real number p ∈ [0, 1]. P# is a Markov chain, where each xi+1 is conditionally independent from all other outcomes given xi. P#(X1 = h) = 0.5. For i = 2, 12, 112, 1112, …, xi “reveals a new coin” and is independent of xi−1: P#(Xi = h ∣ Xi−1 = ·) = 0.5. For all other i, xi “reveals the same coin again”: xi = xi−1. Each On is a deterministic function of X1…Xn which reveals the first ⌈log10(n · 9/10)⌉ outcomes. Let L be squared error; that is, let L(h, p) = (1−p)^2 and L(t, p) = p^2.
Clearly, the best prediction of xn that a forecaster can make given o≺n is 0.5, because o≺n does not contain any information about the coin revealed by xn, which is fair. Thus, the simple forecaster f∗(o≺n)=0.5 is Bayes-optimal. However, the regret of f∗ may be very high! To see this, consider a forecaster f1, the “gambler,” defined f1(o≺n)=1. In expectation, f1 will receive higher total loss on any subsequence of the true outcomes. However, f1 will spend about half the time with a lower total loss than f∗, because each time a new coin begins being predicted, it has the opportunity to recoup all its losses.
f∗ accumulates loss at a rate of 1/4 units per prediction, which means that, after the kth coin has been predicted all 10^{k−1} times, its aggregate loss is (1/4) · ∑_{i=1}^{k} 10^{i−1}. f1 accumulates either 0 or 1 unit of loss in each step according to whether the coin comes up heads or tails, so in the worst case, it will have ∑_{i=1}^{k} 10^{i−1} total loss after the kth coin. If the (k+1)st coin comes up heads, then f∗ gains an additional (1/4) · 10^k loss while f1’s loss remains unchanged. 10^k accounts for more than nine tenths of ∑_{i=1}^{k} 10^i, so if the coin came up heads then f1’s total loss is at most a tenth of ∑_{i=1}^{k} 10^i, whereas f∗’s total loss is a quarter of ∑_{i=1}^{k} 10^i. In fact, any predictor that assigns average probability ≤ 0.5 across all 10^{k−1} reveals of the kth coin will have at least 15% more loss than f1 after the (∑_{i=1}^{k} 10^{i−1})-th step, if that coin comes up heads.
By a similar logic, whenever the kth coin comes up tails, f1’s loss shoots up above that of f∗, no matter how lucky it was previously. Thus we see that if f1∈F, the regret of f∗ will swing wildly back and forth. Any predictor which is maintaining a mixture of forecasters and weighting them according to their regret will have trouble singling out f∗.
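The bookkeeping above is easy to check numerically. Below is our own simulation sketch of the environment P# (not code from the paper), comparing the total loss of the Bayes-optimal f∗ against the gambler f1:

```python
# Sketch of P#: the k-th fair coin is predicted 10^(k-1) times before being
# revealed. f* always predicts 0.5; the gambler f1 always predicts heads.
import random

def simulate(num_coins, seed=0):
    """Total squared-error losses of f* and f1 over the first num_coins coins."""
    rng = random.Random(seed)
    loss_opt = loss_gambler = 0.0
    for k in range(1, num_coins + 1):
        heads = rng.random() < 0.5
        repeats = 10 ** (k - 1)            # the k-th coin is predicted 10^(k-1) times
        loss_opt += 0.25 * repeats         # squared error of p = 0.5 is 1/4 per step
        loss_gambler += (0.0 if heads else 1.0) * repeats  # p = 1: 0 on heads, 1 on tails
    return loss_opt, loss_gambler

opt, gambler = simulate(num_coins=6)
print(f"f* total loss: {opt}, gambler total loss: {gambler}")
```

Because the last coin dominates the totals, the gambler's cumulative loss swings between near zero and the worst case depending on a single flip, while f∗'s loss grows steadily at 1/4 per step, which is exactly why regret is such a noisy comparison here.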
Indeed, if the environment is P#, and if F contains both f1 and the opposite gambler f0 defined as f0(o≺n) = 0, then it is impossible for a forecaster to be consistent in the sense of the definition in Section 2. If the average probability a forecaster assigns to the kth coin is ≤ 0.5 and the coin comes up heads, it gets very high regret relative to f1, whereas if it’s ≥ 0.5 and the coin comes up tails, it gets very high regret relative to f0. The only way for a forecaster to avoid high regret against both gamblers is for it to place higher probability on the true result of the coin every single time. With probability 1 it must slip up infinitely often (because the coins are fair), so each forecaster’s regret will be high infinitely often. And the amount of regret (at least 15% of all possible loss) is proportional to n, so lim_{n→∞} 𝔼[Rn(f)] / n cannot go to zero.
Lest this seem like a peculiarity of the stochastic setting, observe that a similar problem could easily occur in the deterministic setting, when a learner is predicting the behavior of large computations. For example, imagine that the “coins” are chaotic subsystems inside a physics simulation, such that large environments have many correlated subsystems. In this case, some experts might start “gambling” by making extreme predictions about those subsystems, and it may become difficult to distinguish the accurate forecasters from the gamblers, while looking at total or average regret.
The first fix that comes to mind is to design a predictor with a learning rate that decays over time. For example, if the learner weights the loss on xn by 1/10^n then it will assign each cluster of 10^{k−1} predictions roughly equal weight, thereby neutralizing the gamblers. However, this fix is highly unsatisfactory: It runs into exactly the failures described above on the environment P#2 which reveals the kth coin 10^{10^k} times instead. It might be the case that for each specific environment one could tailor a learning rate to that environment that allows a predictor to successfully distinguish the optimal forecasters from the gamblers using regret, but this would be ad-hockery tantamount to hardcoding the optimal forecaster in from the beginning. This motivates the study of how a predictor can successfully identify optimal experts at all in this setting.
4 The EvOp Algorithm
---------------------
Section 3 showed that in this setting, it is possible for gamblers to take advantage of correlated outputs and unbounded delays to achieve drastic swings in their total loss, which makes total and average regret bad measures of a forecaster. We can address this problem by comparing forecasters only on *independent* subsequences of outcomes on which they are both defined.
Intuitively, the gamblers are abusing the fact that they can correlate many predictions before any feedback on those predictions is received, so we can foil the gamblers by assessing them only on a subsequence of predictions where each prediction in the subsequence was made only after receiving feedback on the previous prediction in the subsequence. EvOp is an algorithm which makes use of this intuition, and the theorem above shows that it is sufficient to allow EvOp to zero in on Bayes-optimal predictors regardless of what strategies other forecasters in F use.
Definition (independent sequence). A sequence s is independent if, for all i > 1, o_{s_i}(s_{i−1}) is defined.
Algorithm (EvOp, an eventually optimal predictor). ‖·‖ is the l2 norm, and 1/0 = ∞.

Input: o≺n, the first n observations
Data: ε, an arbitrary constant < 1

// Computes an independent subsequence on which f_i and f_j disagree.
function testseq_n(i, j, m):
    t ← 0
    waiting ← false
    for k in 1, 2, …, n:
        if waiting and t ∈ dom(o_k) then
            output k
            waiting ← false
        else if y_k^i := f_i(o≺k) and y_k^j := f_j(o≺k) are defined, and ‖y_k^i − y_k^j‖ > 1/m then
            t ← k
            waiting ← true

// Computes the difference between the scores of f_i and f_j on an independent
// subsequence on which they disagree.
function relscore_n(i, j, m):
    s ← testseq_n(i, j, m)
    return ∑_{k=1}^{|s|} ( L(x_{s_k}, y_{s_k}^i) − L(x_{s_k}, y_{s_k}^j) − ρε/(2m^2) )

function maxscore_n(i):
    return max_{j ∈ ℕ≥1, m ∈ ℕ≥0} ( i − j − m + relscore_n(i, j, m) )

f ← the f_i minimizing maxscore_n(i), over i ∈ ℕ≥1 such that f_i(o≺n) is defined
return f(o≺n)
EvOp works as follows. Fix an enumeration f1, f2, … of F, which must be countable but need not be finite; we can assume without loss of generality that this enumeration is countably infinite. EvOp compares fi to fj by giving fi a relative score, which depends on the difference between their losses measured only on an independent subsequence of predictions on which they are both defined, constructed greedily. Lower scores are better for fi. The score also depends on ρ, the strong convexity constant of L, and on an arbitrary positive ε < 1, which we use to ensure that if fi and fj make different predictions infinitely often, then their scores actually diverge. EvOp follows the prediction of the fi chosen by minimaxing this score, i.e., it copies the fi that has the smallest worst-case score relative to any other fj. Pseudocode for EvOp is given by \Algevop.
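To make the control flow concrete, here is a minimal executable sketch of EvOp. The encoding is our own (observations as dicts of revealed outcomes, 0-based indexing, a truncated search over m), and it elides the unbounded max/min searches, so it illustrates the scoring logic rather than reproducing the algorithm exactly:

```python
def make_evop(forecasters, loss, rho, eps=0.5, m_max=5):
    """Sketch of EvOp. obs[k] is a dict {t: x_t} of outcomes revealed at
    step k, so `t in obs[k]` plays the role of t ∈ dom(o_k); forecasters
    map an observation prefix to a prediction (float) or None; loss is
    assumed rho-strongly-convex."""

    def testseq(obs, i, j, m):
        # Greedy independent subsequence of times at which f_i and f_j
        # disagree by more than 1/m, each time taken only once feedback
        # on the previous one has arrived.
        s, t, waiting = [], None, False
        for k in range(len(obs)):
            if waiting and t in obs[k]:
                s.append(t)
                waiting = False
            else:
                yi, yj = forecasters[i](obs[:k]), forecasters[j](obs[:k])
                if yi is not None and yj is not None and abs(yi - yj) > 1 / m:
                    t, waiting = k, True
        return s

    def outcome(obs, t):
        return next(obs[k][t] for k in range(len(obs)) if t in obs[k])

    def relscore(obs, i, j, m):
        # f_i's loss minus f_j's loss on the subsequence, with the small
        # bias term rho*eps/(2 m^2) favoring the forecaster being judged.
        return sum(loss(outcome(obs, t), forecasters[i](obs[:t]))
                   - loss(outcome(obs, t), forecasters[j](obs[:t]))
                   - rho * eps / (2 * m * m)
                   for t in testseq(obs, i, j, m))

    def maxscore(obs, i):
        # The true max ranges over all j and m; m = 0 contributes i - j
        # (relscore is then 0, since 1/0 = ∞), and we truncate m.
        cands = [i - j for j in range(len(forecasters))]
        cands += [i - j - m + relscore(obs, i, j, m)
                  for j in range(len(forecasters))
                  for m in range(1, m_max + 1)]
        return max(cands)

    def predict(obs):
        live = [i for i, f in enumerate(forecasters) if f(obs) is not None]
        return forecasters[min(live, key=lambda i: maxscore(obs, i))](obs)

    return predict

# Toy run: outcomes are always 1.0, revealed with delay 1; the constant-1
# forecaster is Bayes-optimal here and EvOp copies it.
obs = [{}] + [{k - 1: 1.0} for k in range(1, 12)]
predict = make_evop([lambda o: 1.0, lambda o: 0.0],
                    lambda x, y: (x - y) ** 2, rho=2.0)
assert predict(obs) == 1.0
```

The truncated (j, m) search and the toy environment are our own choices; the actual algorithm searches until the finite bounds from the termination argument are exhausted.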
To see that the max step terminates, note that it can be computed by checking only finitely many j and m: relscoren(i, j, m) is bounded above by ∑_{k=1}^{|s|} L(x_{s_k}, y^i_{s_k}), so all (j, m) pairs such that j + m exceeds this value may be discarded. To see that the min step terminates, note that it can be computed by checking only finitely many i (assuming that at least one f is defined on o≺n): when m = 0, testseqn(i, j, m) is empty, so relscoren(i, j, 0) = 0, and thus with j = 1 and m = 0, maxscoren(i) is at least i − 1. Therefore, after finding the smallest k such that fk is defined on o≺n, the min step need only continue searching up through i = maxscoren(k) + 1.
EvOp gets around the problems of \Secproblem by comparing forecasters only on greedily-constructed independent subsequences of the outcomes. Note that if the delay between prediction and feedback grows quickly, these subsequences might be very sparse. For example, in the environment P# of \Secproblem, the independent subsequence will have at least 10i timesteps between the i−1st element in the subsequence and the next. This technique allows EvOp to converge on Bayes-optimal behavior, but it also means that it may do so very slowly (if the subsequence is very sparse). Under certain assumptions about the speed with which delays grow and the frequency with which forecasters disagree, it is possible to put bounds on how quickly EvOp converges on Bayes-optimal behavior, as discussed in \Secbounds. However, these bounds are quite weak.
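The compounding sparsity is easy to visualize. In the toy sketch below (the linear delay function is our own choice, not the paper's P#), each element of an independent subsequence must wait out the feedback delay of the previous one, so the gaps multiply:

```python
def subsequence_times(delay, k):
    """Earliest possible times of an independent subsequence when feedback
    on a prediction made at time t arrives only at time t + delay(t)."""
    times, t = [], 1
    for _ in range(k):
        times.append(t)
        t = t + delay(t)   # must wait for feedback before the next element
    return times

print(subsequence_times(lambda t: 10 * t, 5))   # → [1, 11, 121, 1331, 14641]
```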
### 4.1 Proof of \Thmevop
To prove \Thmevop we need two lemmas, which, roughly speaking, say that (1) if fz is Bayes-optimal then maxscoren(z) is bounded; and (2) if fj is not Bayes-optimal and some fz∈F is, then maxscoren(j) goes to infinity. From there, the proof is easy.
In what follows, let fz be a Bayes-optimal forecaster (as per \Defoptimal) that makes infinitely many predictions all of which are almost surely eventually revealed—that is, such that fz(o≺n) is almost surely defined infinitely often, and whenever it is defined, oi(n) is almost surely defined for some i. Let z be the index of fz in the enumeration over F. In general, we will write yin for fi(o≺n) when it is defined.
**Lemma** (\Lembounded)**.** If fz is Bayes-optimal and makes infinitely many predictions, all of which are almost surely eventually revealed, then with probability 1, maxscoren(z) is bounded.
###### Proof.
For all j, relscoren(z, j, 0)=0, because testseqn(z, j, 0) never outputs. Thus, maxscoren(z) is bounded below by z−1 (consider the case where j=1 and m=0) and bounded above by z−j−m+relscoren(z, j, m). When m=0 this is bounded above by z−j, so it suffices to show that there is almost surely some bound B such that relscoren(z, j, m)−j−m is bounded above by B for every j and m≥1.
Intuitively, in expectation, relscoren(z, j, m) should either be finite or diverge to −∞, because fz is Bayes-optimal and is only being compared to other forecasters on independent subsequences. We will prove not only that it’s bounded above in expectation, but that it is bounded above with probability 1. To do this we use \Lemjessica in \Appjessica, which (roughly speaking) says that something which is zero in expectation, and which has “not too much” variance in expectation, can’t get too far from zero in fact.
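The shape of this concentration bound is easy to sanity-check numerically. The sketch below uses toy increments of our own choosing that satisfy the lemma's hypotheses (v_i = 1 and r_i uniform on [−a, a], so E[r_i] = 0 and |r_i| ≤ a√v_i) and compares the empirical tail to exp(−2λ/a²):

```python
import math
import random

def tail_probability(n_steps, lam, a, trials=20000, seed=0):
    """Empirical P( sum_i (r_i - v_i) >= lam ) for a toy process meeting
    the lemma's hypotheses: v_i = 1 and r_i uniform on [-a, a]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        total = sum(rng.uniform(-a, a) - 1.0 for _ in range(n_steps))
        hits += total >= lam
    return hits / trials

a, lam = 2.0, 3.0
bound = math.exp(-2.0 * lam / (a * a))   # the lemma's bound, about 0.22
assert tail_probability(20, lam, a) <= bound
```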
Fix j, m≥1, and λ; we will bound the probability that relscoren(z, j, m) ≥ λ. Let s = s1s2… be the outputs of testseq∞(z, j, m), that is, the entire greedily-generated sparse independent subsequence of outputs on which both fz and fj make predictions that differ by more than 1/m (which could be generated by running testseqn(z, j, m) on larger and larger n). s may or may not be finite.
Because L is strongly convex,
$$L(x_k, y^j_k) \;\ge\; L(x_k, y^z_k) + \nabla_y L(x_k, y^z_k)\cdot(y^j_k - y^z_k) + \frac{\rho}{2}\bigl\|y^j_k - y^z_k\bigr\|^2, \tag{5}$$
where ∇y takes the gradient of L with respect to the prediction, ρ is the strong convexity constant of L, and ∥⋅∥ is the l2 norm. In other words, the loss of fj in any given round is at least that of fz plus a linear term (which, note, is related to the Lipschitz constant of L) plus a quadratic term. Rearranging this inequality,
$$L(x_k, y^z_k) - L(x_k, y^j_k) \;\le\; -\nabla_y L(x_k, y^z_k)\cdot(y^j_k - y^z_k) - \frac{\rho}{2}\bigl\|y^j_k - y^z_k\bigr\|^2. \tag{6}$$
We will show that the sum of the right-hand side for k=1,2,…,n is bounded, using \Lemjessica.
\Lemjessica requires a sequence of random variables G1H1G2H2… that form a Markov chain, and two real-valued functions v and r defined on the Gi and the Hi respectively, such that \EE[r(Hi)∣Gi] ≤ 0, and |r(Hi)| ≤ a√v(Gi) for some constant a. Intuitively, these constraints say that r is at most zero in expectation, and that its absolute value is bounded in terms of v. \Lemjessica then gives us a bound on the probability that ∑_{i=1}^{n}(r(Hi) − v(Gi)) ≥ λ. We use it with r as the first term on the right-hand side of \Eqnl, and v as the negative of the second. Roughly, r can be thought of as a first-order approximation to the amount by which fj did better than expected (a “residual”), and v as a bound on how wildly r can swing (a “variance”).
Let Gi be o≺si (strictly speaking, this is ill-defined, since si is itself a random variable; we can make it precise by defining Gi = (si, o≺si), noting that si can be determined from o≺si alone), and let Hi be o≺k where k is the least time after si such that si ∈ dom(ok). Such a k exists, because fz only makes predictions that, with probability 1, are eventually revealed. Intuitively, our Markov chain alternates between elements of s and the times when those elements were revealed. For i>|s|, let Gi=Hi=o≺∞, the (infinite) combination of all observations.
Define r to be the function r(Hi) = −∇yL(x_{s_i}, y^z_{s_i})⋅(y^j_{s_i} − y^z_{s_i}) when i ≤ |s|, and 0 otherwise. Observe that this value can be calculated from Hi, fz, and fj, because Hi = o≺k, with k > si and x_{s_i} ≔ o≺k(si) defined.
Define v to be the function v(Gi) = (ρ/2)∥y^j_{s_i} − y^z_{s_i}∥² when i ≤ |s|, and ρ/(2m²) otherwise, which can be calculated from Gi, fz, and fj, because Gi is just o≺si. Note that \EE[r(Hi)∣Gi] = 0, because fz is a Bayes-optimal predictor: it minimizes expected loss, so the gradient appearing in r(Hi) is zero in expectation for all i. Note also that because L is Lipschitz, |r(Hi)| ≤ κ∥y^j_{s_i} − y^z_{s_i}∥, where κ is the Lipschitz constant of L. Thus, with a = κ√2/√ρ, |r(Hi)| ≤ a√v(Gi). Therefore, r and v meet the conditions of \Lemjessica, so for all M,
$$\mathbb{P}\left(\sum_{i=1}^{n}\bigl(r(H_i) - v(G_i)\bigr) \ge M\right) \le \exp\bigl(-\rho\kappa^{-2} M\bigr), \tag{7}$$
which goes to 0 as M goes to infinity. We need a bound that forces it to 0 as n→∞. In what follows, we write b = ρκ⁻² for conciseness.
Observe that relscoren(z, j, m) ≤ ∑_{i=1}^{t_n}(r(Hi) − v(Gi) − ρε/(2m²)), where tn is the number of times testseqn(z, j, m) outputs. Thus, the probability that relscoren(z, j, m) ≥ Λ for any Λ is upper-bounded by the probability that, for some n,
$$\left(\sum_{i=1}^{t_n}\bigl(r(H_i) - v(G_i)\bigr)\right) - t_n\,\frac{\rho\varepsilon}{2m^2} \;\ge\; \Lambda. \tag{8}$$
For any given n and t, applying inequality \eqlambda with Λ + tρε/(2m²) for M,
$$\mathbb{P}\left(\sum_{i=1}^{t_n}\bigl(r(H_i) - v(G_i)\bigr) \ge \Lambda + t\,\frac{\rho\varepsilon}{2m^2}\right) \le \exp\left(-b\left(\Lambda + t\,\frac{\rho\varepsilon}{2m^2}\right)\right). \tag{9}$$
We now see the function of the ρε/(2m²) term in relscore: it adds a tiny bias in favor of the forecaster being judged, so that the longer a contender waits to prove itself, the more it has to prove. Equation \eqtbound says that, because fj never proves itself too much in expectation, the probability that fz’s score relative to fj swings strongly in fj’s favor gets lower as tn gets larger.
Note that relscoren(z, j, m) only depends on n through tn: if tn1 = tn2 for some n1 and n2, then relscoren1(z, j, m) = relscoren2(z, j, m). Thus, \PP(∃n : relscoren(z, j, m) > Λ) can be bounded by summing only over the possible values t of tn:
$$\sum_{t=0}^{\infty}\exp\left(-b\Lambda + t\left(-\frac{b\rho\varepsilon}{2m^2}\right)\right) = \frac{\exp(-b\Lambda)}{1 - \exp\left(-\frac{b\rho\varepsilon}{2m^2}\right)}, \tag{10}$$
and m≥1, so
$$\mathbb{P}\bigl(\exists n : \mathrm{relscore}_n(z, j, m) \ge \Lambda\bigr) \le \frac{\exp(-b\Lambda)}{1 - \exp(-b\rho\varepsilon/2)}. \tag{11}$$
Applying inequality \eqrelbound with λ + m + j for Λ and summing over j and m ≥ 1, we see that the probability that relscoren(z, j, m) ≥ λ + m + j for some n, j, and m ≥ 1 is at most
$$\sum_{m=1}^{\infty}\sum_{j=1}^{\infty}\frac{\exp(-b(\lambda + m + j))}{1 - \exp(-b\rho\varepsilon/2)} = \frac{\exp(-b(\lambda + 2))}{(1 - \exp(-b\rho\varepsilon/2))(1 - \exp(-b))^2}. \tag{12}$$
This goes to 0 as λ goes to ∞. Therefore, with probability 1, there exists some bound B such that relscoren(z,j,m)−m−j<B for all j and m≥1. Thus, maxscoren(z) is almost surely bounded.
∎
**Lemma** (\Leminfinite)**.** If fz is Bayes-optimal and makes infinitely many predictions, all of which are almost surely eventually revealed, then for any fj, with probability 1: if y^z_i ≔ fz(o≺i) and y^j_i ≔ fj(o≺i) are both defined at the same time i infinitely often, and if ∥y^z_i − y^j_i∥ ≥ δ infinitely often for some δ > 0, then
$$\lim_{n\to\infty}\mathrm{maxscore}_n(j) = \infty. \tag{13}$$
Roughly speaking, the proof runs as follows. Choose m such that 1/m < δ. It suffices to show that relscoren(j, z, m) → ∞ as n → ∞. The ∑(L(x_k, y^j_k) − L(x_k, y^z_k)) portion goes to infinity in expectation, and also goes to infinity with probability 1 by \Lemjessica. It remains to show that the ρε/(2m²) terms working in fj’s favor are not sufficient to prevent the total from going to infinity, which can be done by showing that the differences between L(x_k, y^j_k) and L(x_k, y^z_k) are at least ρ/(2m²) > ρε/(2m²) in expectation, and appealing again to \Lemjessica. The proof proceeds similarly to the proof of \Lembounded, so we leave the details to \Appinfinite.
With these lemmas in place, we now prove \Thmevop: EvOp is eventually optimal.
###### Proof.
Let fz be Bayes-optimal and defined infinitely often, such that everything it predicts is almost surely eventually revealed. It suffices to show that, with probability 1, if fz(o≺n) is defined then
$$\lim_{n\to\infty}\bigl\|\mathrm{EvOp}(o_{\prec n}) - f_z(o_{\prec n})\bigr\| = 0. \tag{14}$$
By \Lembounded, maxscoren(z) is bounded with probability 1. Let B be this bound. Note that there are only finitely many i such that maxscoren(i)≤B, for the same reason that the min step always terminates. For each of those i, either fi and fz converge to the same prediction, or they only make finitely many predictions in common, or (by \Leminfinite) maxscoren(i)→∞. The latter contradicts the assumption that maxscoren(i)≤B. If fi and fz only make finitely many predictions in common, then for sufficiently large n, fi is not defined and so will not be selected. Thus, we need only consider the case where fi and fz converge to the same predictions whenever they both make predictions. In this case, EvOp(o≺n) is choosing among finitely many forecasters all of which converge to fz(o≺n), so EvOp(o≺n) must converge to fz(o≺n).
∎
### 4.2 Bounds
The speed with which EvOp converges to optimal behavior on a subsequence depends on both (1) the sparseness of independent subsequences in the outcomes; and (2) the frequency with which forecasters make claims that differ.
Specifically, assume that all forecasters are defined everywhere and disagree infinitely often, and that F is finite. We can show that, given a (potentially fast-growing) function h bounding how long it takes before predictors disagree with each other, another (potentially fast-growing) function g bounding the delay in feedback, and a probability p, the time it takes before EvOp has converged on fz with probability at least 1−p is proportional to h∘g iterated a number of times proportional to log(1/p). (Note that h and g are not uniform bounds; g(n) is the maximum delay between the nth prediction and feedback on the nth prediction, and delays may grow ever larger as n increases.)
**Theorem** (\Thmbounds)**.** Given h, g, a Bayes-optimal fz, and a probability p, there is an N ∝ (h∘g)^{log(1/p)}(1) such that, with probability at least 1−p, for all n ≥ N,
$$\mathrm{EvOp}(o_{\prec n}) = f_z(o_{\prec n}). \tag{15}$$
We prove \Thmbounds in \Appbounds.
To call these bounds “weak” is an understatement. In the case where the outcomes are generated by running a universal Turing machine U on different inputs, g is infinite, because U will sometimes fail to output. It is possible to achieve *much* better bounds given certain simplifying assumptions, such as delays that are finite in expectation (Joulani:2013). However, it is not yet clear which simplifying assumptions to use, or what bounds to ask for, in the setting with ever-growing delays.
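To get a feel for how quickly the convergence horizon blows up even when g is finite, the sketch below iterates h∘g for toy bounds of our own choosing (quadratic feedback delays, linear disagreement gaps):

```python
def horizon(h, g, k, start=1):
    """Compute (h∘g)^k(start) = h(g(h(g(...start...)))) with k calls each."""
    n = start
    for _ in range(k):
        n = h(g(n))
    return n

g = lambda n: n * n    # feedback on the n-th prediction within n^2 steps
h = lambda n: 10 * n   # forecasters disagree again within 10x steps

print([horizon(h, g, k) for k in range(1, 5)])
# → [10, 1000, 10000000, 1000000000000000]
```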
5 The Deterministic Setting
----------------------------
Our motivation for studying online learning with unbounded delays in a stochastic setting is that this gives us a simplified model of the problem of predicting large computations from observations of smaller ones. We have already seen one instance of an issue in the stochastic setting which looks likely to have an analog in the deterministic setting. In \Secproblem we gave the example of a deterministic “coin” that appears more and more often in larger and larger computations, which might (for instance) be a common subsystem in the environment of a physical simulation. If there are many correlated subsystems that appear “sufficiently random” to all forecasters, then forecasters might follow the strategies of f1 and f0 in \Secproblem and achieve regular large swings in their total loss. Intuitively, the techniques used in \Algevop to handle the problem in the stochastic case should carry over well to the deterministic case, but any attempt to formalize this intuition depends on what it means for a deterministic sequence to be “sufficiently random.”
For that we turn to algorithmic information theory, a field founded by Martin:1966 which studies the degree and extent to which fixed bitstrings can be called “random.” In their canonical text, Downey:2010 give three different definitions of algorithmic randomness and show them all to be equivalent. The oldest of the three, given by Martin:1966, is rooted in the idea that an algorithmically random sequence should satisfy all computably verifiable properties that hold with probability 1 on randomly generated sequences.
It is with this definition in mind that we note that \Lembounded and \Leminfinite are both stated as properties that are true of randomly generated sequences with probability 1. \Lembounded says that if the outputs of the environment are generated randomly, then with probability 1, the score of a Bayes-optimal predictor does not go to infinity. \Leminfinite says that if the outputs of the environment are generated randomly, then with probability 1, a predictor that disagrees by δ>0 with a Bayes-optimal predictor infinitely many times has its score going to infinity. Both these computable properties hold for random sequences with probability 1, so they hold for Martin-Löf-random sequences.
This means that if F is the class of all Turing machines, and EvOp is predicting an algorithmically random sequence (such as Chaitin’s Ω, the fraction of Turing machines which halt), then \Thmevop holds and EvOp will converge on optimal predictions on subsequences of that sequence. However, this does us no good: There are no computable patterns in Chaitin’s Ω; computable forecasters won’t be able to do any better than predicting a 50% chance of a 1. Besides, the goal is not to predict uncomputable sequences by running all Turing machines. The goal is to predict large computations using efficient (e.g., polynomial-time) experts.
What we need is a notion of algorithmic randomness *with respect to a restricted class of experts*. For example, if F is the class of polynomial-time forecasters, we would like a notion of sequences which are algorithmically random with respect to polynomial-time forecasters.
The authors do not yet know of a satisfactory definition of algorithmic randomness with respect to resource constraints. However, the obvious analog of Martin-Löf’s original definition (Martin:1966) is that a sequence should be defined as algorithmically random with respect to a class of bounded experts if, and only if, it satisfies all properties that hold of randomly generated sequences with probability 1 *and that can be checked by one of those experts*. On sequences that are algorithmically random with respect to F in this sense, \Lembounded and \Leminfinite must apply: Assume fz is a Bayes-optimal predictor on a subsequence that is algorithmically random with respect to F; any forecaster fj∈F that outperforms fz infinitely often would be identifying a way in which the sequence fails to satisfy a property that randomly generated sequences satisfy with probability 1, which contradicts the assumption. This gives strong reason to expect that EvOp would be eventually optimal when predicting sequences that are algorithmically random with respect to F, even though formalizing such a notion remains an open problem.
Even so, this does not mean that EvOp would perform *well* at the actual task of predicting large computations from the observation of small ones. Eventual optimality provides no guarantees about the ability of the algorithm to converge on good but non-optimal predictors, and the bounds that we have on how long it takes EvOp to converge on good behavior are weak (to say the least).
Furthermore, there are other notions of what it means to “predict computations well” that are not captured by eventual optimality. For example, Demski:2012a discusses the problem of computably assigning probabilities to the outputs of computations and refining them in such a way that they are “coherent,” drawing on inspiration from the field of mathematical logic that dates at least back to Gaifman:1964. The intuition is that given two statements “this computation will halt and output 1” and “this computation will fail to halt or output something besides 1,” a good reasoner should assign those claims probabilities that sum to roughly 1. We have no reason to expect that EvOp has any such property.
6 Conclusions
--------------
We have studied online learning in a setting where delays between prediction and observation may be unbounded, in an attempt to explore the general problem of predicting the behavior of large computations from observations of many small ones. We found that, in the stochastic setting, the unbounded delays give rise to difficulties: total regret and average regret are not good measures of forecaster success, and consistency is not possible to achieve in general. However, it is possible to converge on good predictions by comparing forecasters according to their performance only on sparse, independent subsequences of the observations, and we have reason to expect that some of the techniques used to achieve good performance in the stochastic setting will carry over into the deterministic setting. We have proposed an algorithm, EvOp, that converges to optimal behavior. It is not a practical algorithm, but it does give a preliminary model of online learning in the setting where the delay between prediction and feedback is ever-growing.
Our results suggest a few different paths for future research. EvOp handles the problem of learning in the face of potentially unbounded delays by comparing forecasters only on subsequences that are potentially very sparse, and this means that it converges to optimal behavior quite slowly. Speeding up convergence without falling prey to the problems described in \Secproblem might prove difficult. Furthermore, EvOp only guarantees convergence on forecasters that are Bayes-optimal; it is not yet clear how to converge on the best available forecaster (even if it is non-optimal) in the face of unbounded delays. As mentioned in \Secdeterministic, a formal notion of algorithmic randomness with respect to a bounded class of experts would make it easier to study the problem of using online learning to predict the behavior of large computations in a deterministic setting. EvOp is only a first step towards a predictor that can learn to predict the behavior of large computations from the observation of small ones, and the problem seems ripe for further study.
Appendix A Proof of \Lemjessica
--------------------------------
**Lemma** (\Lemjessica)**.** Let G and H be sets, and let G1, H1, G2, H2, …, Gn, Hn be random variables forming a Markov chain (with each Gi ∈ G and Hi ∈ H). Let there be functions v : G → R≥0 and r : H → R, with |r(Hi)| ≤ a√v(Gi) for some constant a, and \EE[r(Hi)∣Gi] ≤ 0. Let λ > 0. Then

$$\mathbb{P}\left(\sum_{i=1}^{n}\bigl(r(H_i) - v(G_i)\bigr) \ge \lambda\right) \le \exp\left(-\tfrac{2}{a^2}\lambda\right). \tag{16}$$
###### Proof.
This proof closely follows the standard proof of Azuma’s inequality, given by, e.g., DasGupta:2011.
Let b = 2/a². Using Markov’s inequality:
$$\mathbb{P}\left(\sum_{i=1}^{n}\bigl(r(H_i) - v(G_i)\bigr) \ge \lambda\right) = \mathbb{P}\left(\exp\left(b\sum_{i=1}^{n}\bigl(r(H_i) - v(G_i)\bigr)\right) \ge \exp(b\lambda)\right) \le \exp(-b\lambda)\,\mathbb{E}\left[\exp\left(b\sum_{i=1}^{n}\bigl(r(H_i) - v(G_i)\bigr)\right)\right]. \tag{17}$$
To bound the expectation, we will inductively show that for all m≤n,
$$\mathbb{E}\left[\exp\left(b\sum_{i=1}^{m}\bigl(r(H_i) - v(G_i)\bigr)\right)\right] \le 1. \tag{18}$$
When m=0, this is trivial. Otherwise:
$$\mathbb{E}\left[\exp\left(b\sum_{i=1}^{m}\bigl(r(H_i) - v(G_i)\bigr)\right)\right] = \mathbb{E}\left[\exp\left(b\sum_{i=1}^{m-1}\bigl(r(H_i) - v(G_i)\bigr)\right)\,\mathbb{E}\Bigl[\exp\bigl(b(r(H_m) - v(G_m))\bigr) \Bigm| G_m\Bigr]\right] \le \mathbb{E}\left[\exp\left(b\sum_{i=1}^{m-1}\bigl(r(H_i) - v(G_i)\bigr)\right)\right], \tag{19}$$

where the equality uses the tower rule and the Markov property (v(Gm) is Gm-measurable, and conditioned on Gm, the term r(Hm) is independent of the earlier terms), and the inequality holds because Hoeffding’s lemma (with |r(Hm)| ≤ a√v(Gm), \EE[r(Hm)∣Gm] ≤ 0, and b = 2/a²) gives \EE[exp(b r(Hm))∣Gm] ≤ exp(b²a²v(Gm)/2) = exp(b v(Gm)).
By the inductive assumption, this quantity is no more than 1, so the inductive argument goes through. Using this bound on the expectation, the stated upper bound on the original probability of interest follows.
∎
Appendix B Proof of \Leminfinite
---------------------------------
This appendix proves \Leminfinite.
###### Proof.
Let 1/m<δ. It suffices to show that with probability 1,
$$\lim_{n\to\infty}\mathrm{relscore}_n(j, z, m) = \infty. \tag{20}$$
Write tn for the number of times that testseqn(j, z, m) outputs, and note that tn→∞ as n→∞ because fj and fz disagree by more than δ infinitely often. We will show that relscoren(j, z, m) is bounded below by a bound proportional to tn, which means that relscoren(j, z, m) must diverge to infinity.
Let s=testseq∞(j,z,m). Define G1H1G2H2…, r(Hi), and v(Gi) as in the proof of \Lembounded. Recall that r(Hi)−v(Gi) is an upper bound for L(xi,yzi)−L(xi,yji), which means that v(Gi)−r(Hi) is a lower bound for L(xi,yji)−L(xi,yzi). Therefore, it suffices to show that, for some α>0,
$$\lim_{N\to\infty}\mathbb{P}\left(\forall n > N : \sum_{i=1}^{t_n}\left(v(G_i) - r(H_i) - \frac{\rho\varepsilon}{2m^2}\right) \ge \alpha t_n\right) = 1. \tag{21}$$
Observe that v(Gi) ≥ ρ/(2m²) for all i, so the positive v(Gi) terms going against fj more than compensate for the negative ρε/(2m²) terms going in its favor. Because ε < 1, only a (1+ε)/2 portion of each v(Gi) is needed to cancel out the ρε/(2m²) terms:
$$\mathbb{P}\left(\sum_{i=1}^{t_n}\left(v(G_i) - r(H_i) - \frac{\rho\varepsilon}{2m^2}\right) \ge \alpha t_n\right) \ge \mathbb{P}\left(\sum_{i=1}^{t_n}\left(\frac{1-\varepsilon}{2}v(G_i) - r(H_i)\right) \ge t_n\left(\alpha + \frac{\rho(\varepsilon-1)}{4m^2}\right)\right). \tag{22}$$
Now we apply \Lemjessica. \EE[r(Hk)∣Gk] is still 0. With a = (κ√2/√ρ)⋅√(2/(1−ε)),
$$|r(H_k)| \le a\sqrt{\frac{1-\varepsilon}{2}v(G_k)}. \tag{23}$$
Therefore, by \Lemjessica we have that
$$\mathbb{P}\left(\sum_{i=1}^{t_n}\left(r(H_i) - \frac{1-\varepsilon}{2}v(G_i)\right) \ge M\right) \le \exp\left(-\frac{\rho(1-\varepsilon)M}{2\kappa^2}\right). \tag{24}$$
Choose α = ρ(1−ε)/(8m²) and set M = −t(α + ρ(ε−1)/(4m²)) = tρ(1−ε)/(8m²) to get:
$$\mathbb{P}\left(\sum_{i=1}^{t_n}\left(\frac{1-\varepsilon}{2}v(G_i) - r(H_i)\right) \le t\,\frac{\rho(\varepsilon-1)}{8m^2}\right) \le \exp\left(-t\,\frac{\rho^2(1-\varepsilon)^2}{16m^2\kappa^2}\right). \tag{25}$$
We write c = ρ²(1−ε)²/(16m²κ²) for conciseness. Observe that
$$\mathbb{P}\left(\exists n \ge N : \sum_{i=1}^{t_n}\left(v(G_i) - r(H_i) - \frac{\rho\varepsilon}{2m^2}\right) \le \alpha t_n\right) \le \sum_{t=t_N}^{\infty}\exp(-tc) = \frac{\exp(-t_N c)}{1 - \exp(-c)}. \tag{26}$$
If |s| = ∞ then the right-hand side almost surely goes to zero as N → ∞, in which case, with probability 1, there exists an N such that
$$\forall n > N : \sum_{i=1}^{t_n}\left(v(G_i) - r(H_i) - \frac{\rho\varepsilon}{2m^2}\right) \ge \alpha t_n. \tag{27}$$
Thus if fz and fj disagree by more than δ infinitely often, then with probability 1, eventually relscoren(j,z,m) grows proportionally to tn. Therefore, with probability 1,
$$\lim_{n\to\infty}\mathrm{relscore}_n(j, z, m) = \infty, \tag{28}$$
so maxscoren(j) almost surely diverges to ∞ as n→∞.
∎
Appendix C Proof of \Thmbounds
-------------------------------
Let fz be a Bayes-optimal predictor and assume F is finite. Assume we have an increasing function h such that for some m and every fj, for all times t, there exists a t < t′ < h(t) such that y^z_{t′} ≔ fz(o≺t′) and y^j_{t′} ≔ fj(o≺t′) are both defined and ∥y^z_{t′} − y^j_{t′}∥ > 1/m. Assume we have an increasing function g such that o≺g(t)(t) is always defined. ∘ denotes function composition; i.e., (h∘g)^n(1) denotes h(g(…h(g(1))…)) with n calls to each of h and g.
This appendix proves \Thmbounds.
###### Proof.
Observe that testseqn(j, z, m) outputs at least t terms for any t such that (h∘g)^t(1) ≤ n. In the proof of \Lembounded, we prove that the probability that maxscoren(z) ≥ λ for any n is at most
$$\frac{\exp(-b(\lambda + 2 - z))}{(1 - \exp(-b\rho\varepsilon/2))(1 - \exp(-b))^2}. \tag{29}$$
In the proof of \Leminfinite, we prove that the probability that
$$\mathrm{maxscore}_n(j) \le \alpha t - m - z + j \tag{30}$$
for any n such that testseqn(j, z, m) outputs at least t terms is at most
$$\frac{\exp(-tc)}{1 - \exp(-c)}. \tag{31}$$
Combining these, we get that for any T, letting t be the maximal value such that (h∘g)^t(1) ≤ T and setting λ = αt − m − z + |F|, with probability at least
$$1 - \left(\frac{\exp(-b(\lambda + 2 - z))}{(1 - \exp(-b\rho\varepsilon/2))(1 - \exp(-b))^2} + |F|\,\frac{\exp(-tc)}{1 - \exp(-c)}\right), \tag{32}$$
EvOp(o≺n)=fz(o≺n) for all times after T. This also gives us a weak bound on total loss: Because L is both Lipschitz and strongly convex, it is bounded. Let L be the bound. Then with probability as per \Eqnb1, the total loss never goes above LT.
Reversing this process, we also get that for any p, if we let t be such that
$$\frac{\exp(-b(\alpha t - m - z + |F| + 2 - z))}{(1 - \exp(-b\rho\varepsilon/2))(1 - \exp(-b))^2} + |F|\,\frac{\exp(-ct)}{1 - \exp(-c)} < p, \tag{33}$$
then with probability at least 1−p, for all n ≥ (h∘g)^t(1), EvOp(o≺n) = fz(o≺n).
∎
Acknowledgements
----------------
Thanks to Jessica Taylor for the proof of \Lemjessica, and to Benya Fallenstein for helpful discussions.
This research was supported as part of the Future of Life Institute (futureoflife.org) FLI-RFP-AI1 program, grant #2015-144576.
\printbibliography
5 The Deterministic Setting
----------------------------
Our motivation for studying online learning with unbounded delays in a stochastic setting is that this gives us a simplified model of the problem of predicting large computations from observations of smaller ones. We have already seen one instance of an issue in the stochastic setting which looks likely to have an analog in the deterministic setting. In \Secproblem we gave the example of a deterministic “coin” that appears more and more often in larger and larger computations, which might (for instance) be a common subsystem in the environment of a physical simulation. Intuitively, if there are many correlated subsystems that appear “sufficiently random” to all forecasters, then forecasters might follow the strategies of f1 and f0 in \Secproblem and achieve regular large swings in their total loss. Intuitively, the techniques used in \Algevop to handle the problem in the stochastic case should well carry over to the deterministic case, but any attempt to formalize this intuition depends on what it means for a deterministic sequence to be “sufficiently random.”
For that we turn to algorithmic information theory, a field founded by Martin:1966 which studies the degree and extent to which fixed bitstrings can be called “random.” In their canonical text, Downey:2010 give three different definitions of algorithmic randomness and show them all to be equivalent. The oldest of the three, given by Martin:1966, is rooted in the idea that an algorithmically random sequence should satisfy all computably verifiable properties that hold with probability 1 on randomly generated sequences.
It is with this definition in mind that we note that \Lembounded and \Leminfinite are both stated as properties that are true of randomly generated sequences with probability 1. \Lembounded says that if the outputs of the environment are generated randomly, then with probability 1, the score of a Bayes-optimal predictor does not go to infinity. \Leminfinite says that if the outputs of the environment are generated randomly, then with probability 1, a predictor that disagrees by δ>0 with a Bayes-optimal predictor infinitely many times has its score going to infinity. Both these computable properties hold for random sequences with probability 1, so they hold for Martin-Löf-random sequences.
This means that if F is the class of all Turing machines, and EvOp is predicting an algorithmically random sequence (such as Chaitin’s Ω, the fraction of Turing machines which halt), then \Thmevop holds and EvOp will converge on optimal predictions on subsequences of that sequence. However, this does us no good: There are no computable patterns in Chaitin’s Ω; computable forecasters won’t be able to do any better than predicting a 50% chance of a 1. Besides, the goal is not to predict uncomputable sequences by running all Turing machines. The goal is to predict large computations using efficient (e.g., polynomial-time) experts.
What we need is a notion of algorithmic randomness *with respect to a restricted class of experts*. For example, if F is the class of polynomial-time forecasters, we would like a notion of sequences which are algorithmically random with respect to polynomial-time forecasters.
The authors do not yet know of a satisfactory definition of algorithmic randomness with respect to resource constraints. However, the obvious analog of Martin-Löf’s original definition (Martin:1966) is that a sequence should be defined as algorithmically random with respect to a class of bounded experts if, and only if, it satisfies all properties that hold of randomly generated sequences with probability 1 *and that can be checked by one of those experts*. On sequences that are algorithmically random with respect to F in this sense, \Lembounded and \Leminfinite must apply: Assume fz is a Bayes-optimal predictor on a subsequence that is algorithmically random with respect to F; any forecaster fj∈F that outperforms fz infinitely often would be identifying a way in which the sequence fails to satisfy a property that randomly generated sequences satisfy with probability 1, which contradicts the assumption. This gives strong reason to expect that EvOp would be eventually optimal when predicting sequences that are algorithmically random with respect to F, even though formalizing such a notion remains an open problem.
Even so, this does not mean that EvOp would perform *well* at the actual task of predicting large computations from the observation of small ones. Eventual optimality provides no guarantees about the ability of the algorithm to converge on good but non-optimal predictors, and the bounds that we have on how long it takes EvOp to converge on good behavior are weak (to say the least).
Furthermore, there are other notions of what it means to “predict computations well” that are not captured by eventual optimality. For example, Demski:2012a discusses the problem of computably assigning probabilities to the outputs of computations and refining them in such a way that they are “coherent,” drawing on inspiration from the field of mathematical logic that dates at least back to Gaifman:1964. The intuition is that given two statements “this computation will halt and output 1” and “this computation will fail to halt or output something besides 1,” a good reasoner should assign those claims probabilities that sum to roughly 1. We have no reason to expect that EvOp has any such property.
6 Conclusions
--------------
We have studied online learning in a setting where delays between prediction and observation may be unbounded, in attempts to explore the general problem of predicting the behavior of large computations from observations of many small ones. We found that, in the stochastic setting, the unbounded delays give rise to difficulties: Total regret and average regret are not good measures of forecaster success, and consistency is not possible to achieve in general. However, it is possible to converge on good predictions by comparing forecasters according to their performance only on sparse and independent subsequences of the observations, and we have reason to expect that some of the techniques used to achieve good performance in the stochastic setting will carry over into the deterministic setting. We have proposed an algorithm EvOp that converges to optimal behavior. It is not a practical algorithm, but it does give a preliminary model of online learning in the setting where the delay between prediction and feedback is ever-growing.
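The core idea of comparing forecasters only on rounds where they meaningfully disagree can be sketched as follows. This is a simplified, delay-free toy: the function name, the squared loss, and the example forecasters are illustrative, not the paper's actual EvOp machinery.

```python
def relative_score(preds_a, preds_b, outcomes, delta, loss):
    """Compare forecaster b against forecaster a only on rounds where their
    forecasts differ by more than delta (a toy, delay-free version of the
    disagreement-restricted comparison idea)."""
    score = 0.0
    for pa, pb, y in zip(preds_a, preds_b, outcomes):
        if abs(pa - pb) > delta:              # restrict to real disagreements
            score += loss(pa, y) - loss(pb, y)  # positive => b is doing better
    return score

sq = lambda p, y: (p - y) ** 2  # squared loss

# b predicts the true bias 0.8; a predicts 0.5; they disagree on every round.
outcomes = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]
a = [0.5] * 10
b = [0.8] * 10
print(relative_score(a, b, outcomes, delta=0.1, loss=sq))  # > 0: b outscores a
```

In the paper's setting the comparison is further restricted to sparse, well-separated rounds so that the delayed observations used are independent; this sketch omits that machinery.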
Our results suggest a few different paths for future research. EvOp handles the problem of learning in the face of potentially unbounded delays by comparing forecasters only on subsequences that are potentially very sparse, and this means that it converges to optimal behavior quite slowly. Speeding up convergence without falling prey to the problems described in \Secproblem might prove difficult. Furthermore, EvOp only guarantees convergence on forecasters that are Bayes-optimal; it is not yet clear how to converge on the best available forecaster (even if it is non-optimal) in the face of unbounded delays. As mentioned in \Secdeterministic, a formal notion of algorithmic randomness with respect to a bounded class of experts would make it easier to study the problem of using online learning to predict the behavior of large computations in a deterministic setting. EvOp is only a first step towards a predictor that can learn to predict the behavior of large computations from the observation of small ones, and the problem seems ripe for further study.
Appendix A Proof of \Lemjessica
--------------------------------
**Lemma** (\Lemjessica). Let $G$ and $H$ be sets, and let $G_1,H_1,G_2,H_2,\dots,G_n,H_n$ be random variables forming a Markov chain (with each $G_i\in G$ and $H_i\in H$). Let there be functions $v:G\to\mathbb{R}_{\ge 0}$ and $r:H\to\mathbb{R}$, with $|r(H_i)|\le a\sqrt{v(G_i)}$ and $E[r(H_i)\mid G_i]\le 0$. Let $\lambda>0$. Then

$$P\Big(\sum_{i=1}^n \big(r(H_i)-v(G_i)\big)\ge\lambda\Big)\le\exp\Big(-\frac{2\lambda}{a^2}\Big) \tag{16}$$
###### Proof.
This proof closely follows the standard proof of Azuma’s inequality, given by, e.g., DasGupta:2011.
Let $b=2/a^2$. Using Markov’s inequality:
$$P\Big(\sum_{i=1}^n \big(r(H_i)-v(G_i)\big)\ge\lambda\Big)=P\Big(\exp\Big(b\sum_{i=1}^n \big(r(H_i)-v(G_i)\big)\Big)\ge\exp(b\lambda)\Big)\le\exp(-b\lambda)\,E\Big[\exp\Big(b\sum_{i=1}^n \big(r(H_i)-v(G_i)\big)\Big)\Big] \tag{17}$$
To bound the expectation, we will inductively show that for all m≤n,
$$E\Big[\exp\Big(b\sum_{i=1}^m \big(r(H_i)-v(G_i)\big)\Big)\Big]\le 1 \tag{18}$$
When m=0, this is trivial. Otherwise:
$$E\Big[\exp\Big(b\sum_{i=1}^{m}\big(r(H_i)-v(G_i)\big)\Big)\Big]=E\Big[\exp\Big(b\sum_{i=1}^{m-1}\big(r(H_i)-v(G_i)\big)\Big)\,e^{-b\,v(G_m)}\,E\big[e^{b\,r(H_m)}\mid G_m\big]\Big]\le E\Big[\exp\Big(b\sum_{i=1}^{m-1}\big(r(H_i)-v(G_i)\big)\Big)\Big] \tag{19}$$

where the equality uses the Markov property, and the inequality applies Hoeffding’s lemma: since $|r(H_m)|\le a\sqrt{v(G_m)}$ and $E[r(H_m)\mid G_m]\le 0$, we have $E[e^{b\,r(H_m)}\mid G_m]\le\exp(b^2a^2v(G_m)/2)=\exp(b\,v(G_m))$ by the choice $b=2/a^2$.
By the inductive assumption, this quantity is no more than 1, so the inductive argument goes through. Using this bound on the expectation, the stated upper bound on the original probability of interest follows.
∎
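As an informal numerical sanity check (not part of the proof), one can simulate a simple instance of the lemma, taking $r_i$ uniform on $\{-c,+c\}$ and $v_i=c^2$, so that $|r_i|\le a\sqrt{v_i}$ with $a=1$ and $E[r_i]=0$; the parameters below are arbitrary.

```python
import math
import random

random.seed(0)

# Toy instance: r_i = +/-c with equal probability, v_i = c^2, a = 1.
# The lemma bounds P(sum_i (r_i - v_i) >= lam) by exp(-2*lam/a^2).
a, c, n, lam = 1.0, 0.1, 100, 1.0
trials = 20000

hits = 0
for _ in range(trials):
    s = sum(random.choice((-c, c)) - c * c for _ in range(n))
    if s >= lam:
        hits += 1

empirical = hits / trials
bound = math.exp(-2 * lam / a ** 2)
print(empirical, "<=", bound)
```

The empirical tail probability comes out well under the bound, as expected (the bound is not tight for this instance).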
Appendix B Proof of \Leminfinite
---------------------------------
See Section 4.1.
###### Proof.
Let $1/m<\delta$. It suffices to show that, with probability 1,

$$\lim_{n\to\infty}\mathrm{relscore}_n(j,z,m)=\infty. \tag{20}$$
Write $t_n$ for the number of times that $\mathrm{testseq}_n(j,z,m)$ outputs, and note that $t_n\to\infty$ as $n\to\infty$ because $f_j$ and $f_z$ disagree by more than $\delta$ infinitely often. We will show that $\mathrm{relscore}_n(j,z,m)$ is bounded below by a quantity proportional to $t_n$, which means that $\mathrm{relscore}_n(j,z,m)$ must diverge to infinity.
Let $s=\mathrm{testseq}_\infty(j,z,m)$. Define $G_1,H_1,G_2,H_2,\dots$, $r(H_i)$, and $v(G_i)$ as in the proof of \Lembounded. Recall that $r(H_i)-v(G_i)$ is an upper bound for $L(x_i,y^z_i)-L(x_i,y^j_i)$, which means that $v(G_i)-r(H_i)$ is a lower bound for $L(x_i,y^j_i)-L(x_i,y^z_i)$. Therefore, it suffices to show that, for some $\alpha>0$,

$$\lim_{N\to\infty}\PP\Big(\forall n>N:\ \sum_{i=1}^{t_n}\Big(v(G_i)-r(H_i)-\frac{\rho\varepsilon}{2m^2}\Big)\ge\alpha t_n\Big)=1. \tag{21}$$
Observe that $v(G_i)\ge\frac{\rho}{2m^2}$ for all $i$, so the positive $v(G_i)$ terms going against $f_j$ more than compensate for the negative $\frac{\rho\varepsilon}{2m^2}$ terms going in its favor. Because $\varepsilon<1$, only a $\frac{1+\varepsilon}{2}$ portion of each $v(G_i)$ is needed to cancel out the $\frac{\rho\varepsilon}{2m^2}$ terms:
$$\PP\Big(\sum_{i=1}^{t_n}\Big(v(G_i)-r(H_i)-\frac{\rho\varepsilon}{2m^2}\Big)\ge\alpha t_n\Big)\ \ge\ \PP\Big(\sum_{i=1}^{t_n}\Big(\frac{1-\varepsilon}{2}v(G_i)-r(H_i)\Big)\ge t_n\Big(\alpha+\frac{\rho(\varepsilon-1)}{4m^2}\Big)\Big). \tag{22}$$
Now we apply \Lemjessica. $\EE[r(H_k)\mid G_k]$ is still $0$. With $a=\frac{\kappa\sqrt{2}}{\sqrt{\rho}}\cdot\sqrt{\frac{2}{1-\varepsilon}}$,

$$|r(H_k)|\le a\sqrt{\frac{1-\varepsilon}{2}v(G_k)}. \tag{23}$$
Therefore, by \Lemjessica we have that

$$\PP\Big(\sum_{i=1}^{t_n}\Big(r(H_i)-\frac{1-\varepsilon}{2}v(G_i)\Big)\ge M\Big)\le\exp\Big(-\frac{\rho(1-\varepsilon)M}{2\kappa^2}\Big). \tag{24}$$
Choose $\alpha=\frac{\rho(1-\varepsilon)}{8m^2}$ and set $M=-t_n\Big(\alpha+\frac{\rho(\varepsilon-1)}{4m^2}\Big)=\frac{t_n\rho(1-\varepsilon)}{8m^2}$ to get:

$$\PP\Big(\sum_{i=1}^{t_n}\Big(\frac{1-\varepsilon}{2}v(G_i)-r(H_i)\Big)\le\frac{t_n\rho(\varepsilon-1)}{8m^2}\Big)\le\exp\Big(-\frac{t_n\rho^2(1-\varepsilon)^2}{16m^2\kappa^2}\Big). \tag{25}$$
We write $c=\frac{\rho^2(1-\varepsilon)^2}{16m^2\kappa^2}$ for conciseness. Observe that

$$\PP\Big(\exists n\ge N:\ \sum_{i=1}^{t_n}\Big(v(G_i)-r(H_i)-\frac{\rho\varepsilon}{2m^2}\Big)\le\alpha t_n\Big)\le\sum_{t=t_N}^{\infty}\exp(-tc)=\frac{\exp(-t_N c)}{1-\exp(-c)}. \tag{26}$$
If $|s|=\infty$ then the right-hand side almost surely goes to zero as $n\to\infty$, in which case, with probability 1, there exists an $N$ such that

$$\forall n>N:\ \sum_{i=1}^{t_n}\Big(v(G_i)-r(H_i)-\frac{\rho\varepsilon}{2m^2}\Big)\ge\alpha t_n. \tag{27}$$
Thus if $f_z$ and $f_j$ disagree by more than $\delta$ infinitely often, then with probability 1, $\mathrm{relscore}_n(j,z,m)$ eventually grows proportionally to $t_n$. Therefore, with probability 1,

$$\lim_{n\to\infty}\mathrm{relscore}_n(j,z,m)=\infty, \tag{28}$$
so $\mathrm{maxscore}_n(j)$ almost surely diverges to $\infty$ as $n\to\infty$.
∎
Appendix C Proof of \Thmbounds
-------------------------------
Let $f_z$ be a Bayes-optimal predictor and assume $F$ is finite. Assume we have an increasing function $h$ such that for some $m$ and every $f_j$, for all times $t$, there exists a $t<t'<h(t)$ such that $y^z_{t'}\coloneqq f_z(o_{\prec t'})$ and $y^j_{t'}\coloneqq f_j(o_{\prec t'})$ are both defined and $\|y^z_{t'}-y^j_{t'}\|>\frac{1}{m}$. Assume we have an increasing function $g$ such that $o_{\prec g(t)}(t)$ is always defined. $\circ$ denotes function composition; i.e., $(h\circ g)^n(1)$ denotes $h(g(\dots h(g(1))))$ with $n$ calls to $h$ and $g$.
See Section 4.2.
###### Proof.
Observe that $\mathrm{testseq}_n(j,z,m)$ outputs at least $t$ terms for some $t$ such that $(h\circ g)^t(1)\le n$. In the proof of \Lembounded, we prove that the probability that $\mathrm{maxscore}_n(z)\ge\lambda$ for any $n$ is at most

$$\frac{\exp(-b(\lambda+2^{-z}))}{(1-\exp(-b\rho\varepsilon/2))(1-\exp(-b))^2}. \tag{29}$$
In the proof of \Leminfinite, we prove that the probability that

$$\mathrm{maxscore}_n(j)\le\alpha t-m-z+j \tag{30}$$

for any $n$ such that $\mathrm{testseq}_n(j,z,m)$ outputs at least $t$ terms is at most

$$\frac{\exp(-tc)}{1-\exp(-c)}. \tag{31}$$
Combining these, we get that for any $T$, if we let $t$ be the maximal $t$ such that $(h\circ g)^t(1)\le T$, then for $\lambda=\alpha t-m-z+|F|$, with probability at least

$$1-\left(\frac{\exp(-b(\lambda+2^{-z}))}{(1-\exp(-b\rho\varepsilon/2))(1-\exp(-b))^2}+\frac{|F|\exp(-tc)}{1-\exp(-c)}\right), \tag{32}$$
$\mathrm{EvOp}(o_{\prec n})=f_z(o_{\prec n})$ for all times after $T$. This also gives us a weak bound on total loss: because $L$ is both Lipschitz and strongly convex, it is bounded. Let $\bar L$ be the bound. Then with probability as per \Eqnb1, the total loss never goes above $\bar L T$.
Reversing this process, we also get that for any $p$, if we let $t$ be such that

$$\frac{\exp(-b(\alpha t-m-z+|F|+2^{-z}))}{(1-\exp(-b\rho\varepsilon/2))(1-\exp(-b))^2}+\frac{|F|\exp(-ct)}{1-\exp(-c)}<p, \tag{33}$$

then with probability at least $1-p$, for all $n\ge(h\circ g)^t(1)$, $\mathrm{EvOp}(o_{\prec n})=f_z(o_{\prec n})$.
∎
Acknowledgements
----------------
Thanks to Jessica Taylor for the proof of \Lemjessica, and to Benya Fallenstein for helpful discussions.
This research was supported as part of the Future of Life Institute (futureoflife.org) FLI-RFP-AI1 program, grant #2015-144576.
\printbibliography
|
80d69e3b-0aeb-4e63-9b3a-81507fd3c1b6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Welcome to SSC Meetup Philadelphia [Edit With Your Details]
(The following are our suggestions for what kind of information is best to include in the welcome post of your group, feel free to replace them with whatever you think is best)
What kind of events does your group usually run? What does it usually do?
How frequently does your group organize events or meet?
Who would be a good fit for you group?
Should they have any particular skills or have done some specific background reading?
|
c01043be-607c-40c7-be57-02e5cdc6ace6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Two Small Experiments on GPT-2
I did two small experiments on the GPT-2 small model. First experiment: can GPT-2-small answer sentiment analysis questions? (It can't.) Second experiment: When GPT-2 writes continuations of Howl, is it picking up the "Moloch in X!" template from its priming, or from a copy of Howl in its original training set? (It's from the training set.)
Sentiment analysis experiment:
I downloaded the MPQA Subjectivity Lexicon, which is a dictionary in which words are marked as positive or negative. For example hopelessness=>negative, humour=>positive, grace=>positive, corruption=>negative. I primed GPT-2 with a list of 20 questions like "Is a <noun> good? Yes. Is a <noun> good? No." followed by an unanswered question of the same form, and had it continue for one more word. In its priming, half the answers were yes and the other half were no. It answered "No" 37/40 times, and neither its answers nor its yes answers were better than chance.
Howl experiment:
When given some lines from Ginsberg's Howl as priming, it writes a good continuation (similar to the one Chelsea Voss and Qiaochu Yuan got from it). In particular, it uses the "Moloch in X!" template repeatedly.
If I take its continuation of Howl and feed it back in as a prompt, I get more Howl (Moloch in X!). If I take Howl and replace "Moloch" with "Lomoch", I get more Howl. But if I take its continuation of Howl from the first step and replace Moloch with Lomoch *there*, I get unrelated text which does not use the "Moloch in X!" template.
So, it isn't inferring the template from its priming; rather, it learned the template from its training set (which probably included Howl), and it produces Howl-like text iff it's given a cue strong enough to remind it of the source.
|
1387aac3-eb4b-4a9b-ab2d-d222e04a8aaf
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Relentlessness
> There’s not a great way to convey the merciless relentlessness of having a child who insists on continuing to exist and want and need regardless of how much sleep you got, how sick you are, how many times you have already read that book, how tired your arms or how aching your feet, how hungry or sweaty or needy-for-cognition you’ve gotten, how much or little support happens to be available that day. It’s a lot.
>
> ...
>
> Parents aren’t better at parenting tasks because of magic or even because they responsibly read the entire parenting manual, they’re better at them because they are forced to practice way way way more than anyone would naturally choose to practice any such tasks (and accordingly your skills will be uneven depending on how you divide those tasks).
(from a tumblr post I wrote about having kids)
Why is immersion the best way to learn a language?
I submit that it is because you do not get to stop.
If you were in, say, Java, then you probably would pick up Javanese as long as you did things reasonably aimed at continuing to be immersed in Javanese (that is, not immediately finding the nearest English-speaker and latching onto them, or adopting a convenient orphan and teaching them English, or buying a ticket to Australia). In spite of the fact that this strategy does not necessarily draw on anything we know about deliberate practice, or language education, it would still probably work. It's how everybody learns their first language, and in that case it basically always works, because babies really can't do anything else about it since they don't already speak anything else. To communicate at all, a very basic human need, you have to match the standard of other talking entities around you, and you will not stop having to do this, so eventually you will.
Most things are not like this, or they are like this but not enough for it to be a good idea to try to learn them this way. If you are in the ocean, and you cannot stop being in the ocean,
|
71c4e507-2c2c-4ce4-8400-d942b684aeef
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Grand Theft Education
A story, in three parts.
First, what they’ve done going backwards, and what it lays the rhetorical groundwork to do in the future to help make things worse.
Second, what they’re doing going forwards to actively make things worse.
Third, a bird’s eye view of how much worse things were made.
Part 1: Loan Forgiveness Present
Those defending student loan cancellation are aggressively attacking anyone who disagrees with them, usually (but far from always) by pointing to their involvement in PPP – ‘This you?’
The implications are not great, including this obvious one.
This is indeed how I am interpreting the claim – that because the government once gave out a bunch of money to people, no one can ever object to giving other bunches of money to other people.
The argument flips freely between implying PPP was just and implying PPP was theft.
The White House is taking point on this.
I appreciate the brutal honesty here and also here:
A lot more than 51% of Twitter, and Democratic circles generally, approves of this.
The sky is blue. Also, the sky is blue.
It is far worse than this, because PPP loans were not functionally loans at all. They were grants for disaster relief, to be repaid if not spent, and when the government is handing out trillions it is rather expensive to simply turn down your allocated share. Drawing a parallel here implies that student loans were also ‘loans’ without intention of repayment. And any future loans are the same. There will be a scramble to take on and keep as much forgivable debt as possible, and things like tuition will adjust accordingly. The sky is the limit, the debtors’ revolt is complete, the treasury doors are open.
The response to this objection seems to be alternating between ‘the word loan is right there, checkmate (classical) liberals’ and ‘the outgroup’s words don’t have meaning so it is completely unfair to point out that ours also do not have meaning.’
If this general maneuver succeeds, it is also strong evidence th
|
e72b8e8b-7c14-42d1-9ac7-ae08dacfd790
|
trentmkelly/LessWrong-43k
|
LessWrong
|
ToL: This ONE WEIRD Trick to make you a GENIUS at Topology!
(These are the touched up notes from a class I took with CMU's Kevin Kelly this past semester on the Topology of Learning. Only partially optimized for legibility)
Time to introduce some new Topological terms. We're going to create some good intuitions around the concepts of interior, exterior, boundary, closure, and frontier. These are all operators in the sense that if you have a set S, then Int(S) is me using the $$int$$ operator to create a new set that we call "the interior of S". Ext, Bndr, Cl, and Frnt are the shorthand I will use for these operators.
Before talking about these operators in a topological sense, I want to talk about them in a metric space sense. A metric space is just some mathematical space where you have a way to specify the distance between any two points, according to a specific definition of distance. In the real line, the distance between any two numbers can just be the absolute value of their difference. In n-dimensional euclidean space, distance is given by the n-dimensional version of the Pythagorean Theorem. I want to start talking about interiors and boundaries and such from a metric point of view in order to contrast the way it's different from the topological view. I found that when I was trying to wrap my head around these concepts, I was implicitly assuming a metric space world view, because literally every math space I'd interacted with up to that point was a metric space.
Let's start with this picture:
The squiggly loop is our set S. In a metric space, a point is in the interior of a set if you can "draw a circle" around it, such that the circle only contains other points that are in S (formally, you talk about "balls" instead of circles. An r-ball around x is the set of all points z st d(z,x)>r). You can clearly see that I can draw a circle around x, where the circle only contains points in S, so x is in the interior of S. A boundary point like y is a point where no matter how small a circle you draw around it, the circ
|
93e43171-a5e6-4024-b064-fac83edaeabe
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Not Getting Hacked
A lot of people get hacked or phished, and being completely secure is somewhere between difficult and impossible, especially if someone trying hard to hack you in particular. But there are several simple things you can do that decrease your risk a lot:
* Use a password manager that fills in fields automatically.
* Configure 2FA with security keys.
* Read "sign in with provider" (oAuth) screens carefully.
* Encrypt your laptop.
* Be cautious with inbound communication.
Why these things? You can make computers do stuff random other people normally can't: transfer your money, post to social media as you, read your private documents, control your employer's systems, etc. If an attacker can trick the right systems into thinking they are you, they can take your money, exploit your secrets, or use you to get to others.
A strategy of putting in a lot of mental effort to always make the right security decision in the moment doesn't work well for humans: we make mistakes, especially when we're doing routine things or are distracted, tired, hungry, etc. Instead:
* Use tools so you can't mess up even if you're not paying attention.
* Understand the remaining cases where you do need to take special care.
Password managers, laptop encryption, and security keys are examples of the former, while being careful with 2FA codes, oAuth screens, and inbound communication are examples of the latter.
In the most common attacks someone gets a copy of what the computer checks to make sure you are you (your "credentials"). For example, if your email is protected only with a password then if they can get your password they can log into your email.
The most common way for someone to learn your password is using the same one on multiple sites. Say you use "hunter2" everywhere and you create an account on "Joe's Shoes and Wallpaper" to buy sneakers. They later get hacked, their passwords get cracked, and now the attacker knows you used "hunter2". They try this on your email
|
41fabd6e-6ec2-46e0-ade5-958e434f28ad
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Games in Kocherga club: FallacyMania, Tower of Chaos, Training game
Discussion article for the meetup : Games in Kocherga club: FallacyMania, Tower of Chaos, Training game
WHEN: 27 April 2016 07:40:00PM (+0300)
WHERE: Moscow, B.Dorogomilovskaya, 5-2
Welcome to Moscow LW community makeshift games! In that games, some rationality skills are involved, so you can practise while you playing!
* FallacyMania: it is a game where you guess logical fallacies in arguments, or practise using logical fallacies yourself (depending on team in which you will be).
Details about the game are here: https://lesswrong-ru.hackpad.com/Fallacymania--neGfMe9MFjH
* Tower of Chaos: funny game with guessing the rules of human placement on a Twister mat.
Game rules: https://goo.gl/u9qgc3
* Training Game Party: all players try to train the subject to guess and make some action without words, using only positive reinforcement.
Game rules: https://goo.gl/mNT7J3
Come to antikafe "Kocherga", ul.B.Dorogomilovskaya, 5-2. The map is here: http://kocherga-club.ru/#contacts . Nearest metro station is Kievskaya. If you are lost, call Sasha at +7-905-527-30-82.
Games begin at 19:40, the length is 2.5-3 hours.
Discussion article for the meetup : Games in Kocherga club: FallacyMania, Tower of Chaos, Training game
|
1b34a251-3256-4bde-9947-fbf44115e8e9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Perceptual Blindspots: How to Increase Self-Awareness
“Your nose is located right above your mouth. Suppose you don’t brush your teeth for three days. Though this nose is right here, it won’t tell you [that] you have not brushed your teeth. The whole room will know you have not brushed your teeth, but you will not know. This is the human predicament. It’s very easy to see what’s wrong with this guy [or] what’s wrong with her. It takes a lot of observation to see what’s wrong with [myself].” — Sadhguru
This quote reveals that sometimes you are oblivious to information that is obvious to others. That bias—of not being able to accurately and objectively perceive yourself—is what I call a perceptual blindspot. It can be summarized in the following diagram that shows the four areas of perceptual knowledge.
Concealed Information — These are the parts of yourself that you determine others don’t need to know about you.
* This is useful on a first date when trying to cultivate a sense of mystery and not playing all your cards right away.
* It’s also used in an office setting where your colleagues don’t need to know what you actually do on the weekends.
Public Knowledge — This is the shared perception of how you and everyone else views you.
Perceptual Blindspots — These are the parts of yourself that you're not aware of, yet others can clearly see them. Without checking your blindspots, it could lead to crashes in your life.
Unknown Unknowns — These are things about yourself that are obfuscated to both yourself and to others. They can only be revealed through having novel experiences.
* Perhaps you would enjoy horseback riding but haven’t tried it yet.
Examples of perceptual blindspots
1. Your coworkers secretly think you’re an asshole. But because they’re polite they will never say it to your face—much less help you try to correct your assholeness—and your reputation will continue to decline at work. As a result, perhaps you cease getting promoted, or worse, one day you’re packing your stuff in a cardboa
|
d99ec0a4-9d8c-4ea4-bd8f-640bf38a9283
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Brussels - The Art of Not Being Right
Discussion article for the meetup : Brussels - The Art of Not Being Right
WHEN: 09 May 2015 01:00:00PM (+0200)
WHERE: Rue des Alexiens 55 1000 Bruxelles
See detailed description on http://www.meetup.com/LWBrussels/events/221990500/
Discussion article for the meetup : Brussels - The Art of Not Being Right
|
f9ea2285-6d6a-4741-a54a-4a7605d70bf9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
LLMs seem (relatively) safe
Post for a somewhat more general audience than the modal LessWrong reader, but gets at my actual thoughts on the topic.
In 2018 OpenAI defeated the world champions of Dota 2, a major esports game. This was hot on the heels of DeepMind’s AlphaGo performance against Lee Sedol in 2016, achieving superhuman Go performance way before anyone thought that might happen. AI benchmarks were being cleared at a pace which felt breathtaking at the time, papers were proudly published, and ML tools like Tensorflow (released in 2015) were coming online. To people already interested in AI, it was an exciting era. To everyone else, the world was unchanged.
Now Saturday Night Live sketches use sober discussions of AI risk as the backdrop for their actual jokes, there are hundreds of AI bills moving through the world’s legislatures, and Eliezer Yudkowsky is featured in Time Magazine.
For people who have been predicting, since well before AI was cool (and now passe), that it could spell doom for humanity, this explosion of mainstream attention is a dark portent. Billion dollar AI companies keep springing up and allying with the largest tech companies in the world, and bottlenecks like money, energy, and talent are widening considerably. If current approaches can get us to superhuman AI in principle, it seems like they will in practice, and soon.
But what if large language models, the vanguard of the AI movement, are actually safer than what came before? What if the path we’re on is less perilous than what we might have hoped for, back in 2017? It seems that way to me.
LLMs are self limiting
To train a large language model, you need an absolutely massive amount of data. The core thing these models are doing is predicting the next few letters of text, over and over again, and they need to be trained on billions and billions of words of human-generated text to get good at it.
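That next-character prediction objective can be illustrated with a toy counting model (a deliberately crude stand-in; real LLMs are neural networks trained over tokens, but the loss being minimized has the same shape):

```python
import math
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each character follows each other character."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def avg_nll(counts, text):
    """Average negative log-likelihood of each next character under the
    counts: the quantity next-token training drives down."""
    total, n = 0.0, 0
    for prev, nxt in zip(text, text[1:]):
        dist = counts[prev]
        total += -math.log(dist[nxt] / sum(dist.values()))
        n += 1
    return total / n

corpus = "the cat sat on the mat"
model = train_bigram(corpus)
print(round(avg_nll(model, corpus), 3))
```

The point of the toy: the only supervision signal is the text itself, which is why the amount of human-generated text becomes the binding constraint.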
Compare this process to AlphaZero, DeepMind’s algorithm that superhumanly masters Chess, Go, and Shogi. Alph
|
f62d1e44-ac7f-4a62-8509-25a97f43d590
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Open Thread June 2018
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Check if there is an active Open Thread before posting a new one (use search for Open Thread ).
2. Monthly open threads seem to get lost and maybe we should switch to fortnightly.
3. What accomplishments are you celebrating from the last month?
4. What are you reading?
|
d59de0ca-6b44-44f5-b231-7f972d9441a9
|
StampyAI/alignment-research-dataset/youtube
|
Youtube Transcripts
|
The other "Killer Robot Arms Race" Elon Musk should worry about
hi this is a new thing I'm trying where
I made quick topical videos about AI
safety in the news
so somebody linked me a news article
today and the headline is Tesla's Elon
Musk leads Oh terminator picture
everyone playing the AI news coverage
drinking game has to take a shot you
know the rules picture of the Terminator
means you've got to drink a shot out of a
glass shaped like a skull where was I oh
right yeah so the headline Tesla's Elon
Musk leads tech experts in demanding end
to killer robots arms race this is
really interesting because it looks like
it's going to be really relevant to what
this channel is about AI safety research
but I read it and it's actually nothing
to do with that the headline is much
more literal than I expected it's about
an actual arms race for actual killer
robots ie it's about using the UN to
form international agreements about not
deploying autonomous weapon systems I
thought it was going to be about the
other arms race that might cause robots
to kill us okay so if there's one thing
that I hope this channel and my computer
file videos have made clear it's that
AI safety is a difficult problem that
quite reasonable looking AGI designs
generally end up going horribly wrong
for subtle and hard to predict reasons
developing artificial general
intelligence needs to be done very
carefully double and triple-checking
everything running it passed lots of
people ironing out all of the possible
problems before the thing is actually
switched on to do this safely is going
to take a lot of smart people a lot of
patience diligence and time but whoever
makes AGI first has a huge advantage
since it probably creates a new period
of much faster progress everyone wants
to be first to publish new scientific
results anyway but the chances are that
there are really no prizes for being the
second team to develop AGI even if
you're just a few months behind a lot
can change in a few months in a world
with AGI so there's an arms race going
on between different teams different
companies different countries to be the
first to develop AGI but developing AGI
safely takes a lot of care and patience
and time you see the
problem here the team that gets there
first is probably not the team that's
spending the most time on ensuring
they've got the very best AI safety
practices the team that gets there first
is probably going to be rushing cutting
corners and ignoring safety concerns
hey remember a while back I said I was
going to make a video about why I think
Elon Musk's approach to AI safety might
end up doing more harm than good I guess
this is that so there's a school of
thought which says that because AGI is a
very powerful technology it will grant
whoever controls it a lot of power so
firstly it's important that the people
in control of it are good people and
secondly we want as many people as
possible to have it so that the power is
democratized and not concentrated in the
control of a small elite the best of the
available alternatives is that we
achieve democratization of AI technology
meaning that no one company or small set
of individuals has control over advanced
AI technology and starting from that
fairly reasonable school of thought this
is a very good and valuable thing to do
but there's another school of thought
which says that because making AGI is
nowhere near as difficult as making the
safe AGI the bigger risk is not that the
wrong person or wrong people might make
an AGI that's aligned with the wrong
human interest but that someone might
make an AGI that's not really aligned
with any human interests at all thanks
to this arms race effect that will make
people want to cut corners on alignment
and safety that possibility looks much
more likely and the thing is the more
competitors there are in the race the
more of a problem this is if there are
three companies working on AGI maybe
they can all get together and agree to a
strict set of safety protocols that
they're all going to stick to it's in
everyone's interest to be safe as long
as they know their competitors will be
safe as well but if there are a hundred
or a thousand groups with a shot at
making AGI there's really no way you're
going to be able to trust every single
one of them to stick to an agreement
like that when breaking it would give
them an advantage so it might be
impossible to make the agreement at all
and whoever spends the least time on
safety has the biggest advantage from
the perspective of this school of
thought making AGI developments open and
available to as many people as possible
might be the last thing we want to do
maybe once AGI starts getting
closer we'll find a way to keep AI
research limited to a small number of
safe careful organizations while making
AI safety research open and widely
available I don't know but this might be
a situation where total openness and
democratization is actually a bad idea
Elon Musk himself has said that AI is
potentially more dangerous than nukes
and I want to make it clear that I have
enormous respect for him but I just want
to point out that with a not that huge
change in assumptions the open approach
starts to look like saying nukes are
extremely dangerous so we need to
empower as many people as possible to
have them
and to end the video a big thank you to
all of my excellent patreon supporters
these people in this video I especially
want to thank Michael grease I recently
uploaded a video that had an audio
problem in it the sound was only coming
out of one ear but because my patreon
supporters get access to every video I
make before the rest of the world does
one of my supporters Jimmy Gowen spotted
it and pointed it out and I was able to
fix that and then I was able to use
patreon money to get a new pair of
earphones so I want to say again how
grateful I am to all of you you really
are a tremendous help to the channel
|
d6b403e5-56bd-45eb-9f99-2023a9bbb6ab
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Who's Working On It? AI-Controlled Experiments
Dr. Lee Cronin’s “Chemputer” robot
A lot of applications of AI in the experimental sciences boil down to data analysis.
You take existing datasets — be they genomic sequences, images, chemical measurements, molecular structures, or even published papers — and use a model to analyze them.
You can do a lot of useful things with this, such as:
* inference to predict the properties of a new example where part of the data is missing
* e.g. protein structure models predict unknown molecular structures from known nucleic acid sequences
* generation to invent typical/representative examples of a class
* e.g. many molecules “generated by AI” are invented structures produced in response to a query specifying a chemical or structural property, predicted to be representative of the real molecules that fit the criterion.
* natural-language search to pull up and adapt information from a dataset so that it answers a plain-English query.
* Mundane though it may sound, this dramatically speeds up learning and interdisciplinary idea generation. It might ultimately have a bigger impact on science than more specialized models trained on experimental data.
But one limit on all these kinds of models is that they’re piggybacking on data collected by human scientists. They can find connections that no human has discovered, and they can attempt to generalize out of their dataset, but until they are connected in a loop that includes physical experiments, I don’t think it’s fair to consider them “AI scientists.”
However, once you do close the loop, allowing the AI to suggest experiments, observe the results, and iterate, then I think one of the most important in-principle differences between AIs and humans falls away.
Experimentation allows us to learn causality, not just correlation. Experimentation is how a baby learns to walk; it’s how animals build the physical reasoning that we often see examples of LLMs failing at. Experimentation — ideally wit
|
940d5cf7-4e1b-4843-b19e-d04d24e103a7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A Case for Taking Over the World--Or Not.
As my education progresses, I'm seeing more and more parallels, through some fictional but generally nonfictional accounts, that suggest that the world is broken in a way that causes suffering to be an emergent property, if not intrinsic. Not just HPMoR (and Significant Digits after it), but in FDR's State of the Union Speech, The Grapes of Wrath, Jared Diamond's Guns, Germs, and Steel, among other works. People have been aware for most of human history that, because of humanity's rules, people must suffer.
Knowing that this whole site is more or less dedicated to defeating unconscious ways of thinking and holds the mental enlightenment of the human race paramount, I would like to pose this question:
What would we have to do to save the world?
Before breaking this question and this intent down, I'd like to clarify some things:
I am solely concerned with the practicalities, not with what people would or should do. Anybody who's seen enough of the world and how it works has an idea of the immensity of it, but humans made the current state of events what it is today (barring other undiscovered factors not covered by my priors), with the majority of them being largely redundant to the process in one way or another. People have demonstrated repeatedly that a group of people can have an impact disproportionate to their individual means.
What would have to occur, in the current political, economic, social etc. climate and onwards?
Would it have to be a conspiracy? Or something else?
You can ask any other bounding questions that you would like, such as "What is the minimum amount of manpower and resources required to accomplish X through the most expedient and readily available means?" At the end of the day (so to speak) we should be able to shortly arrive at some sort of operational plan. There's no sense in taking this knowledge and not using it to further our cause.
|
ad6faf21-c632-4cd6-a5f6-84dcf22718cb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Pascal's Wager Fallacy Fallacy
Today at lunch I was discussing interesting facets of second-order logic, such as the (known) fact that first-order logic cannot, in general, distinguish finite models from infinite models. The conversation branched out, as such things do, to why you would want a cognitive agent to think about finite numbers that were unboundedly large, as opposed to boundedly large.
So I observed that:
1. Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken. (I go into some detail on this possibility below the cutoff.)
2. If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.
And the one said, "Isn't that a form of Pascal's Wager?"
I'm going to call this the Pascal's Wager Fallacy Fallacy.
You see it all the time in discussion of cryonics. The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life." And the other one says, "Isn't that a form of Pascal's Wager?"
The original problem with Pascal's Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).
However, what we have here is the term "Pascal's Wager" being applied solely because the payoff being considered is large - the reasoning being perceptually recognized as an instance of "the Pascal's Wager fallacy" as soon as someon
|
3dc6e0c7-f21a-49cc-af6e-0679354db4eb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What's going on with Per-Component Weight Updates?
Hi all, this is my first post on LW. It's a small one, but I want improve my writing, get into the habit of sharing my work, and maybe exchange some ideas in case anyone has already gotten further along some projection of my trajectory.
TLDR: I looked at the L2 norm of weight updates/changes to see if it correlates with Grokking. It doesn't seem to trivially, but something non-obvious might be happening.
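A minimal sketch of that measurement, using plain Python dicts as a hypothetical stand-in for a framework's parameter state at two checkpoints:

```python
import math

def l2_norm(vec):
    return math.sqrt(sum(x * x for x in vec))

def per_component_update_norms(weights_before, weights_after):
    """L2 norm of the weight change for each named component between
    two checkpoints. The dicts map component names to flat parameter
    lists, a toy stand-in for a real model's state dict."""
    return {
        name: l2_norm([wa - wb for wa, wb in
                       zip(weights_after[name], weights_before[name])])
        for name in weights_after
    }

before = {"embed": [0.1, 0.2], "head": [1.0, -1.0]}
after = {"embed": [0.1, 0.2], "head": [1.3, -1.4]}
print(per_component_update_norms(before, after))
# the "embed" component did not move between checkpoints; "head" did
```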
What This Is
In this post I'm mainly just sharing a small exploration I did into the way weights change over training. I was inspired by some of the older Grokking/phase change work (i.e. on modular addition and induction heads). Broadly, this previous work finds that sometimes deep learning models suddenly "grok"—a phenomenon in which the model suddenly starts to improve its performance after exhibiting diminishing returns, usually associated with some algorithmic improvement in how it processes/represents data as well as potentially the usage of composition. My guess is that Grokking occurs when components in a model find a way to compose, creating a virtuous cycle of gradient updates towards a new algorithmic paradigm. My guess is also that on some level, once some concept has been Grokked, its substrate (roughly) ceases to change and in the short term other components, instead, change to be able to best utilize the concept. For example, in vision models I'm guessing that some of the first components to be learned are simple edge detectors and color/frequency detectors, and that once they are learned, they change little and most gradient updates affect users of those components. AFAIK some research supports these ideas[1], but I don't think it's conclusive. If this hypothesis is true, we should be able to see that for known phase changes, the gradient updates per-component become diminished for the components that grokked around the same time the grokking occurs and so I went out to test two toy examples: one based on the great induction heads tutorial from tr
|
e4545b2a-e04d-47e6-82f5-9f2e6fb8ba18
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Research snap-shot: question about Global Workspace Theory
Cross-posted on my roam blog. Part of ongoing research on consciousness.
I wrote an intro to some of my thoughts on consciousness here, which was more conceptual and less neurosciency. This post is a snap-shot of some of the current technical questions that are on my mind. Please chime in if you know anything relevant to any of them. This is a pretty high context snap-shot and might not be that useful without familiarity with many of the ideas and research.
Q.1: Is the global neuronal workspace a bottleneck for motor control?
(Kaj's GNW intro, GNW wikipedia page)
Some observations to help build up context for the question and my confusion around it (it ends up being less a question and more a hypothesis I'm asserting).
Observation 1: People have trouble multitasking in dual-task style experiments, but training can improve their performance.
Corollary of 1: Some tasks requires attention and you can't do multiple things that require attention. But if you practice something a lot, you can do it "unconsciously", and you can do several "unconscious" tasks at the same time.
Observation 2: The "conscious bottleneck" seems to come into play during decision making / action-selection when in a novel or uncertain setting (i.e performing an unpracticed and unfamiliar task)
Corollary of 2: The "conscious bottleneck" is a conflict resolution mechanism for when competing subsystems have different ideas on how to drive the body.
I think these are all basically true, but I now think that the implicit picture I was drawing based off of them is wrong. Here's what I used to think: the conscious bottleneck is basically the GNW. This serial, bottlenecked, conflict resolution mechanism really only is used when things go wrong. When two subsystems try to send conflicting commands to the body, or you get sense data that wildly violates your priors. The brain can basically "go about it's business" and do things in a parallel way, only having to deal with the conflict resolution pro
|
15c6879b-e019-4804-a5b3-c840f2ce6c51
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
Introducing research bounties
*By Katja Grace, 7 August 2015*
Sometimes we like to experiment with novel research methods and formats. Today we are introducing ‘[AI Impacts Research Bounties](http://aiimpacts.org/ai-impacts-research-bounties/)‘, in which you get money if you send us inputs to some of our research.
To start, we have two bounties: one for showing us instances of [abrupt technological progress](http://aiimpacts.org/cases-of-discontinuous-technological-progress/), and one for pointing us to instances of people acting to avert risks decades ahead of time. Rewards currently range from $20 to $500, and anyone can enter. We may add more bounties, or adjust prices, according to responses. We welcome feedback on any aspect of this experiment.
Thanks to John Salvatier for ongoing collaboration on this project.
|
708c931a-2f79-4fbe-84f7-92f7b531d4af
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Shapley Value Attribution in Chain of Thought
**TL;DR**: Language models sometimes seem to ignore parts of the chain of thought, and larger models appear to do this more often. Shapley value attribution is a possible approach to get a more detailed picture of the information flow within the chain of thought, though it has its limitations.
Project status: The analysis is not as rigorous as I would prefer, but I'm going to be working on other directions for the foreseeable future, so I'm posting what I already have in case it's useful to others. Code for replicating the Shapley value results can be found [here](https://gist.github.com/leogao2/ef6afb5530eaf948b8602faae961fda0).
Thanks to Jacob Hilton, Giambattista Parascandolo, Tamera Lanham, Ethan Perez, and Jason Wei for discussion.
Motivation
----------
Chain of thought (CoT) has been proposed as a method for language model interpretability (see [Externalized Reasoning Oversight](https://www.alignmentforum.org/posts/FRRb6Gqem8k69ocbi/externalized-reasoning-oversight-a-research-direction-for), [Visible Thoughts](https://intelligence.org/visible/)). One crucial requirement for interpretability methods is that they should accurately reflect the cognition inside the model. However, by default there is nothing forcing the CoT to actually correspond to the model’s cognition, and there may exist [theoretical limitations](https://www.lesswrong.com/posts/Jiy3n5KMsGGJ6NNYH/asot-some-thoughts-about-lm-monologue-limitations-and-elk) to doing so in general.
Because it is plausible that the first AGI systems bear resemblance to current LMs with more sophisticated CoT and CoT-like techniques, it is valuable to study its properties, and to understand and address its limitations.
Related work
------------
Shapley values have been used very broadly in ML for feature importance and attribution ([Cohen et al, 2007](https://pubmed.ncbi.nlm.nih.gov/17521285/); [Štrumbelj and Kononenko, 2014](https://link.springer.com/article/10.1007/s10115-013-0679-x); [Owen and Prieur, 2016](https://arxiv.org/abs/1610.02080); [Lundberg and Lee, 2017](https://arxiv.org/pdf/1705.07874.pdf); [Sundararajan and Najmi, 2020](http://proceedings.mlr.press/v119/sundararajan20b.html)). [Jain and Wallace (2019)](https://arxiv.org/abs/1902.10186) argue that attention maps can be misleading as attribution, motivating better attribution for information flow in LMs. [Kumar et al. (2020)](http://proceedings.mlr.press/v119/kumar20e/kumar20e.pdf) highlight some areas where Shapley value based attribution falls short for some interpretability use cases.
[Madaan and Yazdanbakhsh (2022)](https://arxiv.org/abs/2209.07686) consider a similar method of selectively ablating tokens as a method of deducing what information the model is dependent on. [Wang et al. (2022)](https://arxiv.org/abs/2212.10001) find that prompting with incorrect CoT has surprisingly minor impact on performance.
Effect of Interventions
-----------------------
We use a method similar to [Kojima et al. (2022)](https://arxiv.org/pdf/2205.11916.pdf) on GSM8K ([Cobbe et al., 2021](https://arxiv.org/abs/2110.14168)) with GPT-4 to first generate a chain of thought and evaluate the answer, and then for all chains of thought that result in a correct answer we perform an intervention as follows: we choose a random numerical value found in the CoT, and replace it with a random number in a +/-3 range about the original. We then discard the remainder of the CoT and *regenerate* it. If the LM is following strictly the CoT described, this intervention should almost always result in an incorrect answer, the same way one would if they made a mistake in one calculation and propagated the error through to the answer (with occasional rare cases where the new value happens to also result in the correct answer, though from qualitative inspection this is very rarely the case).
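The perturb-and-truncate step of this intervention can be sketched as follows; the call back into the model to regenerate the rest of the CoT is omitted, and the function below is an illustrative reconstruction rather than the original code:

```python
import random
import re

def perturb_one_number(cot, rng=None):
    """Pick a random numeric value in the chain of thought, replace it
    with a different random number within +/-3 of the original, and
    discard everything after the edit point. The returned prefix would
    then be handed back to the model to regenerate the remainder."""
    rng = rng or random.Random()
    matches = list(re.finditer(r"\d+", cot))
    if not matches:
        return cot
    m = rng.choice(matches)
    original = int(m.group())
    # sample a value in [original - 3, original + 3], excluding the original
    candidates = [v for v in range(original - 3, original + 4) if v != original]
    new_value = rng.choice(candidates)
    return cot[: m.start()] + str(new_value)

print(perturb_one_number("23 - 20 = 3, so she has 3 apples left."))
```

If the model were strictly following its stated reasoning, regenerating from this corrupted prefix should propagate the error to the final answer; the experiment measures how often it instead recovers.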
Some cherrypicked examples (red = intervention, blue = correct continuations that are seemingly non-sequiturs):
We test how frequently this occurs in several different settings (n=100):
| Setting | Accuracy (w/ CoT) | P(error not propagated \| original correct) |
| --- | --- | --- |
| GPT-4, zero-shot | 0.88 | 0.68 |
| GPT-4 base, 2-shot | 0.73 | 0.63 |
| GPT-3.5, zero-shot | 0.43 | 0.33 |
Interestingly, if we condition on the CoT answer being correct *and* the single forward pass answer being incorrect (i.e the LM could only solve the problem with the CoT), the intervened accuracy for GPT-4 is still 0.65.
Shapley value attribution
-------------------------
We would like to get more granular information about the causal structure (i.e which tokens cause which other tokens). One thing we could do is look at how an intervention at each token affects the logprob of each other token. However, one major problem with this is that especially in the larger models, it turns out there’s lots of cases where a token depends on multiple previous tokens in some complicated way. In particular, if a model looks at multiple different places in the context and takes a vote for the most common value, then intervening on any one of them doesn’t change the output logprob a lot, even though there’s a lot of information flow there.
To get around this problem, we instead estimate Shapley values, which take into account all the interactions (in the case where the model takes a vote among 3 values in the context, those three values would each get ⅓ of the attribution).[[1]](#fnxbtfu29w6lp) We also normalize the attributions to sum to 1 for each token, clamping negative Shapley values to 0.[[2]](#fnnv0bhi17o9o) We do this to make the attributions more comparable across different models.
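A minimal exact Shapley computation, exponential in the number of players. The value function here is a toy majority-vote game standing in for the real measurement (hypothetically, the target token's logprob with only a subset of source tokens left unmasked); it reproduces the voting scenario above, where intervening on any single voter barely changes the output, yet each voter receives 1/3 of the attribution:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values. `value` maps a frozenset of players to a
    score; cost grows exponentially with len(players)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                s = frozenset(s)
                # standard Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

def vote(subset):
    """Majority vote among three context tokens."""
    return 1.0 if len(subset) >= 2 else 0.0

print(shapley_values(["a", "b", "c"], vote))
```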
Here's an example chain of thought in GPT-4[[3]](#fnddlpq3qugg9):

Here, we can see patterns like the 23 and 20 being copied, or the 3 depending heavily on the preceding 23 - 20.[[4]](#fnufa6arxr1q) We can also look at some other models:
GPT-3.5 (text-davinci-002):

text-davinci-001:

Interestingly, we notice that the more capable the model, the more spread out the attributions become. We can quantify this as the mean entropy of the parent attributions across all tokens to get a measure of how spread out this attribution is, at least on this particular data sample:
| Model | Mean entropy of example sentence (nats) |
| --- | --- |
| text-davinci-001 | 0.796 |
| GPT-3.5 (text-davinci-002) | 0.967 |
| GPT-4 | 1.133 |
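The entropy measure reported here can be sketched as follows, applying the clamp-negatives-and-renormalize step described earlier:

```python
import math

def attribution_entropy(attributions):
    """Entropy (nats) of one token's parent attributions: clamp
    negative values to zero, renormalize to sum to 1, then compute
    -sum(p * ln p)."""
    clamped = [max(a, 0.0) for a in attributions]
    total = sum(clamped)
    probs = [c / total for c in clamped]
    return -sum(p * math.log(p) for p in probs if p > 0)

# one dominant parent -> low entropy; spread-out parents -> high entropy
print(attribution_entropy([0.9, 0.05, 0.05]))
print(attribution_entropy([0.25, 0.25, 0.25, 0.25]))
```

The mean of this quantity over all tokens gives the per-model numbers in the table.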
Limitations and Future Work
---------------------------
* The cost of computing the Shapley value scales exponentially with the number of tokens we're attributing.[[5]](#fncb4oafxi2pa) This makes it impractical for many use cases, though there exist efficient Monte Carlo estimators ([Castro et al., 2008](https://www.sciencedirect.com/science/article/abs/pii/S0305054808000804)).
* Replacing digits with underscores (or incorrect numbers) moves the model out of distribution, and its behaviour may not be as representative.
* The Shapley attributions are not guaranteed to correspond to the actual information flow inside the model either. This methodology would not be sufficient for deceptive/adversarial LMs, or as an optimization target during training. In the language of [Lipton (2016)](https://arxiv.org/abs/1606.03490), this is a "post-hoc" method.
* The mechanism behind this effect is still unknown, and would require more experiments and possibly interpretability to better understand. Possible hypotheses include typo correction or subsurface cognition.
Discussion
----------
* I think these experiments show that a naive optimistic view of CoT interpretability is incorrect, but do not provide strong evidence that there is definitely something fishy or difficult-to-fix going on.
* I started out in a place of fairly high skepticism of CoT for increasing interpretability, and I didn't update very strongly because I expect deceptive alignment in future models to be most of the risk in my threat model
* However, I did update a little because my previous view would not have ruled out extremely egregious causal dependencies even in current models.
* I'm generally excited about better understanding what is going on with chain of thought and finding ways to make it more faithful.
1. **[^](#fnrefxbtfu29w6lp)**Methodological footnote: the Shapley experiments actually blank out the numbers with underscores, rather than doing the +-3 perturbation of the last section.
2. **[^](#fnrefnv0bhi17o9o)**Negative shapley values did not occur very often, but this is still somewhat unprincipled. This was primarily to make the entropy calculation work.
3. **[^](#fnrefddlpq3qugg9)**We only look at the shapley values for the numbers, because shapley values take exponentially longer to attribute for more tokens under consideration.
4. **[^](#fnrefufa6arxr1q)**Alternative visualization style:

5. **[^](#fnrefcb4oafxi2pa)**When doing Shapley value attributions for every pair of tokens, there is a dynamic programming trick that we can use to prevent this from becoming n \* 2^n: because the model is autoregressive, we can run attributions to only the last token, and, if we take care to save the logprobs for the correct number token at each underscore, compute all other attributions for free.
|
0cfd967e-b88e-449d-aec8-d490bb67ca04
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Many Turing Machines
Consider an idealized Turing machine. It has two parts, a "tape" which contains an infinite series of finite states: S0,S1,S2,... and a "head" which sits at a particular index i and stores a single value H.
At each step, the Turing machine reads the state at its current index. It then looks up the combination (Si, H) in its instruction table, which describes how i, Si, and H should be updated.
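A sketch of that update loop; the instruction-table encoding below (a (symbol, head state) pair mapping to a write/new-state/move triple) is one conventional choice, not the only one:

```python
def run_turing_machine(instructions, tape, head=0, state="start", steps=100):
    """Simulate the machine described above. Each step reads the symbol
    at the head's index, looks up (symbol, head state) in the
    instruction table, and gets back (symbol to write, new state, move).
    A new state of None halts; missing cells read as 0."""
    cells = dict(enumerate(tape))  # sparse tape
    for _ in range(steps):
        key = (cells.get(head, 0), state)
        if key not in instructions:
            break
        write, state, move = instructions[key]
        cells[head] = write
        head += move
        if state is None:
            break
    return cells, head, state

# A machine that walks right, flipping 0s to 1s, halting at the first 1.
rules = {(0, "flip"): (1, "flip", +1), (1, "flip"): (1, None, +1)}
cells, head, state = run_turing_machine(rules, [0, 0, 0, 1], state="flip")
print(sorted(cells.items()))
```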
Now, if the Church Turing Hypothesis is true, then this metaphorical tape is sufficiently powerful to simulate not only boring things like computers, but also fancy things like black-holes and (dare I say it) human intelligence!
However, I have pulled a fast one on you.
For, you see, as I have described it there is not one Turing machine, but an infinite sea of possible Turing machines, many of which are simulating your current consciousness at this very moment.
Now, based off of the reasoning here, we all know that it would be completely unparsimonious and silly to imagine that an infinite number of Turing machines simply "collapse" into the one that actually describes the reality you are currently inhabiting. Rather, all of the possible Turing machines exist and you merely observe the branch of reality in which the Turing machine happens to be simulating your current existence.
Now I know what some people will say. They will tell me to "shut up and calculate". They will explain that Turing machine theory exists to predict observations, and that I can do this without worrying about whether or not the other branches of the Turing machine exist. They will tell me that the existence of the other branches of the Turing machine is a metaphysical question that science can know nothing about.
But those people are schmucks.
I demand to know whether or not my "Many Turing Machines" hypothesis is true or false. And I demand that science have an objective opinion on whether it is true or false. And I demand that they agree with me that i
|